
Within the past few years, Southeast Asia (SEA) has witnessed an unprecedented surge in online scams and fraud cases. This problem is compounded by the proliferation of scam centres across the region, often linked to human trafficking and other serious crimes. In addition, there is a growing concern around the rapid evolution of scam typologies, cross-border and multi-sectoral nature of scam attack chains, and exploitation of technologies such as Artificial Intelligence (AI).
As noted in TFGI’s latest report on Building Resilience against Digitally-enabled Scams and Fraud in Southeast Asia, this growing complexity is just one of the systemic challenges facing the region. For example, AI-generated deepfake cases increased by 1,530% across the Asia-Pacific region between 2022 and 2023, with Vietnam experiencing a 25.3% rise in deepfake fraud. These developments raise urgent questions about how AI may be developed and deployed to safeguard trust and safety in the digital ecosystem.
To discuss the potential and governance of new AI-powered approaches to fighting scams, TFGI joined a panel discussion hosted by Google on AI x Scams: Cross-industry Approaches, Solutions and Challenges at the Global Anti-Scam Summit Asia 2025.
Moderator and Panellists
- Rachel Teo, Head of Government Affairs & Public Policy, Singapore; APAC Anti-Scams & Fraud, Google
- Rahul Vatts, Chief Regulatory Officer, Bharti Airtel
- Ming Tan, Senior Fellow and Founding Executive Director, Tech for Good Institute
- Joc-Cing Tay, Head of AI Strategy, Gogolook
- Snigdha Bhardwaj, Director & Global Head of Generative AI & Search, Google Trust & Safety
Key Takeaways:
1. With emerging technologies, including AI, online scams are growing in complexity, sophistication and scale.
While unauthorised threats such as malware, phishing, and hacking remain prevalent, authorised scams are a rising concern. The Singapore Police Force reported that almost 80% (78.8%) of total reported scams in Singapore for the first half of 2025 were authorised. When scammers combine tactics across online and offline channels, often leveraging AI technologies, individuals can be deceived into willingly giving consent. For instance, deepfakes of Singapore’s then-Deputy Prime Minister Lawrence Wong endorsing a suspicious investment product circulated online in 2024.
The cross-border nature of scams adds another layer of complexity. A scam may originate in one country, target victims in another, and be executed by actors based in a third, obscuring accountability and complicating enforcement. This growing sophistication underscores the urgent need to shift from reactive responses to proactive, system-wide disruption of scam networks.
The scale of the problem is also significant. The recently published State of Scams in Southeast Asia 2025 Report by Global Anti-Scam Alliance revealed that 63% of adults surveyed had encountered scam attempts within the past year, with an average loss of US$660 per victim. The report also found that 79% of adults were exposed to scams, with 11% encountering them daily. Scammers most commonly relied on phone calls (62%), SMS (56%), and instant messaging apps (49%).
2. Common definitions and shared principles build stronger cross-sector and regional resilience.
A cross-sectoral approach is needed to tackle scams and fraud, especially as the issues are cross-cutting in nature. Several countries have provided frameworks for this collaboration. In the Philippines, for example, the Anti-Financial Account Scamming Act (AFASA) established a clear coordination mechanism between the central bank and law enforcement agencies to respond swiftly to scam cases. Similarly, India’s Financial Fraud Risk Indicator allows real-time sharing of threat intelligence between banks, regulators, and telecommunications providers, strengthening early detection and mitigation.
Both frameworks open the door to leveraging AI to increase their effectiveness. For instance, AI tools can support threat intelligence sharing across sectors by automating data tagging, pattern recognition and analysis. A notable example is Bharti Airtel’s AI-powered Suspected Scam Alert, which analyses internet traffic, cross-checks it with global repositories and leverages Airtel’s own database of threat actors to flag potential scams in real time.
Across SEA, online scams and fraud are interpreted differently across jurisdictions, creating gaps in enforcement and incident reporting. Developing a shared definition and common principles would provide a much-needed foundation for inter-jurisdiction cooperation. Standardised reporting frameworks and interoperable systems could improve detection and response across borders, enabling a safer regional digital environment.
3. Innovation and safety must go hand in hand.
While scammers increasingly exploit emerging technologies such as AI-generated voices and deepfakes, innovation also enables stronger defences. AI-driven content tagging, among other AI-powered solutions, helps identify manipulated content. For instance, Google SynthID embeds watermarks into AI-generated content, allowing users to identify AI-manipulated media and promoting transparency in the digital landscape. More broadly, Google has launched a range of AI-powered features to defend users against scams, from removing fraudulent websites from Search results to AI-powered scam detection in Google Messages and the Phone app, which alerts users when a potential scam threat emerges.
While speed and identification are essential to combating scams, innovation must be guided by responsible AI frameworks. As scams evolve over time, decision-making based on current data risks false positives, biases, or unfair treatment. AI systems should be transparent and explainable, so that impacted individuals and entities can understand the decisions made about them, with avenues for recourse if necessary.
To encourage agility while ensuring accountability and public trust in the digital ecosystem, policy approaches can evolve alongside innovation. Safety ratings and technical guidelines can encourage best practice, while regulatory sandboxes, such as Singapore’s Privacy-Enhancing Technology Sandbox, enable safe experimentation before solutions are scaled.
4. Combating scams requires a whole-of-society approach.
Scams cannot be addressed by governments or platforms alone. Community influencers and actors such as families, grassroots groups, educators, and religious leaders play an important role in raising awareness and building trust. At the systemic level, public and private stakeholders are responsible for securing digital infrastructure and maintaining trust in digital services.
AI can enhance these efforts by localising scam management strategies, for example through AI-powered translation tools that adapt awareness campaign materials into local languages, and AI-driven verification tools that help users distinguish genuine from fraudulent content. More importantly, scam management must be tailored to community contexts, as one-size-fits-all approaches are rarely as effective as solutions rooted in local needs.