AI x Scams: Cross-industry Approaches, Solutions and Challenges

On September 2nd, 2025, the Tech for Good Institute (TFGI) participated in a panel discussion at the Global Anti-Scam Summit Asia 2025, organised by the Global Anti-Scam Alliance (GASA). Hosted by Google, the panel explored cross-industry actions and approaches, discussing the impact and challenges of AI as both a potential enabler of scams and a solution to them.
From left to right: Rachel Teo, Head of Government Affairs & Public Policy, Singapore & APAC Anti-Scams and Fraud, Google; Rahul Vatts, Chief Regulatory Officer, Bharti Airtel; Dr. Ming Tan, Senior Fellow and Founding Executive Director, Tech for Good Institute; Joc-Cing Tay, Head of AI Strategy, Gogolook; Snigdha Bhardwaj, Director & Global Head of Generative AI & Search, Google Trust & Safety.

Within the past few years, Southeast Asia (SEA) has witnessed an unprecedented surge in online scams and fraud. The problem is compounded by the proliferation of scam centres across the region, often linked to human trafficking and other serious crimes. Concern is also growing over the rapid evolution of scam typologies, the cross-border and multi-sectoral nature of scam attack chains, and the exploitation of technologies such as Artificial Intelligence (AI).

As noted in TFGI’s latest report on Building Resilience against Digitally-enabled Scams and Fraud in Southeast Asia, this growing complexity is just one of the systemic challenges facing the region. For example, AI-generated deepfake cases increased by 1,530% across the Asia-Pacific region between 2022 and 2023, with Vietnam experiencing a 25.3% rise in deepfake fraud. These developments raise urgent questions about how AI may be developed and deployed to safeguard trust and safety in the digital ecosystem.

To discuss the potential and governance of new AI-powered approaches to fighting scams, TFGI joined a panel discussion hosted by Google on AI x Scams: Cross-industry Approaches, Solutions and Challenges at the Global Anti-Scam Summit Asia 2025.


Moderator and Panellists

  • Rachel Teo, Head of Government Affairs & Public Policy, Singapore; APAC Anti-Scams & Fraud, Google
  • Rahul Vatts, Chief Regulatory Officer, Bharti Airtel
  • Ming Tan, Senior Fellow and Founding Executive Director, Tech for Good Institute
  • Joc-Cing Tay, Head of AI Strategy, Gogolook
  • Snigdha Bhardwaj, Director & Global Head of Generative AI & Search, Google Trust & Safety

Key Takeaways:

1. With emerging technologies, including AI, online scams are growing in complexity, sophistication and scale.

While unauthorised threats such as malware, phishing, and hacking remain prevalent, authorised scams are a rising concern. The Singapore Police Force reported that almost 80% (78.8%) of total reported scams in Singapore in the first half of 2025 were authorised. When scammers combine tactics across online and offline channels, often leveraging AI technologies, individuals can be deceived into willingly giving consent. For instance, deepfakes of Singapore’s then-Deputy Prime Minister Lawrence Wong endorsing a suspicious investment product circulated online in 2024.

The cross-border nature of scams adds another layer of complexity. A scam may originate in one country, target victims in another, and be executed by actors based in a third, obscuring accountability and enforcement. This growing sophistication underscores the urgent need to shift from reactive responses to proactive, system-wide disruption of scam networks.

The scale of the problem is also significant. The recently published State of Scams in Southeast Asia 2025 Report by Global Anti-Scam Alliance revealed that 63% of adults surveyed had encountered scam attempts within the past year, with an average loss of US$660 per victim. The report also found that 79% of adults were exposed to scams, with 11% encountering them daily. Scammers most commonly relied on phone calls (62%), SMS (56%), and instant messaging apps (49%).


2. Common definitions and shared principles build stronger cross-sector and regional resilience.

A cross-sectoral approach is needed to tackle scams and fraud, especially given the cross-cutting nature of the issue. Several countries have already provided frameworks for such collaboration. In the Philippines, for example, the Anti-Financial Account Scamming Act (AFASA) established a clear coordination mechanism between the central bank and law enforcement agencies to respond swiftly to scam cases. Similarly, India’s Financial Fraud Risk Indicator allows real-time sharing of threat intelligence between banks, regulators, and telecommunications providers, strengthening early detection and mitigation.

Both frameworks open the door to leveraging AI to increase their effectiveness. For instance, AI tools can support threat intelligence sharing across sectors by automating data tagging, pattern recognition and analysis. A notable example is Bharti Airtel’s AI-powered Suspected Scam Alert, which analyses internet traffic, cross-checks against global repositories and leverages its own database of threat actors to flag potential scams in real time.

Across SEA, online scams and fraud are interpreted differently across jurisdictions, creating gaps in enforcement and incident reporting. Developing a shared definition and common principles would provide a much-needed foundation for inter-jurisdiction cooperation. Standardised reporting frameworks and interoperable systems could improve detection and response across borders, enabling a safer regional digital environment.


3. Innovation and safety must go hand in hand.

While scammers increasingly exploit emerging technologies such as AI-generated voices and deepfakes, innovation also enables stronger defences. Tools such as AI-driven content tagging help identify manipulated content. For instance, Google’s SynthID embeds watermarks into AI-generated content, allowing users to identify AI-manipulated media and promoting transparency in the digital landscape. More broadly, Google has launched a range of AI-powered features to defend users against scams, from removing fraudulent websites from its search engine to AI-powered scam detection in Google Messages and Phone, which notifies users when a scam threat emerges.

While speed and identification are essential to combating scams, innovation must be guided by responsible AI frameworks. As scams evolve over time, decision-making based on current data risks false positives, biases, or unfair treatment. AI systems should be transparent and explainable, so that impacted individuals and entities can understand the decisions that affect them, with avenues for recourse where necessary.

To encourage agility while ensuring accountability and public trust in the digital ecosystem, policy approaches can evolve alongside innovation. Safety ratings and technical guidelines can encourage best practice, while regulatory sandboxes, such as Singapore’s Privacy-Enhancing Technology Sandbox, enable safe experimentation before solutions are scaled.


4. Combating scams requires a whole-of-society approach.

Scams cannot be addressed by governments or platforms alone. Community influencers and actors such as families, grassroots groups, educators, and religious leaders play an important role in raising awareness and building trust. At the systemic level, public and private stakeholders are responsible for securing digital infrastructure and maintaining trust in digital services.

AI can enhance these efforts by localising scam management strategies. Examples include using AI-powered translation tools to render awareness-campaign materials into local languages, and using AI-driven verification tools to help users distinguish genuine from fraudulent content. More importantly, scam management must be tailored to community context: one-size-fits-all approaches are less likely to succeed than solutions rooted in local needs.


