By Abhishek Roy, UX Researcher at Google
Online scams in APAC have escalated from a persistent nuisance to a burgeoning regional crisis, inflicting staggering financial losses (approximately US$688 billion in 2024) and causing profound psychological distress. Despite widespread awareness efforts and consumers' growing confidence in their own scam savviness, victimisation rates remain alarmingly high. For example, a 2022 Visa study found that while 51% of Australians considered themselves knowledgeable about identifying scams, 30% still reported falling victim. Similarly, in Singapore, despite extensive government anti-scam campaigns achieving awareness levels above 70%, authorities have identified a persistent awareness–action gap. This disconnect between perceived awareness, confidence, and actual resilience presents a systemic challenge that must be addressed.
Beyond Awareness: Understanding the Human Factors of Scam Risk
Why do people remain susceptible to scams despite being aware of them? While specific forewarnings or prior familiarity with the mechanics of certain scams can offer some protection, broad and general awareness often crumbles under the sophisticated pressures applied by fraudsters. Scam susceptibility is a complex issue, shaped by a range of psychological, socio-demographic, and situational factors. However, three key dynamics are emerging:
- Scammers employ manipulation tactics: Scammers deliberately trigger emotional “hot states”, sudden surges of intense emotion such as fear, excitement, or urgency, to hijack rational thinking. When gripped by strong emotions, our ability to calmly assess a situation, think analytically, and recall protective knowledge is diminished. As a result, we become more likely to make snap judgments based on superficial cues, without properly weighing the risks.
- Overconfidence increases susceptibility: Another predictor of victimisation is overconfidence, where individuals believe they are too smart to be scammed. This perceived invulnerability can lead them to lower their guard, skip crucial verification steps, and ignore red flags.
- The emergence of fraud fatigue: Maintaining constant vigilance requires considerable mental effort. Emerging evidence suggests that the sheer volume and relentless sophistication of scam attempts can lead to fraud fatigue. In a recent RBC poll in Canada, 86% of respondents said scams have become harder to recognise over time, while two-thirds (65%) reported feeling tired of always being on alert. Consequently, one-third (33%) admitted to letting their guard down. Similar studies are needed in APAC to better understand this growing phenomenon.
Simple awareness alone may not adequately protect consumers. This is especially true when emotions are exploited, confidence is misplaced, or individuals are experiencing mental fatigue and situational stress.
From Passive Information Sharing to Active Psychological Shielding
A strategic shift in approach is needed to meaningfully change behaviour. Inoculation Theory, pioneered by William J. McGuire in 1961, suggests that, much like a medical vaccine, we can build psychological immunity. By preemptively exposing individuals to weakened forms of manipulation tactics in a safe, controlled environment and teaching them how to refute these tactics, we can help them develop “mental antibodies” that enhance their ability to recognise and resist real-world scams.
To test this approach, a game prototype called ShieldUp was developed based on Inoculation Theory. In a pilot randomised controlled trial (n=3,000) conducted in India, players showed significant and sustained improvement in identifying scam scenarios for up to 21 days post-intervention. They outperformed users who had instead spent 10–15 minutes watching awareness videos and reading online safety tips. Importantly, the improved ability to detect scams did not lead to lasting or unwarranted distrust of legitimate online interactions.
These findings highlight that active, experiential learning focused on the psychology of manipulation builds more robust and discerning resilience than passive warnings alone.
Actionable Recommendations for Building True Scam Resilience
The following recommendations outline evidence-based and user-centric approaches that align with emerging behavioural insights:
Traditional digital literacy programmes should evolve to include interactive, simulation-based learning tools—such as ShieldUp—that train users to recognise and resist manipulation tactics. Moving beyond passive information sharing, this approach cultivates adaptable, long-term defences against evolving scam narratives by engaging users in experiential learning.
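To illustrate what such experiential training might look like, the sketch below presents a weakened scam message, asks the learner to name the manipulation tactic at play, and then delivers the refutation. It is a minimal TypeScript rendering of the inoculation pattern with invented example data, not a description of how ShieldUp itself is built.

```typescript
// Toy inoculation-style exercise: show a weakened scam message,
// ask the learner to name the tactic, then surface the refutation.
// All example data here is invented for illustration.

interface Exercise {
  message: string;    // weakened scam message shown to the learner
  tactic: string;     // manipulation tactic being inoculated against
  refutation: string; // the "mental antibody" explanation
}

const exercises: Exercise[] = [
  {
    message: "Your parcel is held at customs. Pay a $2 fee within 1 hour or it will be returned.",
    tactic: "urgency",
    refutation: "Legitimate couriers do not impose one-hour deadlines; urgency is manufactured to stop you from verifying.",
  },
  {
    message: "This is your bank's fraud team. Confirm your password so we can secure your account.",
    tactic: "authority",
    refutation: "Banks never ask for passwords; impersonating authority is meant to short-circuit your scepticism.",
  },
];

// Score the learner's guess and surface the refutation either way,
// so the exercise teaches rather than merely tests.
function review(exercise: Exercise, guess: string): string {
  const correct = guess.trim().toLowerCase() === exercise.tactic;
  return correct
    ? `Correct: this message uses ${exercise.tactic}. ${exercise.refutation}`
    : `Not quite: this message uses ${exercise.tactic}. ${exercise.refutation}`;
}

for (const exercise of exercises) {
  console.log(exercise.message);
  console.log(review(exercise, "urgency"));
}
```

The key design choice is that the learner practises refuting the tactic, not merely spotting a specific scam script, which is what allows the protection to generalise to new scam narratives.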
Digital platforms and financial services can embed contextual, interactive micro-interventions at high-risk decision points. Instead of generic warnings like “Are you sure?”, these interventions could briefly highlight the psychological tactic at play—for example: “Scammers often manufacture urgency to pressure quick action. Could that be happening here?” By delivering just-in-time education tied to user behaviour, platforms can reinforce scam recognition skills precisely when they are needed most.
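To make the idea concrete, below is a minimal sketch of how a tactic-specific prompt might be wired into a confirmation flow. The risk signals, copy, and function names are hypothetical illustrations, not any platform's actual API; real systems would derive these signals from their own fraud models.

```typescript
// Illustrative micro-intervention shown at a high-risk decision point
// (e.g. confirming a transfer to a first-time payee). Hypothetical names.

type RiskSignal = "urgency" | "authority" | "scarcity";

interface InterventionCopy {
  tactic: string;
  prompt: string;
}

// Map each manipulation tactic to copy that names the tactic,
// rather than a generic "Are you sure?" warning.
const COPY: Record<RiskSignal, InterventionCopy> = {
  urgency: {
    tactic: "manufactured urgency",
    prompt: "Scammers often manufacture urgency to pressure quick action. Could that be happening here?",
  },
  authority: {
    tactic: "impersonated authority",
    prompt: "Scammers often pose as banks or officials. Have you verified who you are really dealing with?",
  },
  scarcity: {
    tactic: "false scarcity",
    prompt: "Scammers often claim an offer is about to expire. Would a legitimate offer really vanish in minutes?",
  },
};

// Interrupt the flow only when a manipulation signal is present,
// so legitimate interactions are not burdened with blanket warnings.
function buildIntervention(signals: RiskSignal[]): InterventionCopy | null {
  const signal = signals[0];
  return signal ? COPY[signal] : null;
}

// Example: a transfer flagged with urgency cues triggers a targeted pause.
const intervention = buildIntervention(["urgency"]);
if (intervention) {
  console.log(`Pause: ${intervention.prompt}`);
}
```

Because each prompt names the tactic, every interruption doubles as a micro-lesson rather than a generic speed bump, reinforcing recognition skills at the moment they matter.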
While broad-based awareness has limitations, targeted campaigns anchored in behavioural science can yield better outcomes. Campaigns should demystify why certain scams work by spotlighting the emotional triggers or cognitive biases they exploit. These campaigns should be localised and data-driven, tailored to the vulnerabilities of specific demographic groups or communities.
Building Resilience into Systemic Responses
The rising tide of online scams in APAC highlights a critical gap between awareness and real-world resilience. Addressing this requires more than education—it calls for proactive, evidence-based interventions grounded in behavioural science, such as Inoculation Theory. At the same time, efforts to destigmatise victimisation, streamline reporting, and enhance enforcement are essential to building trust and accountability.
Strengthening digital resilience is a shared responsibility. By embedding behavioural insights into digital literacy programmes, platform design, and policy frameworks, stakeholders can reinforce consumer defences and cultivate a safer, more trusted digital ecosystem across the region.
About the writer
Abhishek leads user research efforts on online deception and manipulation in Google’s Trust & Safety team. In this role, he oversees quantitative and qualitative research studies focused on gaining insights into user behaviour, perceptions, and preferences. These insights are used to drive changes that promote user protection across Google products. Abhishek has worked at Google for over 11 years and has previously led teams conducting user research and policy enforcement efforts on products like Google News and Google Search.
The views and recommendations expressed in this article, published in June 2025, are solely those of the author(s) and do not necessarily reflect the views and position of the Tech for Good Institute.