Leveraging AI for Good: Harnessing AI to Combat Cybercrime

During the Global Forum on Cyber Expertise’s (GFCE) Southeast Asia (SEA) Regional Meeting 2024, the Tech for Good Institute participated in a panel discussion on the constructive uses of technology. Our Programme Manager, Keith Detros, moderated the panel titled “AI for Good: Leveraging AI to Combat Cybercrime,” featuring experts from various sectors. This panel was part of the Singapore International Cyber Week (SICW), organised by the Cyber Security Agency of Singapore (CSA).

From left to right: Keith Detros, Programme Manager, Tech for Good Institute; Liyana Azman, Junior Fellow, Global Research Network; Nina Bual, Co-founder, Cyberlite; Lee Pei Ling, Head, Cyber Strategy and Capabilities Development, Cybercrime Directorate, INTERPOL

The advent of Artificial Intelligence (AI) promises remarkable economic growth. Globally, AI is projected to contribute up to US$15.7 trillion to the economy by 2030, fuelling significant boosts in productivity and innovation throughout the world. Regionally, Southeast Asia (SEA) is expected to gain up to US$950 billion in economic value from AI by 2030, equivalent to around 13% of gross regional product. Undoubtedly, SEA will reap tremendous benefits from the rising wave of AI, be it through new opportunities in emerging markets, enhanced productivity, or improved decision-making capabilities. In Indonesia alone, workers in firms that adopt AI for production functions have seen a 49% increase in productivity. These benefits will only grow as ongoing advancements in AI continue to drive innovation and create new avenues for growth in SEA.

However, harnessing AI for economic growth has also engendered negative repercussions. AI has significantly enabled cybercrime by providing malicious actors with tools that automate, streamline and expand their criminal activities online. Bad actors, for example, misuse generative AI to create harmful content such as child sexual abuse material (CSAM), deepfakes used to commit identity fraud, and highly targeted threatening messages used for cyberbullying. The Global Anti-Scam Alliance's (GASA) 2024 Asia Scam Report listed AI-driven scams as an alarming trend and noted that Asia bore the brunt of the global scam burden: an estimated US$688.42 billion, more than half of global scam losses, was lost to scams in the region, demonstrating the devastating impact of AI-enabled fraudulent activities.

In view of these challenges, the Tech for Good Institute moderated a panel at the Global Forum on Cyber Expertise's (GFCE) Southeast Asia (SEA) Regional Meeting, held during Singapore International Cyber Week (SICW). The panel, featuring experts from different sectors, explored how AI can be leveraged to combat cybercrime. The insights from the panel can be summarised into a 'Good, Better, Best' approach: the need for good regulatory frameworks for the ethical use of AI, better coordination across stakeholders in a whole-of-society approach, and the sharing of best practices for capability development.


Moderator and Panellists

  • Keith Detros, Programme Manager, Tech for Good Institute
  • Lee Pei Ling, Head, Cyber Strategy and Capabilities Development, Cybercrime Directorate, INTERPOL
  • Liyana Azman, Junior Fellow, Global Research Network
  • Nina Bual, Co-Founder, Cyberlite


Key Takeaways:

1. AI as a double-edged sword — fighting tech with tech

With AI, cybercrime is becoming easier to execute as the barriers to entry for deceptive and manipulative activities are lowered: content for scams and phishing that traditionally took around 16 hours to create can now be produced in as little as five minutes with AI tools. This is exacerbated by the increasing sophistication of generative AI tools and Large Language Models (LLMs), which allow bad actors to produce localised content that narrowly targets specific demographics. AI is also used in other areas, such as reconnaissance to identify potential targets and to understand the structural weaknesses of organisations.

To confront AI-enabled cybercrime, we must also employ AI in our defence initiatives. AI can be used in a myriad of ways: it can triage incoming threats, execute counter-reconnaissance, scrutinise past data sets to support optimised decision-making and act as a cybersecurity advisor, conduct sentiment analysis to detect when vulnerable individuals are being cyberbullied, and blur potentially harmful images. Additionally, LLMs can create honeypots with more sophisticated, dynamic, and human-like interactions that lure cybercriminals into revealing their intentions, operational insights, and structures.
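To make one of these defensive uses concrete, the sketch below shows how an off-the-shelf sentiment classifier could flag strongly negative messages for human review, in the spirit of the cyberbullying detection mentioned above. This is a minimal illustrative sketch, not any panellist's actual system: the use of the Hugging Face transformers sentiment pipeline, the flag_messages helper, and the 0.95 score threshold are all assumptions made for illustration, and real moderation pipelines rely on purpose-built abuse classifiers and human oversight.

```python
# Minimal sketch: flagging potentially harmful messages for human review
# using an off-the-shelf sentiment classifier (Hugging Face transformers).
# The helper name, threshold, and escalation logic are illustrative
# assumptions, not a production moderation system.
from transformers import pipeline

# Downloads and loads a default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

def flag_messages(messages, threshold=0.95):
    """Return (message, score) pairs the model scores as strongly negative."""
    flagged = []
    for message, result in zip(messages, classifier(messages)):
        if result["label"] == "NEGATIVE" and result["score"] >= threshold:
            flagged.append((message, result["score"]))
    return flagged

if __name__ == "__main__":
    sample = [
        "Great job on the project today!",
        "Nobody likes you. You should just quit.",
    ]
    for message, score in flag_messages(sample):
        print(f"Escalate for moderation ({score:.2f}): {message}")
```

In practice, such automated flags would feed into a human moderation queue rather than trigger action on their own; the value of AI here is in triaging volume, not in replacing judgement.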


2. Disparities in countries’ regulatory approaches to combat cybercrime

In their fight against cybercrime, countries adopt diverse approaches to cybersecurity and AI, reflecting their unique priorities and challenges. In China, AI and cybersecurity are deeply intertwined within legal frameworks, with regulations emphasising compliance with cybersecurity standards from the outset; this integration is a cornerstone of the nation's cybersecurity agenda. In contrast, Malaysia treats cybersecurity and AI as separate matters, with policy conversations around each held largely apart. This distinction is not indicative of a lax attitude but rather stems from differing national agendas and socio-political considerations. These differences are also reflected in countries' regulatory regimes. For example, Singapore enforces rigorous testing requirements for AI technologies, exemplified by the Infocomm Media Development Authority's (IMDA) establishment of the AI Verify Foundation to develop AI testing tools, while China focuses on bottleneck measures, such as mandating the registration of algorithms, to manage risks.


3. The need for shared responsibility to tackle AI-driven cybercrime

While the utilisation of AI is a crucial component of combating crime, shared responsibility across all sectors is similarly pivotal. Tackling AI-driven cybercrime calls for a whole-of-society approach that engages various stakeholders in cross-sector and cross-border collaboration. For instance, initiatives like Singapore's EdTech Masterplan 2030 aim to train teachers as part of a broader educational strategy, empowering stakeholders to make AI tools more accessible to communities. Community engagement is also essential to correct misunderstandings about technology: increased engagement and transparency on how AI is being used for good can promote greater public buy-in. Additionally, creating a public portal can help the community understand ongoing collaborations and prevailing regulatory standards. Partnerships with the private sector are equally important, as the private sector is at the forefront of tech expertise and can drive innovation and resource sharing, further enhancing efforts to combat cyber threats.
