Algorithms and Agendas: Navigating Election Disinformation and Misinformation in Southeast Asia

In this article, Hamizah Myra and Kelly Foo explore the growing challenges of disinformation and misinformation in elections across Southeast Asia.


By Hamizah Myra and Kelly Foo, Insights Contributor

Social media platforms play an important role in facilitating regional communication and connection. They enable the rapid dissemination of news and updates, fostering greater public awareness of and participation in critical issues. They also serve as a space for cultural exchange, allowing diverse communities to connect, share experiences, and build understanding across borders.

However, while social media platforms provide significant benefits to societies, they also pose risks, particularly in the form of misinformation and disinformation. The World Economic Forum’s Global Risk Report 2024 highlights the gravity of this issue, ranking misinformation and disinformation as the top global short-term risk, even surpassing extreme weather events. One area where this trend is especially prevalent is during elections, where the spread of false information can undermine public trust and democratic processes.

In particular, disinformation, defined as the deliberate spread of falsehoods, is often employed in elections and increasingly circulated on digital platforms to deceive voters through deepfakes, manipulated media, or fabricated narratives that target candidates or election procedures. Misinformation, on the other hand, is typically unintentional yet equally damaging. This is because well-meaning citizens may unknowingly share incorrect information about candidates, voting processes, or results, especially in the fast-paced environment of election campaigns.


Southeast Asia: Navigating the Challenges of Election Disinformation and Misinformation

Southeast Asia’s (SEA) high social media penetration and long online engagement times make the region particularly vulnerable to election disinformation and misinformation. As the primary channels through which a mobile-first region consumes and shares information, social media platforms have become key infrastructure enabling both the creation and distribution of false information. Rapid advances in Artificial Intelligence (AI) have further enabled more sophisticated disinformation campaigns and subtler psychological manipulation.

The 2024 Indonesian presidential election highlights the significant risks emerging at the intersection of social media, technology, and political campaigns. With internet penetration at 67%, and 74% of those users primarily engaging with social media, these platforms became a key channel for candidates to connect with voters. However, the same platforms were also used to spread manipulated content that garnered widespread attention and likely shaped how potential voters viewed candidates. One pertinent example is a widely shared deepfake video featuring an audio clip alleged to be of Surya Paloh, General Chairman of the National Democratic Party, reprimanding presidential candidate Anies Baswedan. Another widely circulated deepfake depicted former President Suharto, despite his passing, delivering a speech urging citizens to support the Golkar Party.


Risks of Unchecked Misinformation and Disinformation on SEA Elections

These incidents unfold within a complex regulatory environment, as highlighted by the North Atlantic Treaty Organisation (NATO), which reported significant variation in content moderation approaches across SEA. The range of practices spans from platforms with robust measures, such as fact-checking and content labeling, to those with minimal intervention. Where oversight systems are still developing on certain platforms, there is a higher risk of misleading content spreading and potentially undermining electoral integrity. The consequences are already evident. According to the Bureau of Investigative Journalism, over 8,000 AI-manipulated video advertisements containing altered political content circulated on Facebook in the first half of 2024.

Moreover, research by the ISEAS–Yusof Ishak Institute found that such online disinformation campaigns reinforced selective exposure and belief. The AI-generated deepfake videos that circulated during the 2024 Indonesian presidential election are a clear example of how such content can influence voter perceptions: voters were more likely to accept disinformation that aligned with their partisan views. The study consequently warns that online disinformation campaigns can not only deceive voters but also further polarise society, as individuals are more likely to encounter and believe disinformation that supports their preferred candidates while rejecting content that challenges their beliefs.

A similar polarisation effect was seen in Malaysia during the 2022 general election, where TikTok became a platform for inflammatory content as posts promoting an ultra-Malay nationalist agenda were widely shared. Despite government intervention and ByteDance’s removal of thousands of posts, manipulated content remained accessible on the platform long after the election.


Tackling the Threat: Recommendations

As SEA prepares for anticipated general elections in the Philippines and Singapore this year, addressing these challenges becomes increasingly critical. More action is urgently needed to protect the credibility of democratic processes and prevent further polarisation of society. Without timely intervention, elections could remain vulnerable to manipulation and pose significant risks to the social fabric of countries in the region.

For platform companies

Platform companies can further their efforts by increasing the robustness of fact-checking initiatives and demonetising content that spreads disinformation.

Third-party fact-checking is vital during elections to ensure voters make informed decisions: extensive research shows that warnings based on third-party fact-checks slow the spread of misinformation. Despite this, social media companies are increasingly turning away from these programmes in favour of community-driven systems. To address this concerning shift, platforms can be incentivised or required to maintain and expand their partnerships with third-party fact-checkers. They can also improve coordination among themselves to ensure comprehensive coverage of potentially misleading content.

That said, the quality of fact-checking must improve alongside its quantity. One issue raised by fact-checkers in Indonesia and the Philippines is that content is frequently in local languages with deep local nuances, which makes it difficult for automated systems to detect and flag. More investment should therefore be directed towards human fact-checkers for such content, and towards refining existing technology to overcome the obstacle of localisation.

Lastly, the integrity of fact-checking efforts must be maintained so that they do not become partisan or weaponised. This is especially important in states where political polarisation is high, as compromised fact-checking risks exacerbating inflammatory sentiments and backfiring. Independent and transparent funding sources, as well as clear guidelines for fact-checking across all levels of society, including political leaders, would help reinforce trust and accuracy in the process.

Demonetising harmful content can also be key to the platforms’ agenda. The pervasiveness of content promulgating disinformation and misinformation signals that platforms’ advertising models still accord financial value to such content, which results in its promotion through recommendation algorithms. Platforms should therefore reconsider how they monetise content and alter their advertising policies to discourage the spread of misinformation and disinformation, thereby reducing its amplification.


For governments

As manipulation techniques in the digital sphere continue to evolve, existing legal frameworks across SEA would benefit from careful examination and updating to reflect the digital age and maintain electoral integrity. The current regulatory landscape reveals structural limitations in addressing digital-age complexities, as illustrated by Indonesia’s 2017 election law. While the law prohibits direct attacks between candidates, it lacks provisions addressing emerging forms of digital content manipulation that fall outside traditional definitions of defamation or abuse. The aforementioned deepfake video of the late President Suharto expressing support for the Golkar Party exemplifies this gap: despite its misleading nature, it could not be regulated under the 2017 law. Meanwhile, Malaysia has implemented a comprehensive regulatory framework comprising laws such as the Sedition Act, the Defamation Act, and the new Cybersecurity Act. Even so, these policies should be periodically reassessed to ensure they remain effective in addressing emerging digital phenomena and safeguarding the integrity of the electoral process.

Singapore’s recent Elections (Integrity of Online Advertising) (Amendment) Bill presents a promising approach to tackle these issues. The bill directly addresses election-related deepfakes and digital manipulation, prohibiting the publication of altered content that misrepresents candidates’ words or actions. Additionally, the bill holds candidates accountable for knowingly making false declarations. Complementing this effort, the Infocomm Media Development Authority (IMDA) is developing a Code of Practice requiring social media platforms to combat manipulated content proactively.

While these efforts represent progress, they also highlight opportunities for improvement across the region. Regulators can focus on updating legal frameworks to address emerging threats like deepfakes and AI-generated content while ensuring clarity and adaptability. Proactive measures, including collaboration with technology companies to develop detection tools, can also be adopted to strengthen defences. Moreover, early partnerships with platforms and fact-checking organisations can be considered to enhance preventive measures further. For greater impact, ASEAN nations can consider regional cooperation whereby governments can pool resources and expertise to formulate targeted measures, such as developing a shared set of standards and regulations for content moderation, to combat misinformation and disinformation.


For citizens

While governments and platforms can address systemic issues and hold bad actors accountable, they cannot entirely eliminate the risks posed by the rapid spread of digital falsehoods. Responsibility also lies with individuals to be aware of the risks they face online. In an increasingly sophisticated digital world, citizens must develop robust digital literacy skills to critically assess information, recognise misinformation, and verify sources. Equipped with these skills, citizens can guard their perceptions against manipulation by election-related disinformation and make informed decisions based on accurate information.

The upcoming elections in Southeast Asia underscore the urgent need to address the pervasive challenge of election disinformation and misinformation. To protect the integrity of democratic processes, it is essential for social media platforms, governments, and citizens to collaborate effectively. Strengthening fact-checking initiatives, modernising legal frameworks, and fostering digital literacy are key steps in mitigating the spread of false information. By taking proactive measures, the region can build resilience against digital manipulation, ensuring that democracy continues to thrive in the digital age.


About the writer:

Hamizah Myra is an independent research analyst with experience in strategic planning, policy research, and stakeholder engagement across the public, private, and non-profit sectors. She previously served as a Research Analyst at the Tech For Good Institute where she supported research programmes on tech governance and digital platforms in Southeast Asia. She holds a First-Class Honours degree in Global Studies with a minor in Sociology from the National University of Singapore (NUS).

Kelly Foo is currently an undergraduate at the NUS, pursuing a degree in Philosophy, Politics, and Economics (PPE). Previously, she was with the Tech For Good Institute, where she gained in-depth experience in tech policy across Southeast Asia. Her research interests lie at the intersection of policy, society, and technology, focusing on how these fields interact and shape the future.


The views and recommendations expressed in this article are solely of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute.
