AI for Development: The Good, The Bad, and The Error 404

In this article, Lim Kwok Foong, a consultant at UNICEF, explores the evolution of AI development, encompassing the Good, the Bad, and the Error 404 of this revolutionary technology.

By Lim Kwok Foong, Consultant at UNICEF 

Since IBM Deep Blue's landmark victory over world chess champion Garry Kasparov in 1997, AI has expanded its capabilities, evolving into a commercially viable expert system. It harnesses raw computing power and memory capacity to systematically address a range of human-oriented challenges, including industrial assembly, image recognition, speech transcription, medical diagnosis, affect recognition, and language translation. This growth in AI's potential occurred concurrently with advances in storage technology, which facilitated the collection, organisation, and commercialisation of vast amounts of big data.

The turning point arrived in the 2020s, when AI achieved two significant milestones. First, the field of generative AI (GAI) substantially extended AI's abilities beyond explicit, discriminative classification systems to embrace tacit, generative imagination systems. AI no longer relies solely on rule-based reasoning, epitomised by ELIZA; instead, it is capable of learning through intricate neural networks loosely modelled on the neural connections of the human brain.

Second, the advent of freemium and pay-as-you-go AI services effectively democratised AI for the general public. No longer confined to the realm of ivory towers or corporate research facilities, AI has become a tool accessible and affordable to the common person.

The Good

AI's impact extends beyond private businesses; it also plays a vital role in international and development organisations like UNICEF. We are already witnessing improvements in internal operations, courtesy of text-based large language models (LLMs) such as ChatGPT and Bard. These LLMs enhance operational efficiency by automating routine administrative tasks such as note-taking and text summarisation. This automation is pivotal for programmatic coordination involving a diverse range of stakeholders, both public and private, spanning various geographical locations.

LLMs possess a remarkable capability to generate and extrapolate content, encompassing proposals, concept notes, project reports, and briefing notes—all with the guidance of a well-structured prompt. International organisations often grapple with the ever-changing dynamics of the socio-political landscape while supporting governments. The agility to expedite interventions related to critical issues like malnutrition, healthcare, education, and climate action can significantly impact the most deprived and marginalised populations.

Recognising AI's potential to accelerate results for children in the region, UNICEF has collaborated with Thinking Machines to develop a catalogue of AI models. This catalogue includes documentation, code, and pre-processed datasets tailored for development interventions under the AI4D initiative. One of the most impactful applications of AI in development work is emergency response. AI-powered data dashboards can automatically generate situational reports using real-time information within the first 72 hours of a disaster, predicting population displacement and facilitating humanitarian relief efforts for displaced children and women.

The Bad

On the flip side, children's interaction with AI has raised concerns about safety and privacy breaches. Bad actors are misusing generative AI, such as Stable Diffusion, to create photo-realistic child sexual abuse materials (CSAM). Authorities have also warned about cases of harassment and extortion in which synthetic voice models imitate loved ones calling for help. Recently, Snapchat's My AI made headlines when journalists, posing as middle school children, received inappropriate advice on party planning and adult relationships. Children can also form emotional bonds with error-prone AI chatbots, and the personal information they share raises broader concerns about protecting their data from commercial and political exploitation.


The Error 404

AI can do considerable harm in the hands of bad actors. But bad actors are not the only problem confronting AI in development work; a more fundamental barrier remains. Not all of Southeast Asia benefits equally from AI's advancements. The World Bank's Digital Adoption Index reveals disparities in digital transformation among ASEAN countries, with Lao PDR, Myanmar, and Cambodia lagging behind in internet penetration and fixed broadband subscriptions.

In contrast, larger economies like Singapore, Thailand, Indonesia, and Vietnam can capitalise on market competition to drive down fixed broadband prices. Smaller economies, the very countries lagging in the Digital Adoption Index, face higher entry costs for stable internet connectivity.

Internet access is a fundamental requirement for civic and economic participation in the 21st century. However, countries that most urgently need digital mobility also struggle with low internet penetration and limited fixed broadband subscriptions.

Generative AI promises to liberate the workforce from administrative language tasks, which account for 62% of total work time, according to an Accenture report based on U.S. labour statistics. This extends to customer service, advisory roles, basic creative work for product and social media design, simple coding, and banking process automation, affecting 40% of working hours across industries.

These AI-augmentable tasks often serve as entry points for fresh graduates and entry-level employees, offering opportunities for upskilling and portfolio building. However, as AI becomes increasingly integral to businesses, a significant reskilling gap emerges, highlighting the inability of the traditional education system to keep pace with the 21st-century job market.

Generative AI’s expansion in the international development field faces a fundamental concern: bias propagation. Machine learning models inherently mirror the biases of their creators and the societal context they emerge from. Generative AI exacerbates this issue by actively reproducing and disseminating biases in texts and artworks.

The root of this problem lies in training datasets. In her book “The Atlas of AI,” Kate Crawford highlights how biases are embedded in datasets like ImageNet. For example, ImageNet initially categorised images using lexicons from WordNet but also contained offensive labels that reinforced societal prejudices. Similarly, databases like UTKFace limited gender to male or female and race to categories like White, Black, Asian, Indian, and Others.

Despite efforts such as ImageNet's removal of close to 56% of offensive categories, the impact of a decade's worth of biased training data persists, influencing AI models used globally. With just 12 institutions in the cultural majority producing 50% of training datasets worldwide, it is unsurprising that AI models continue to reflect viewpoints skewed toward English, American, and white male perspectives.

AI's environmental impact is a hidden concern. AI's physical infrastructure resides in massive data centres comprising GPU-lined servers, intricate cabling, and arrays of routers and switches that enable seamless connectivity. In the United States alone, these centres consume 2% of the nation's electricity, equivalent to 81 billion kWh annually, with 40% of that used for cooling. Although data centres are transitioning to renewable energy, global demand for AI services is outpacing these green initiatives.

Research from the University of Massachusetts Amherst reveals alarming statistics: training one large language model (LLM) can emit 284,000 kg of CO2, five times the lifetime emissions of an average car. Training GPT-3, for example, consumed 1,287 MWh of electricity, resulting in 502 tonnes of CO2 emissions. Even seemingly minor actions, like asking Bard a question, add to the footprint: an LLM-enhanced query requires roughly five times the computing power, and thus the carbon emissions, of a standard search. The proliferation of AI technologies comes with a significant carbon cost.
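A quick back-of-the-envelope check shows that the two GPT-3 figures cited above are internally consistent. The sketch below derives the grid carbon intensity implied by the article's own numbers; the resulting figure (~0.39 kg CO2 per kWh) is an inference from those numbers, not an official statistic.

```python
# Derive the grid emission factor implied by the cited GPT-3 figures.
TRAINING_ENERGY_MWH = 1_287      # electricity consumed by GPT-3 training (cited above)
TRAINING_EMISSIONS_TONNES = 502  # resulting CO2 emissions (cited above)

# Convert to common units: tonnes -> kg, MWh -> kWh.
kg_per_kwh = (TRAINING_EMISSIONS_TONNES * 1_000) / (TRAINING_ENERGY_MWH * 1_000)
print(f"Implied grid intensity: {kg_per_kwh:.2f} kg CO2 per kWh")
```

The implied intensity of roughly 0.39 kg CO2 per kWh is close to typical fossil-heavy grid averages, which suggests the two figures were drawn from the same underlying estimate.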

AI, especially generative AI, holds immense promise for international development and public work. However, it also presents a multitude of systemic flaws and potential hazards that directly contradict the principles of fairness, equity, and justice upheld by development practitioners. While there are numerous opportunities for innovation and positive impact, there are also considerable risks, especially for the most vulnerable and marginalised populations.

Addressing the question of “should we or should we not?” in relation to AI interventions must involve a meticulous examination of their impact, ethical considerations, and the human cost involved. The world is gradually establishing the necessary safeguards for responsible AI utilisation. UNICEF’s Policy Guidance on AI for Children is a good place to start, but we need more. At this pivotal moment, perhaps we can draw inspiration from the scientific approach—conducting experiments with care, mindfulness, and noble intentions.

About the writer 

Kwok Foong is interested in the intersection of technology and society. A development worker at heart, he has experience working with young people in community-based social and environmental programmes. As a Digital Technologies Consultant, he supports UNICEF in the adoption of digital solutions to accelerate programme results for children and young people.

The views and recommendations expressed in this article are solely of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute.

