From left to right: Dr. Simon Chesterman, Senior Director, AI Governance, AI Singapore; Rachel Teo, Head of Public Policy for SEA, Sustainability (APAC), Google; Dr. Ming Tan, Founding Executive Director, Tech for Good Institute; Lee Wan Sie, Director, Data-Driven Tech, Infocomm Media Development Authority
In the dynamic landscape of Artificial Intelligence (AI), where innovation unfolds at an unprecedented pace, governance and ethics play a paramount role in shaping the responsible development and deployment of AI technologies. The panel discussion offered insights into how to foster a responsible AI ecosystem that leverages technological advancements while prioritising ethical considerations and governance.
Moderator and panelists:
- Dr. Ming Tan, Founding Executive Director, Tech for Good Institute
- Dr. Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation) at NUS and Dean of NUS College, Senior Director of AI Governance at AI Singapore, Editor of the Asian Journal of International Law
- Rachel Teo, Head of Public Policy for SEA, Sustainability (APAC), Google
- Wan Sie Lee, Director for Data-Driven Tech, Infocomm Media Development Authority (IMDA)
Key Takeaways:
- Trust: The Core of the Digital Ecosystem
The panel underscored the pivotal role trust plays in integrating AI technologies into the digital ecosystem seamlessly and responsibly. As a foundational element, trust bolsters user confidence, ethical decision-making, regulatory compliance, and societal acceptance, and in doing so amplifies the overall positive impact of AI on individuals and communities. This perspective aligns with the Tech For Good Institute’s research on digital financial services, which highlights trust as a significant predictor of consumers’ adoption of such services and a core element of a robust foundation for the digital ecosystem.
- Digital and algorithmic literacy essential in navigating the AI-driven landscape
Among the 460 million internet users in the SEA region, a significant 100 million have come online in the past three years. Termed “AI-natives,” these individuals experience an online environment shaped by AI technologies like chatbots, recommendation engines, and seamlessly integrated machine translation. As a result, basic digital literacy now requires proficiency in navigating this AI-driven landscape.
Algorithmic literacy, an awareness of the existence, application, and implications of the algorithms embedded in digital platforms, is equally pivotal. It equips users to recognise AI-generated content, understand biases, and apply critical thinking when engaging with AI, preserving their autonomy and responsibility in decision-making.
- Collaborative effort key to unlocking the potential of AI
Southeast Asia is a region marked by rapid change, diversity, and inherent excitement. Adapting technology and governance to this dynamic landscape is imperative, and not only for AI: it applies to technology and tech-enabled business models as a whole. A concerted, collaborative effort involving regional bodies like ASEAN, national governments, industry stakeholders, and communities is essential to unlock the potential of AI safely and inclusively. Coordinated agreements within Southeast Asia on fundamental issues such as safety, transparency, and fairness are vital for effective governance. More importantly, transnational alignment, exemplified by frameworks such as the ASEAN Digital Economy Framework Agreement (DEFA), is critical to establishing a minimum global standard for AI safety and a foundational framework for governing the most perilous AI use cases, ensuring that these systems remain under human control.
Watch the full panel discussion here: