AI and Robotics: Challenges and Prospects of International Governance and Implications for SEA

This article draws upon insights from a webinar co-hosted by ISEAS – Yusof Ishak Institute and the Tech for Good Institute, titled “Artificial Intelligence: Challenges and Prospects of International Governance, and Implications for Southeast Asia”.

Moderator, speaker and discussant (left to right): Siwage Dharma Negara, Senior Fellow, ISEAS – Yusof Ishak Institute; Professor Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation), National University of Singapore; Shivi Anand, Regional Public Affairs Manager, Grab.

By Shivi Anand, Regional Public Affairs Manager at Grab

On 20 April 2023, the Tech for Good Institute and the ISEAS – Yusof Ishak Institute co-hosted a webinar to discuss “Artificial Intelligence: Challenges and Prospects of International Governance, and Implications for Southeast Asia”. Prof Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore, founding Dean of NUS College, Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law, presented key findings from his new book We, the Robots? Regulating Artificial Intelligence and the Limits of the Law. Shivi Anand, Regional Public Affairs Manager at Grab, provided industry views as discussant, and the session was moderated by Siwage Dharma Negara, Senior Fellow at ISEAS – Yusof Ishak Institute.


Key insights from the panel:

How much transparency is too much transparency?

AI systems are often described as black boxes. Many machine learning models in use today operate on self-generated rules that even their developers do not fully understand, which raises concerns about whether the risks of these systems are fully understood and addressable.

Companies developing and deploying AI systems understand these concerns and are already adopting tools such as model cards and system cards to share information about how an AI system functions, as well as its intended use, impact, risks, and limitations. This kind of transparency ultimately promotes greater understanding and trust among users and the general public.
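
To make the idea concrete, a model card is essentially a structured, human-readable summary of what a system is for, how it was evaluated, and where it falls short. The Python sketch below is a minimal illustration only; the field names and the example system are hypothetical, not any particular company's format.

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        """Illustrative model card: a structured summary of an AI
        system's purpose, evaluation results and known limitations."""
        model_name: str
        intended_use: str
        out_of_scope_uses: list[str]
        training_data_summary: str
        evaluation_metrics: dict[str, float]
        known_limitations: list[str]

    # Hypothetical example: a demand-forecasting model.
    card = ModelCard(
        model_name="demand-forecaster-v2",
        intended_use="Forecast hourly ride demand per city zone.",
        out_of_scope_uses=["Profiling individual riders"],
        training_data_summary="Aggregated, anonymised trip counts, 2019-2022.",
        evaluation_metrics={"mean_absolute_error": 4.2},
        known_limitations=["Accuracy degrades during public holidays."],
    )

Even a brief card like this, written in plain language, gives users a basis for trust without exposing proprietary detail.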

However, transparency should not mean providing large quantities of information for its own sake, which mostly induces user fatigue. Consider the analogy of consent fatigue in privacy agreements: pages of detail about a company’s data practices are rarely read or absorbed by users, who click “I agree” almost instantaneously. The goal is to strike the right balance between meaningful information, which helps users gain a sufficient understanding of the relevant system, and excessive information, which breeds fatigue and confusion. At the same time, transparency requirements should avoid imposing unnecessary procedural burdens on companies.

The case for international standards and alignment

Different countries have always taken different approaches to setting the law of the land, depending on their economic, social, cultural, and political contexts. AI governance and regulation are no different. A wide spectrum of AI governance approaches already exists across the world, guided by the values held in different jurisdictions:

    • Singapore and the UK have taken a pro-innovation approach in which governance is principles-based and focused on the outcomes AI systems produce rather than on the underlying process or technology.
    • The EU takes a fundamental-rights-driven, risk-proportionate approach in which AI systems are classified into risk tiers (banned, high risk, and medium-low risk), with regulatory requirements proportionate to each tier (see the sketch after this list).
    • China goes a step further by governing specific AI systems, e.g. recommendation algorithms, over which users are given unprecedented control. It has also introduced draft rules requiring generative AI models to be registered with, and pass a security assessment by, the Cyberspace Administration of China before they can be released to the public.
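
The risk-proportionate model can be pictured as a classification step that gates the obligations applied to a system. The following Python sketch is purely illustrative: the tier names follow the taxonomy above, while the example systems and obligations are simplified assumptions, not the text of any regulation.

    from enum import Enum

    class RiskTier(Enum):
        BANNED = "banned"          # e.g. social scoring of citizens
        HIGH = "high"              # e.g. hiring or credit-scoring tools
        MEDIUM_LOW = "medium-low"  # e.g. chatbots, spam filters

    # Obligations scale with the assessed tier (simplified for illustration).
    OBLIGATIONS = {
        RiskTier.BANNED: ["prohibited from the market"],
        RiskTier.HIGH: ["conformity assessment", "human oversight", "audit logging"],
        RiskTier.MEDIUM_LOW: ["transparency notice to users"],
    }

    def requirements_for(tier: RiskTier) -> list[str]:
        """Return the obligations proportionate to a system's risk tier."""
        return OBLIGATIONS[tier]

    print(requirements_for(RiskTier.HIGH))
    # ['conformity assessment', 'human oversight', 'audit logging']

The point of the sketch is simply that obligations attach to the assessed risk tier, not to the technology as such.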

In Southeast Asia, the expectation is that countries will largely be ‘rule-takers’. To ensure a solid set of baseline rules is available, robust global standards for AI need to be established. Global coordination and agreement on core issues such as safety, transparency, and fairness are essential for practical governance. Perhaps most importantly, transnational alignment is critical to set a minimum global standard for AI safety and a baseline for meaningful governance of the most dangerous AI use cases, so that these systems remain under human control.

That said, rules and governance frameworks must avoid both over-regulation, which may drive businesses and innovation elsewhere, and under-regulation, which endangers user safety. Given the relatively early stage of AI development, regulation should not become a race to pre-empt unknown and potentially insignificant harms.

How should companies handle AI governance?

This can be tackled across multiple fronts:

    • Governance needs to be an effort suffused throughout the organisation. Adopting Responsible AI principles is a natural first step in establishing an organisation’s values. These principles should then shape the criteria and considerations for ethical AI development at each stage of model training, deployment, monitoring, and review, with relevant stakeholders across the organisation taking ownership of the outcomes.
    • Responsible AI development requires a human-centric approach that considers the needs of the user, not just from an efficiency or optimisation perspective, but also from a fairness, transparency, control, and safety perspective.
    • Companies need to be active participants in dialogue among the public, private and third sectors about AI governance. The ethical development, use, and governance of AI systems is the responsibility of all stakeholders involved: (i) users and civil society, who should exercise agency in understanding how an AI system might affect them and vote with their wallets for what works best for them; (ii) governments, which establish rules and standards to ensure the safe use of AI technologies; (iii) academia and international agencies, which set global benchmarks; and (iv) companies, which develop and understand these technologies best.
      • Collaboration between companies, government, academia, and users can take the form of regulatory sandboxes and pilots to arrive at the most effective and efficient governance frameworks.

