Left to right: Siwage Dharma Negara, Senior Fellow, ISEAS – Yusof Ishak Institute; Professor Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation), National University of Singapore; Shivi Anand, Regional Public Affairs Manager, Grab.
By Shivi Anand, Regional Public Affairs Manager at Grab
Moderator, speaker and discussant:
- Siwage Dharma Negara, Senior Fellow, ISEAS – Yusof Ishak Institute
- Professor Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation), National University of Singapore
- Shivi Anand, Regional Public Affairs Manager, Grab
On 20 April 2023, the Tech for Good Institute and the ISEAS – Yusof Ishak Institute co-hosted a webinar to discuss “Artificial Intelligence: Challenges and Prospects of International Governance, and Implications for Southeast Asia”. Prof Simon Chesterman, David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore, founding Dean of NUS College, Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law, presented key findings from his new book We the Robots? Regulating Artificial Intelligence and the Limits of the Law. Shivi Anand, Regional Public Affairs Manager at Grab, provided industry views as discussant, and the session was moderated by Siwage Dharma Negara, Senior Fellow at ISEAS – Yusof Ishak Institute.
Key insights from the panel:
• How much transparency is too much transparency?
AI systems are often described as black boxes: some machine learning models in use today operate on self-generated rules that even their developers don’t fully understand. This raises concerns about whether the risks of these systems are fully understood and addressable.
Companies developing and deploying AI systems understand these concerns and are already adopting novel tools such as model cards and system cards to share information about how their AI systems function, as well as their intended use, impact, risks, and limitations. This kind of transparency ultimately promotes greater understanding and trust among users and the general public.
However, transparency shouldn’t simply mean providing large quantities of information for its own sake, as this is likely to induce user fatigue. Consider the analogy of consent fatigue in privacy agreements, which present pages of detail about a company’s data practices but are rarely read or absorbed by users, who click “I agree” almost instantaneously. It is essential to strike the right balance between meaningful information, which helps the user gain a sufficient understanding of the relevant system, and excessive information, which induces fatigue and confusion. At the same time, transparency obligations should avoid imposing unnecessary procedural burdens on companies.
• The case for international standards and alignment
Different countries have always taken different approaches to setting the law of the land, depending on their economic, social, cultural, and political contexts. AI governance and regulation are no different. There is already a wide spectrum of AI governance approaches across the world, guided by the values held by different jurisdictions:
- Singapore and the UK have taken a pro-innovation approach, where governance is principles-based and focused on the outcomes that AI systems produce rather than on the process or technology itself.
- The EU uses a fundamental-rights-driven, risk-proportionate approach, where AI systems are classified into risk levels (banned, high risk, and medium-low risk) with proportionate regulatory requirements applying to each level.
- China goes a step further by governing specific AI systems, e.g. recommendation algorithms, where users are provided unprecedented control over the system’s operation. It also recently introduced draft measures that require registration and a security assessment of generative AI models by the Cyberspace Administration of China before they can be released to the public.
In Southeast Asia, the expectation is that countries will largely be ‘rule-takers’. To ensure the availability of a solid set of baseline rules, there is a need to establish robust global standards for AI. Global coordination and agreement on core issues like safety, transparency, and fairness are essential for practical governance. Perhaps most importantly, transnational alignment is critical to set a minimum global standard for AI safety and to establish a baseline for meaningful governance of the most dangerous AI use cases, so that these systems remain under human control. It is worth noting, however, that rules and governance frameworks must avoid both over-regulation (which may drive businesses and innovation elsewhere) and under-regulation (which compromises user safety). Given the relatively early stage of AI development, regulation should not become a rush to regulate in pursuit of preventing unknown and potentially insignificant harms.
• How should companies handle AI governance?
This can be tackled across multiple fronts:
- Governance needs to be an effort suffused throughout the organisation. Adopting Responsible AI principles is perhaps the first step in establishing an organisation’s values. This should ideally spark the design of criteria and considerations for ethical AI development at each stage of model training, deployment, monitoring, and review, with relevant stakeholders throughout the organisation taking ownership of the outcomes.
- Responsible AI development requires a human-centric approach that considers the needs of the user, not just from an efficiency or optimisation perspective, but also from a fairness, transparency, control, and safety perspective.
- Companies need to be active participants in public-private-third sector dialogue about AI governance. The ethical development, use, and governance of AI systems is the responsibility of all stakeholders involved: (i) users and civil society, who should exercise agency in understanding how an AI system might affect them and vote with their wallets on what works best for them; (ii) governments, which establish rules and standards to ensure the safe use of AI technologies; (iii) academia and international agencies, which set global benchmarks; and (iv) companies, which develop and understand these technologies best.
- Collaboration between companies, governments, academia, and users can take the form of regulatory sandboxes and pilots to arrive at the most effective and efficient governance frameworks.