Artificial intelligence (AI) is rapidly growing and revolutionising industries globally; however, it poses risks and challenges in terms of both ethics and security. That is where the AI Verify Foundation comes into play. The foundation strives to create international guidelines for AI's responsible, secure, and innovative use. This article explores the foundation's role in establishing these standards to mitigate risks and align AI technology with societal values.
Unveiling AI Verify: A Comprehensive Testing Framework for Responsible AI Use
In response to the risks posed by AI, Singapore has officially established the AI Verify Foundation. The foundation aims to utilise the international open-source community to develop AI testing tools and promote ethical and secure AI implementation.
Seven premier members, namely IMDA, IBM, Aicadium, Google, Microsoft, Salesforce, and Red Hat, will guide the strategic direction and development of the AI Verify roadmap. The foundation will also have over 60 general members, such as Adobe, Meta, DBS, SenseTime, and Singapore Airlines.
IMDA has developed AI Verify to help organisations objectively demonstrate responsible AI practices through standardised tests. This involves evaluating AI system performance and providing documented evidence that the systems have been developed and deployed with processes designed to achieve the desired outcomes of internationally accepted AI governance principles. AI Verify comprises two components: the Testing Framework and the Toolkit.
The AI Verify testing framework evaluates 11 AI ethics principles across five focus areas using a combination of technical tests and process checks. It aligns with international AI governance frameworks such as those established by the European Union, the OECD, and Singapore.
The AI Verify toolkit, meanwhile, is a Minimum Viable Product (MVP) designed specifically for enterprise environments. It offers a range of technical tests to assess the fairness, explainability, and overall robustness of AI models, and it allows users to generate customised reports tailored to their AI system and compliance needs.
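To make the idea of a "technical test" concrete, here is a minimal sketch of the kind of fairness check such a toolkit might automate. It computes the demographic parity difference, a common fairness metric; the function, data, and workflow are illustrative assumptions, not the AI Verify toolkit's actual API.

```python
# Illustrative only: a minimal sketch of the kind of fairness test a
# toolkit like AI Verify automates. The function, data, and workflow
# here are hypothetical assumptions, NOT the toolkit's actual API.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A common fairness metric: values near 0 suggest the model makes
    positive decisions at similar rates for both groups.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy data: binary loan-approval predictions for eight applicants,
# split into two demographic groups (0 and 1).
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 for this toy data
```

In a full toolkit run, metrics like this would be computed across multiple protected attributes, combined with process checks, and compiled into the customised report mentioned above.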
The AI Verify Testing Framework
The framework organises its 11 principles under five focus areas:

Transparency on the use of AI and AI systems
- Transparency: the ability to responsibly disclose information about the impact of AI systems, allowing those affected to understand the outcomes.

Understanding how AI models reach decisions
- Explainability: the ability to evaluate the factors contributing to an AI system's decisions, outcomes, implications, and overall behaviour.
- Repeatability/Reproducibility: the system's ability to consistently perform its required functions under specified conditions for a specific duration, and for an independent party to produce the same results given similar inputs.

Safety & resilience of an AI system
- Safety: AI should not cause harm to humans, and measures should be implemented to mitigate any potential harm.
- Security: AI security safeguards AI systems, data, and infrastructure from unauthorised access, modification, destruction, or disruption. Secure AI systems ensure confidentiality, integrity, and availability through protective measures against unauthorised use.
- Robustness: AI systems must be resilient against malicious attacks and manipulation, remaining effective despite unexpected input.

Fairness / no unintended discrimination
- Fairness: AI should not lead to unintended and inappropriate discrimination against individuals or groups.
- Data Governance: the data used in AI systems should be properly governed, including effective data quality, lineage, and compliance practices.

Management and oversight of an AI system
- Accountability: AI systems should have clear organisational structures and designated actors responsible for their proper functioning.
- Human Agency & Oversight: it is crucial to have appropriate oversight and control measures, with humans involved at the necessary stages.
- Inclusive Growth, Societal & Environmental Well-being: this emphasises the potential of trustworthy AI to contribute to overall growth and prosperity for individuals, society, and the planet.
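As a further illustration, the sketch below shows how two of the principles above, repeatability and robustness, can be probed mechanically: the same input should always yield the same output, and small input perturbations should not flip a decision. The stand-in model, thresholds, and function names are hypothetical assumptions; this is not AI Verify code.

```python
# Illustrative only: hypothetical checks for two of the principles above,
# repeatability and robustness. The stand-in model is an assumption;
# this is NOT AI Verify code.
import numpy as np

def predict(x):
    # Stand-in model: a fixed linear score with a 0.5 decision threshold.
    weights = np.array([0.4, -0.2, 0.1])
    return int(x @ weights + 0.3 > 0.5)

def check_repeatability(x, runs=10):
    """Repeatability: the same input must yield the same output every run."""
    outputs = {predict(x) for _ in range(runs)}
    return len(outputs) == 1

def check_robustness(x, epsilon=0.01, trials=100, seed=0):
    """Robustness: tiny input perturbations should not flip the decision."""
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    flips = sum(
        predict(x + rng.uniform(-epsilon, epsilon, x.shape)) != baseline
        for _ in range(trials)
    )
    return flips == 0

x = np.array([1.0, 0.5, 2.0])
print("repeatable:", check_repeatability(x))          # True
print("robust to small noise:", check_robustness(x))  # True
```

A real evaluation would run such checks across a representative test set and pair them with the process checks described in the framework.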
The AI Verify framework thus sets comprehensive standards for AI ethics and regulation, giving companies benchmarks for responsible AI implementation. Test outputs from AI Verify also help them identify gaps and take corrective action, fostering stakeholder trust.
Current Limitations and Potential of AI Verify
While AI Verify is innovative, it is still in its early stages of development. It cannot yet test generative AI and large language models (LLMs), define AI ethical standards, or guarantee that any AI system tested will be completely free from risks or biases. The foundation acknowledges this gap in the examination and governance of AI systems, which is why launching AI Verify as an open-source platform is crucial: it gathers developers, industry leaders, and researchers to collaborate on advancing AI governance testing and evaluation.
Mrs Josephine Teo, Singapore's Minister for Communications and Information, recognises these risks and noted that the government cannot do it alone. Mrs Teo said: 'The private sector with their expertise can participate meaningfully to achieve these goals with us.' She added that Singapore's thinking on AI's development would be steered towards realising its benefits.
Global experts' help is critical to the success of this initiative, as their reach and scope could help garner support and buy-in. The World Economic Forum, EY, the National Institute of Standards and Technology, and the Chinese Academy of Sciences have said they will explore AI governance on a bigger scale. This global discussion will focus on how policymakers and researchers can collaboratively address shared AI challenges, and will critically evaluate different AI governance strategies in the context of generative AI.
Collaborative Regulation: The Key to Responsible AI Use
AI technology is undeniably transformative and continues to revolutionise industries worldwide, and small and medium-sized enterprises (SMEs) stand to benefit greatly from its application if executed correctly. According to a study titled "AI-Enabled Opportunities and Transformation Challenges for SMEs in the Post-pandemic Era: A Review and Research Agenda," new technologies such as AI can help SMEs improve operations, create business models, build business alliances, develop innovative strategies, reduce costs, and improve productivity. However, with these advantages come risks, including privacy concerns and social and ethical dilemmas.
These advancements, however, bring complex ethical and security issues to the fore, necessitating a careful, considered response. In this scenario, factoring in human considerations and adapting the technology to the local context is key.
While governments around the world are rushing to lay down rules around generative AI, cooperation is key. In the region, the ten member states of the Association of Southeast Asian Nations (ASEAN) are developing an ASEAN Guide on AI Governance and Ethics to establish "guardrails" for this rapidly advancing technology, with the guide set for release in early 2024. In this pursuit, the role of the AI Verify Foundation remains pivotal, boosting AI testing capabilities and assurance to meet the needs of companies and regulators globally.