AI Verify Foundation: Shaping the AI landscape of tomorrow

In this article, the Tech For Good Institute highlights Singapore's launch of the AI Verify Foundation, which aims to shape international AI standards and promote ethical and secure AI implementation.

Artificial intelligence (AI) is rapidly growing and revolutionising industries globally; however, it poses risks and challenges in terms of both ethics and security. That is where the AI Verify Foundation comes into play. The foundation strives to create international guidelines for AI's responsible, secure, and innovative use. This article explores the foundation's role in establishing these standards to mitigate risks and steer AI technology towards social values.

Unveiling AI Verify: A Comprehensive Testing Framework for Responsible AI Use

In response to the risks posed by AI, Singapore has officially established the AI Verify Foundation. The foundation aims to utilise the international open-source community to develop AI testing tools and promote ethical and secure AI implementation.

Seven premier members (IMDA, IBM, Aicadium, Google, Microsoft, Salesforce, and Red Hat) will guide the strategic direction and development of the AI Verify roadmap. The foundation will also have over 60 general members, such as Adobe, Meta, DBS, SenseTime, and Singapore Airlines.

IMDA has developed AI Verify to assist organisations in objectively showcasing responsible AI practices through standardised tests. This involves evaluating AI system performance and providing documented evidence that the systems have been developed and deployed with processes designed to achieve responsible AI outcomes. AI Verify comprises two components: the Testing Framework and the Toolkit.

The AI Verify testing framework evaluates 11 AI ethics principles across five focus areas using a combination of technical tests and process checks. It aligns with international AI governance frameworks such as those established by the European Union, OECD, and Singapore.  

On the other hand, the AI Verify toolkit is a Minimum Viable Product (MVP) designed specifically for enterprise environments. It offers a range of technical tests to assess the fairness, explainability, and overall robustness of AI models. It even allows users to generate customised reports tailored to their AI system and compliance needs.
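To make the idea of a technical fairness test concrete, here is a minimal sketch of the kind of check such a toolkit might run: a demographic parity gap for a binary classifier, i.e. the difference in positive-prediction rates across groups. The function name and the toy data are illustrative assumptions, not the AI Verify toolkit's actual API.

```python
# Illustrative fairness check in the spirit of AI Verify's technical tests.
# NOTE: demographic_parity_gap is a hypothetical helper for this sketch,
# not a function from the AI Verify toolkit.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return max(rates.values()) - min(rates.values())

# Toy example: group "x" receives positive outcomes far more often than "y".
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
```

A real toolkit would pair such a metric with a configurable threshold and fold the result into a compliance report, but the core of any fairness test is a group-wise comparison like this one.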

The AI Verify Testing Framework

Transparency - the ability to responsibly disclose information about the impact of AI systems, allowing those affected to understand the outcomes.

Explainability - the ability to evaluate the factors contributing to an AI system's decisions, outcomes, implications, and overall behaviour.

Repeatability/Reproducibility - the system's ability to consistently perform its required functions under specified conditions for a specific duration, and for an independent party to produce the same results given similar inputs.

Safety - AI should not cause harm to humans, and measures should be implemented to mitigate any potential harm.

Security - AI security safeguards AI systems, data, and infrastructure from unauthorised access, modification, destruction, or disruption. Secure AI systems ensure confidentiality, integrity, and availability through protective measures against unauthorised use.

Robustness - AI systems must be resilient against malicious attacks and manipulation, remaining effective despite unexpected input.

Fairness - AI should not lead to unintended and inappropriate discrimination against individuals or groups.

Data Governance - the data used in AI systems should be properly governed, including through effective data quality practices, lineage, and compliance.

Accountability - AI systems should have clear organisational structures and designated actors responsible for their proper functioning.

Human Agency & Oversight - appropriate oversight and control measures should be in place, with humans involved at the necessary stages.

Inclusive Growth, Societal & Environmental Well-being - trustworthy AI should contribute to overall growth and prosperity for individuals, society, and the planet.
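The repeatability/reproducibility principle above lends itself to a simple automated check: rebuild and rerun the same deterministic model on the same inputs, and require that every run agrees. The sketch below is an assumption-laden illustration (the threshold "model" and helper names are invented for demonstration), not part of the AI Verify framework itself.

```python
# Illustrative repeatability check: independent runs of the same
# deterministic model on the same inputs must produce identical outputs.
# make_model and repeatability_check are hypothetical helpers for this sketch.

def make_model(threshold):
    """Return a deterministic scoring model: predict 1 if score >= threshold."""
    return lambda scores: [1 if s >= threshold else 0 for s in scores]

def repeatability_check(model_factory, inputs, runs=3):
    """Rebuild and rerun the model several times; all runs must agree exactly."""
    outputs = [model_factory()(inputs) for _ in range(runs)]
    return all(o == outputs[0] for o in outputs)

inputs = [0.2, 0.8, 0.5, 0.9]
ok = repeatability_check(lambda: make_model(0.5), inputs)
print("repeatable:", ok)  # True for this deterministic model
```

In practice, checks like this matter most for systems with stochastic components (random seeds, non-deterministic hardware), where "similar inputs, same results" has to be engineered in rather than assumed.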

The AI Verify framework sets comprehensive standards for AI ethics and regulation and gives companies benchmarks for responsible AI implementation. Test outputs from AI Verify also help them identify gaps and take necessary action, fostering stakeholder trust.

Current Limitations and Potential of AI Verify

While AI Verify is innovative, it is still in its early stages of development. It cannot yet test generative AI and large language models (LLMs), define AI ethical standards, or guarantee that any AI system tested will be completely free from risks or biases. The foundation acknowledges that a gap exists in the examination and governance of AI systems. Launching AI Verify as an open-source platform is therefore a crucial step, bringing together developers, industry leaders, and researchers to collaborate on advancing AI governance testing and evaluation.

Mrs Josephine Teo, Singapore's Minister for Communications and Information, recognises these risks and noted that the government cannot do it alone. Mrs Teo said: "The private sector with their expertise can participate meaningfully to achieve these goals with us." She added that Singapore's approach to AI development would steer towards harnessing its benefits.

Global experts' help is critical to the success of this initiative, as their reach and scope could help garner support and buy-in. The World Economic Forum, EY, the National Institute of Standards and Technology, and the Chinese Academy of Sciences have said they will explore AI governance on a bigger scale. This global discussion will focus on how policymakers and researchers can collaboratively address shared AI challenges, and will critically evaluate different AI governance strategies in the context of generative AI.

Collaborative Regulation: The Key to Responsible AI Use

AI technology is undeniably transformative and continues to revolutionise industries worldwide – SMEs stand to benefit greatly from its application if executed correctly. According to a study titled AI-Enabled Opportunities and Transformation Challenges for SMEs in the Post-pandemic Era: A Review and Research Agenda, new technologies such as AI can help SMEs improve operations, create business models, build business alliances, develop innovative strategies, reduce costs, and improve productivity.  However, with these advantages come risks, including privacy concerns and social and ethical dilemmas.

These advancements, however, bring complex ethical and security issues to the fore, necessitating a careful, considered response. In this scenario, accounting for human factors and adapting the technology to the local context is key.

While governments around the world are rushing to lay down rules around generative AI, cooperation is key. In the region, members of the ten-member Association of Southeast Asian Nations (ASEAN) are developing an ASEAN Guide on AI Governance and Ethics to establish "guardrails" for this rapidly advancing technology; the guide is set for release in early 2024. In this pursuit, the role of the AI Verify Foundation remains pivotal, boosting AI testing capabilities and assurance to meet the needs of companies and regulators globally.


Mouna Aouri

Programme Fellow

Mouna Aouri is an Institute Fellow at the Tech For Good Institute. As a social entrepreneur, impact investor, and engineer, her experience spans over two decades in the MENA region, South East Asia, and Japan. She is founder of Woomentum, a Singapore-based platform dedicated to supporting women entrepreneurs in APAC through skill development and access to growth capital through strategic collaborations with corporate entities, investors and government partners.

Dr Ming Tan

Founding Executive Director

Dr Ming Tan is founding Executive Director for the Tech for Good Institute, a non-profit founded to catalyse research and collaboration on social, economic and policy trends accelerated by the digital economy in Southeast Asia. She is concurrently a Senior Fellow at the Centre for Governance and Sustainability at the National University of Singapore and Advisor to the Founder of the COMO Group, a Singaporean portfolio of lifestyle companies operating in 15 countries worldwide.  Her research interests lie at the intersection of technology, business and society, including sustainability and innovation.


Ming was previously Managing Director of IPOS International, part of the Intellectual Property Office of Singapore, which supports Singapore’s future growth as a global innovation hub for intellectual property creation, commercialisation and management. Prior to joining the public sector, she was Head of Stewardship of the COMO Group and the founding Executive Director of COMO Foundation, a grantmaker focused on gender equity that has served over 47 million women and girls since 2003.


As a company director, she lends brand and strategic guidance to several companies within the COMO Group. In the non-profit, educational and government spheres, Ming serves as a Council Member of the Council for Board Diversity, on the boards of COMO Foundation and Singapore Network Information Centre (SGNIC), and on the Digital and Technology Advisory Panel for Esplanade–Theatres on the Bay, Singapore's national performing arts centre. She also chairs the Asia Advisory Board for Swiss hospitality business and management school EHL.


Ming was educated in Singapore, the United States, and England. She obtained her bachelor’s and master’s degrees from Stanford University and her doctorate from Oxford.