AI on Trial: Navigating Explainability and Transparency

In this article, Dr. Eric J. W. Orlowski delves into two critical elements of AI governance: explainability and transparency. Drawing insights from AI Singapore’s report on the topic, he explores how these twin concepts can help ensure AI systems are accountable and comprehensible in an increasingly automated world.

By: Dr. Eric J. W. Orlowski, Research Fellow in AI Governance, AI Singapore, The National University of Singapore

The Kafkaesque Dilemma

“Someone must have been telling lies about Josef K., he knew he had done nothing wrong but, one morning, he was arrested.”
Opening line from Franz Kafka’s The Trial, published posthumously in 1925.

Franz Kafka masterfully illustrated the absurdity of bureaucratic systems in The Trial. The story of Josef K., a bank manager thrust into an incomprehensible legal process without ever knowing why, echoes through modern discussions on artificial intelligence (AI), and not just because it happens to be the centenary of Kafka’s death. Josef K.’s experience—his entanglement in a labyrinthine system and his eventual execution, never knowing the charge against him—resonates with contemporary concerns about opaque decision-making in AI systems.

Kafka’s exploration of powerlessness and confusion under a rigid system serves as a metaphor for the potential dangers posed by AI’s growing role in decision-making. As AI systems increasingly influence our lives, the need for transparency and explainability in these systems becomes vital. Surveillance scholar David Lyon highlighted this concern in 2010, noting that with a “hard-wired Kafka effect,” individuals may face life-altering decisions, such as denial of credit or boarding passes, without understanding the reason or knowing they were under scrutiny in the first place.

In a world shaped by AI, Lyon’s insights seem more prescient than ever. As AI continues to make automated decisions, questions of transparency and explainability have surged to the forefront of public discourse. Without a clear framework, we risk recreating Kafka’s world—where the system governs us without accountability or recourse.


Transparency vs. Explainability: What’s the Difference?

Modern AI systems, particularly those based on deep learning algorithms, present unique challenges to transparency and explainability. The sheer complexity of these systems often makes it difficult to provide users with meaningful insights. Offering complete technical transparency might overwhelm users with information, most of which is incomprehensible to those without a deep technical background.

At its core, transparency is about accountability. Developers must ensure that users are aware when they are interacting with AI systems, understand which data the AI uses, and know the system’s limitations. However, it’s important to recognise that transparency to developers or regulators differs from transparency to the general public, most of whom lack the technical expertise to interpret the inner workings of AI. This tension between a technocratic elite and a lay public adds complexity to discussions of AI governance, as public policy must address this divide to be effective.

But as Lyon suggests, transparency alone solves only half the problem. Merely knowing that a system is making decisions about us, or having a vague idea of what those decisions might be, offers little comfort. Without specific, understandable explanations, transparency can become an empty gesture, one that sidesteps accountability rather than addressing it.


The Limits of Explainability: A Technical and Ethical Dilemma

Explainability, while necessary, faces inherent challenges. In certain types of AI, such as neural networks, decisions are often too complex to be fully explained in a way that makes sense to humans. While tools like SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations) attempt to open the black box, they remain imperfect. Their explanations can still leave key questions unanswered, raising doubts about whether AI decisions can ever be fully understood.
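To make this concrete, below is a minimal sketch of how one of the tools mentioned above, SHAP, might be applied to an otherwise opaque model. The library calls are standard SHAP and scikit-learn usage, but the synthetic dataset and the choice of model are illustrative assumptions rather than anything drawn from AI Singapore’s report.

```python
# Minimal illustrative sketch: post-hoc explanation of a single prediction
# with SHAP. The dataset and model below are assumptions for demonstration.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model on synthetic data standing in for, say, a credit decision.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP assigns each input feature an additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explanation for a single decision

# The output is a per-feature attribution, not an account of the model's
# reasoning -- which is precisely the limitation discussed above.
print(shap_values)
```

Attributions like these indicate which inputs pushed a particular decision one way or the other, but they fall short of the reasoned justification a loan applicant or a defendant might reasonably expect.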

This predicament forces us to confront trade-offs. In high-stakes situations—such as medical diagnostics or legal judgments—the need for explainability is paramount. However, increased explainability often comes at the cost of accuracy. Simpler, more interpretable models may not be as precise. This presents both a technical and ethical challenge: Should we prioritise human understanding over AI performance?
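The trade-off can be illustrated with a small, hypothetical comparison: a shallow decision tree whose rules a person can read end to end, versus a boosted ensemble that typically scores higher but resists inspection. Again, the data, model choices, and any resulting accuracy gap are assumptions for illustration, not findings from the report.

```python
# Illustrative sketch of the interpretability-accuracy trade-off.
# Synthetic data and model choices are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: a depth-3 tree whose full rule set can be printed and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Opaque: a boosted ensemble of hundreds of trees, usually more accurate.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("interpretable tree accuracy:", tree.score(X_test, y_test))
print("opaque ensemble accuracy:   ", ensemble.score(X_test, y_test))
print(export_text(tree))  # the entire model fits in a handful of readable rules
```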

This dilemma underscores the importance of finding a balanced regulatory framework, what we Swedes would call ‘lagom’—regulation that is just right, neither too restrictive nor too permissive. The European Union’s AI Act is one of the first major attempts to strike this balance. By adopting a risk-based approach, the AI Act mandates greater transparency and explainability for high-risk AI systems, while allowing more leniency in lower-stakes contexts.


Rethinking Our Relationship with AI and Power

It is clear that AI transparency and explainability are not purely technical issues. While the technical aspects are challenging, the real test lies in how society navigates the ethical and social implications. As AI continues to shape the future, we must not only ask how these systems work, but also for whom they work and to what end.

If we fail to address these deeper questions, we risk following in the footsteps of Josef K., trapped in an automated system that operates without clear purpose or accountability. To avoid this, we must strive for AI systems that are both transparent and explainable, not just in technical terms, but in ways that serve the broader interests of society. Only then can we hope to prevent the Kafkaesque scenario from becoming our reality.



About the writer

Dr. Eric J. W. Orlowski is a Research Fellow with AI Singapore’s Governance team, focusing on how AI adoption strategies are translated from theory to practice. He holds a Ph.D. in Social & Cultural Anthropology from University College London and has been researching the social impacts of emerging technologies since 2018.


The views and recommendations expressed in this article are solely of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute.
