By: Dr. Eric J. W. Orlowski, Research Fellow in AI Governance, AI Singapore, The National University of Singapore
The Kafkaesque Dilemma
“Someone must have been telling lies about Josef K., he knew he had done nothing wrong but, one morning, he was arrested.”
– Opening line from Franz Kafka’s The Trial, 1925.
Franz Kafka masterfully illustrated the absurdity of bureaucratic systems in The Trial. The story of Josef K., a bank manager thrust into an incomprehensible legal process without ever knowing why, echoes through modern discussions on artificial intelligence (AI), and not just because it happens to be the centenary of Kafka’s death. Josef K.’s experience—his entanglement in a labyrinthine system and his eventual execution, accepted without complaint—resonates with contemporary concerns about opaque decision-making in AI systems.
Kafka’s exploration of powerlessness and confusion under a rigid system serves as a metaphor for the potential dangers posed by AI’s growing role in decision-making. As AI systems increasingly influence our lives, the need for transparency and explainability in these systems becomes vital. Surveillance scholar David Lyon highlighted this concern in 2010, noting that with a “hard-wired Kafka effect,” individuals may face life-altering decisions, such as denial of credit or boarding passes, without understanding the reason or knowing they were under scrutiny in the first place.
In a world shaped by AI, Lyon’s insights seem more prescient than ever. As AI continues to make automated decisions, questions of transparency and explainability have surged to the forefront of public discourse. Without a clear framework, we risk recreating Kafka’s world—where the system governs us without accountability or recourse.
Transparency vs. Explainability: What’s the Difference?
Modern AI systems, particularly those based on deep learning algorithms, present unique challenges to transparency and explainability. The sheer complexity of these systems often makes it difficult to provide users with meaningful insights. Offering complete technical transparency might overwhelm users with information, most of which is incomprehensible to those without a deep technical background.
At its core, transparency is about accountability. Developers must ensure that users are aware when interacting with AI systems, understand which data the AI uses, and know the system’s limitations. However, it’s important to recognise that transparency to developers or regulators differs from transparency to the general public. Many members of the public lack the technical expertise to interpret the inner workings of AI. This tension between a technocratic elite and a lay public adds complexity to discussions of AI governance, as public policy must address this divide to be effective.
But as Lyon suggests, transparency alone solves only half the problem. Merely knowing that a system is making decisions about us, or having a vague idea of what those decisions might be, offers little comfort. Without specific, understandable explanations, transparency can become an empty gesture, one that sidesteps accountability rather than addressing it.
The Limits of Explainability: A Technical and Ethical Dilemma
Explainability, while necessary, faces inherent challenges. In certain types of AI, such as neural networks, decisions are often too complex to be fully explained in a way that makes sense to humans. While tools like SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations) attempt to open the black box, they remain imperfect. Their explanations can still leave key questions unanswered, raising doubts about whether AI decisions can ever be fully understood.
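To make this concrete, the sketch below shows roughly how a tool such as SHAP is applied in practice. It is a minimal illustration only: the credit-scoring framing, the feature names, and the synthetic data are hypothetical stand-ins invented for this example, not taken from any real system.

```python
# Minimal sketch of post-hoc explanation with SHAP.
# The credit-scoring scenario, feature names, and data are hypothetical,
# invented purely to illustrate the technique discussed above.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for an applicant dataset (hypothetical features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_years", "prior_defaults"]
X = pd.DataFrame(X, columns=feature_names)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns Shapley-value attributions: how much each feature
# pushed this applicant's score above or below the model's average output.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]  # one automated "decision" to explain
attributions = np.ravel(explainer.shap_values(applicant))

for name, value in zip(feature_names, attributions):
    print(f"{name:>22s}: {value:+.3f}")

# The output is a local, model-relative accounting of influence, not a
# causal or legally meaningful reason for the decision -- the gap between
# technical explainability and human understanding described above.
```

Even then, these numbers are attributions relative to the model’s own average output; turning them into reasons an affected person could understand, contest, or act upon remains a human and institutional task.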
This predicament forces us to confront trade-offs. In high-stakes situations—such as medical diagnostics or legal judgments—the need for explainability is paramount. However, increased explainability often comes at the cost of accuracy. Simpler, more interpretable models may not be as precise. This presents both a technical and ethical challenge: Should we prioritise human understanding over AI performance?
This dilemma underscores the importance of finding a balanced regulatory framework, what we Swedes would call ‘lagom’—regulation that is just right, neither too restrictive nor too permissive. The European Union’s AI Act is one of the first major attempts to strike this balance. By adopting a risk-based approach, the AI Act mandates greater transparency and explainability for high-risk AI systems, while allowing more leniency in lower-stakes contexts.
Rethinking Our Relationship with AI and Power
It is clear that AI transparency and explainability are not purely technical issues. While the technical aspects are challenging, the real test lies in how society navigates the ethical and social implications. As AI continues to shape the future, we must not only ask how these systems work, but also for whom they work and to what end.
If we fail to address these deeper questions, we risk following in the footsteps of Josef K., trapped in an automated system that operates without clear purpose or accountability. To avoid this, we must strive for AI systems that are both transparent and explainable, not just in technical terms, but in ways that serve the broader interests of society. Only then can we hope to prevent the Kafkaesque scenario from becoming our reality.
About the writer
Dr. Eric J. W. Orlowski is a Research Fellow with AI Singapore’s Governance team, focusing on how AI adoption strategies are translated from theory to practice. He holds a Ph.D. in Social & Cultural Anthropology from University College London and has been researching the social impacts of emerging technologies since 2018.
The views and recommendations expressed in this article are solely those of the author(s) and do not necessarily reflect the views and position of the Tech for Good Institute.