Criminalising Offensive Speech Made by AI Chatbots in Singapore

In collaboration with LawTech.Asia, this article examines criminal liability for offensive speech made by AI chatbots in Singapore and how a negligence-based framework may deter such harm without stifling innovation.

The less control one has over autonomous algorithms, “the greater the need… to exercise care in the development and deployment of those algorithms”. This seems to be the position taken in recent court decisions dealing with nascent issues arising from the use of automated systems. 

What happens if an autonomous chatbot starts making offensive speech? To whom should we then look?

Where the burden of criminal liability is concerned, to whom the finger of blame should point is a tricky question. Holding bots liable would achieve no deterrent effect, since they have neither agency nor the capacity for moral culpability; criminal law is, after all, meant to govern human conduct. The only appropriate candidates for responsibility over any offensive speech made by an autonomous chatbot are therefore its human actors: its developers, deployers, third-party users, or some combination of them.

To strike a fair balance, it may be worth considering a negligence-based framework under which a duty of care is imposed on the human actors.

Responsibility for offensive speech: user or developer?

Intention-based liability is either difficult to establish or, when construed too widely, too onerous on the human actors. In contrast, a negligence-based regime imposes a duty of care on a person, such that any offensive speech caused by a rogue chatbot may be attributed to negligent human conduct.

The next question is the standard of negligence to which human actors should be held.

Courts might apply existing criminal negligence standards or create new ones, but this approach is inherently uncertain and could inadvertently stifle innovation in AI technology. An alternative solution proposed in the 2021 Report on Criminal Liability, Robotics, and AI Systems is to define the standard of conduct more precisely in sector- or technology-specific legislation. However, this approach also faces challenges due to the impracticality of legislating standards for all possible AI applications and risks.

In the absence of clear negligence standards, we can turn to the common law for guidance. Precedents such as Ng Keng Yong v PP demonstrate that the standard for the appropriate duty of care is consistent across criminal and civil cases. Hence, one may be able to draw on existing civil cases dealing with negligence-based offences.

In the specific context of a human developer or deployer of an autonomous chatbot, a duty may be imposed on them to ensure that the chatbot is not reasonably likely to turn rogue and make offensive speech. The standard of care should be aligned with the current state of technology, taking into account the safeguards available before deployment.

Currently, developers may rely on AI-powered filters trained with deep learning technology to detect offensive speech. These filters can remove inappropriate language or redirect the conversation with appropriate responses. However, a shortcoming of this technology is that the meaning of words often hinges on intricate contextual cues and on the diverse interpretations rooted in different cultures, presenting a formidable challenge for AI systems. To illustrate the extent of this complexity, consider Southeast Asia, where a tapestry of over 1,000 languages is spoken among more than 100 ethnic groups. Within the Indonesian language alone, there exist 13 distinct expressions for the word “I”.
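To make the filtering step concrete, the sketch below shows one way a deployer might gate a chatbot's candidate reply behind an offensiveness classifier and redirect the conversation when the score is too high. It is a minimal illustration only: the classifier, threshold, and fallback reply are hypothetical placeholders rather than references to any particular product, and a production filter would use a trained model and grapple with the contextual nuances described above.

```python
# Minimal sketch of an output-side moderation gate for a chatbot.
# All names here (score_offensiveness, FALLBACK_REPLY, THRESHOLD) are
# illustrative placeholders, not references to any particular product or API.

THRESHOLD = 0.8  # tolerance for the classifier's offensiveness score
FALLBACK_REPLY = "Sorry, I can't continue with that. Let's talk about something else."


def score_offensiveness(text: str) -> float:
    """Placeholder classifier.

    A real deployment would call a model trained to detect offensive speech
    (for example, a fine-tuned transformer); a trivial keyword check is used
    here only to keep the sketch runnable.
    """
    blocklist = {"slur1", "slur2"}  # stand-ins for a real lexicon or model
    return 1.0 if blocklist & set(text.lower().split()) else 0.0


def moderate(candidate_reply: str) -> str:
    """Release the reply as-is, or redirect the conversation if it is flagged."""
    if score_offensiveness(candidate_reply) >= THRESHOLD:
        return FALLBACK_REPLY
    return candidate_reply


if __name__ == "__main__":
    print(moderate("Here is a polite, helpful answer."))  # released unchanged
    print(moderate("a reply that contains slur1"))        # redirected
```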

A negligence-based framework 

There is increasing research into “explainable AI” and other “traceability-enhancing” tools which provide insight into why AI systems, like chatbots, make certain decisions. As these tools become more advanced, they may also fall within a developer’s duty of care. 
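As a rough illustration of what such traceability tooling might look like at its simplest, the sketch below records each prompt, candidate reply, and moderation decision so that a developer could later reconstruct why a given response was released or suppressed. The record structure and function names are assumptions made for illustration, not an established “explainable AI” API.

```python
import json
import time
from dataclasses import asdict, dataclass


# Illustrative audit record; the field names are assumptions, not a standard schema.
@dataclass
class ModerationRecord:
    timestamp: float
    prompt: str
    candidate_reply: str
    offensiveness_score: float
    released: bool


def log_decision(record: ModerationRecord, path: str = "moderation_audit.jsonl") -> None:
    """Append the decision to a JSON Lines audit trail for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: record why a candidate reply was suppressed rather than released.
log_decision(ModerationRecord(
    timestamp=time.time(),
    prompt="example user prompt",
    candidate_reply="example flagged reply",
    offensiveness_score=0.93,
    released=False,
))
```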

When it comes to users intentionally making a chatbot say offensive things, there is no doubt about their criminal liability. The situation gets more complex, however, when users unintentionally induce a chatbot to say something offensive. In such cases, they should not be labelled negligent, as they could not have predicted the unpredictable ways a chatbot might respond across its interactions with numerous users; holding them liable might also discourage people from using chatbots altogether.

Over time, we may see the development of a more robust framework for determining responsibility. This framework could eventually be established as statutory law, as we become more familiar with AI’s behaviour patterns and what we can reasonably expect from individuals when they interact with AI systems. In the future, there might be clearer guidelines for determining who should be held responsible for AI’s behaviour, especially in cases where it’s not intentional misconduct.

Mitigation of risks

Apart from criminal sanctions, regulatory sanctions may sometimes be more appropriate in plugging the regulatory gap without imposing excessive burdens. These may include suspension of licences, civil financial penalties, improvement notices, or withdrawals of approval.

It is important to recognise that the market can also play a role in self-regulation. Companies have a strong incentive to exercise caution when deploying chatbots to avoid tarnishing their reputation. However, relying solely on self-regulation may not be sufficient because corrective actions often come too late, after irreversible harm has been caused by offensive speech.

As AI systems continue to evolve and become more complex, their operation may become more technically difficult to comprehend. Nevertheless, there is optimism that Singapore’s robust regulatory framework can strike a balance between deterring harmful behaviour and avoiding stifling innovation. It can adapt to address new developments effectively while safeguarding against potential risks.

In conclusion, the evolving landscape of autonomous algorithms and AI chatbots presents a pressing need for careful development and deployment. When tackling offensive speech from autonomous chatbots, the critical task is identifying responsible parties. As criminal liability does not apply to morally-neutral bots, attention naturally turns to their human counterparts – developers, deployers, and users. As AI evolves, it is hoped that a more robust responsibility framework may emerge, guided by statutory law and clearer guidelines. 

The views and recommendations expressed in this article are solely of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute. 

This article was edited and first published on the LawTech.Asia blog on 22 August 2022.

About LawTech.Asia

LawTech.Asia is a virtual publication driving thought leadership on law and technology in Asia. Established in 2016 by an all-volunteer group of young lawyers and law students passionate about deepening trust between the legal profession and technology, LawTech.Asia has gone on to serve as consultant and partner to organisations such as the Supreme Court of Singapore, the Commonwealth, and the Singapore Academy of Law.

