
[TFGI] To start, can you tell us a bit more about SG Her Empowerment and your role within the organisation?
Founded in 2022, SHE is a non-profit organisation focused on empowering women and girls through impactful community engagement and partnerships. Our vision is a Singapore where women and men are equals in the home, workplace, and community. Regionally and globally engaged, SHE represents Singapore’s perspectives on pressing gender equality issues to a wider audience.
A key focus for us is online harms. We established Singapore’s first support centre for victims of online harms, SHECARES@SCWO, which offers a holistic range of services including free counselling sessions and pro bono legal clinics, working with the major Internet platforms and the authorities to remove harmful content and obtain protection orders.
I am the Chief Operating Officer (COO), so I oversee the day-to-day operations at SHE and work closely with our leadership team to execute our ideas and implement our plans. I also make sure the team is taken care of!
[TFGI] As AI technology evolves rapidly, we’re seeing a gap in regulation, leading to the rise of online harms such as deepfakes and cyberbullying. In your opinion, how has the misuse of AI contributed to the amplification of misogyny in digital spaces?
For all the good it can do, AI has also been abused by bad actors to enable the mass creation and dissemination of harmful content targeting women. Deepfake technology, for example, has been disproportionately weaponised against women, with manipulated intimate images being used for harassment, blackmail, and reputational harm.
AI-driven recommendation algorithms can also perpetuate gender biases by amplifying misogynistic content, normalising harmful narratives, and increasing the visibility of extremist views. Additionally, AI-powered chatbots and social media bots have been exploited to spread hate speech and cyberbully women at scale. The lack of robust safeguards in AI development and deployment has made it easier for bad actors to target women while remaining anonymous, exacerbating the challenges of online safety.
[TFGI] Recent statistics from the SHECARES@SCWO Centre (Singapore’s first support centre dedicated to online harm victims) show that 90% of online harm victims are women, with half of them being 35 years old or younger. What legal frameworks do you think need to be strengthened or introduced to better regulate AI misuse, particularly in cases affecting these women?
Singapore is already a first mover in updating our legislation to tackle online harms. The Online Safety Act and Online Criminal Harms Act were significant legal moves that set a high bar for the standards Singapore society should live by. However, we need to do more, and more quickly, to keep up with how rapidly technology is evolving.
Recently, the government announced that it is setting up a new agency for online safety and assurance, which sends a clear signal that as a society, we cannot condone harms in our online spaces. Among the new legal reforms proposed, there are efforts to address the misuse of inauthentic material, such as deepfakes, as a distinct class of harm.
Laws alone cannot eradicate a social ill. We need to establish multi-stakeholder partnerships between policymakers, law enforcement, tech companies, and civil society groups such as ours to address evolving AI threats more effectively and build healthier norms of engagement and trust online.
[TFGI] At the 2025 SHE Symposium, the topic of online harms, including misogyny, was discussed, highlighting how these issues have become increasingly normalised in digital spaces. In your view, how can we challenge these deeply entrenched societal norms and address the culture of harm, both online and offline?
I feel that it really starts with education from a young age. We should introduce digital literacy and gender sensitivity programmes in schools and later on in workplaces, to help individuals recognise and challenge misogyny whether online or offline. We should also promote active bystander intervention so people can call out and report harms when they witness them.
Additionally, we should support initiatives that elevate and amplify women’s voices in digital and policy spaces, ensuring women have the agency to shape and support online safety discussions. These women will also serve as role models for the next generation, inspiring them to face these fast-evolving issues head on.
[TFGI] What specific measures do you think should be implemented to uphold higher ethical standards in AI development and deployment?
I am not an expert in AI tools and their development, but I would think that some basic governance frameworks must be in place. These would include regular and frequent auditing to detect and mitigate gender biases in datasets and algorithms, guidelines for both developers and users that promote fairness, accountability, and transparency, and platform features that allow users to report and block harmful content.
AI cannot fully replace humans, and the human touch must remain in place when building these platforms. Development teams must include diverse voices, particularly women and vulnerable groups, to prevent biases, and human oversight cannot be completely absent, especially when it involves moderating harmful content.
[TFGI] In your view, what role should policymakers play in ensuring accountability for the misuse of AI technologies?
Policymakers play a critical role by implementing legal frameworks that clearly define liability for harms and hold both perpetrators and platforms accountable. They also create enforcement mechanisms, such as dedicated agencies, to oversee AI ethics compliance and investigate violations. Policymakers also need to work with tech platforms on establishing and maintaining transparency, particularly in content moderation and response times to user reports.
Promoting international cooperation is another crucial step for policymakers. This involves sharing resources and information, as well as collaborating with global counterparts to establish governance standards that address cross-border harms.
[TFGI] Finally, how do you think the proactive involvement of women in tech and leadership roles can influence AI development to prioritise gender inclusivity and online safety?
Women bring diverse perspectives that help identify and mitigate biases in AI models, ensuring that technology does not reinforce existing inequalities. With more women in decision-making roles, AI systems can be designed with built-in safeguards against gendered online harms and supported by initiatives focused on inclusive innovation. Encouraging more women to enter and thrive in AI-related fields will help create a digital ecosystem that values fairness, safety, and equal representation.
About the writer
Kay Lii How is the Chief Operating Officer (COO) at SG Her Empowerment (SHE). As COO, she drives operational excellence, oversees strategic planning, and ensures the effective execution of organisational goals. In addition, she leads the programmes team and manages a suite of initiatives aimed at addressing online harms and empowering women and girls. This includes facilitating impactful conversations and driving mindset change on societal attitudes towards the roles of women at home, in the workplace, and within the community.
About the organisation
SG Her Empowerment (SHE) is an independent non-profit organisation dedicated to empowering girls and women through community engagement and partnerships. By taking a data-driven approach, SHE identifies key issues and develops strategies for impactful change. The organisation collaborates with diverse stakeholders, including community groups, corporates, and government, to advocate for a more equal and inclusive society.