Balancing Connectivity and Youth Mental Well-Being: Challenges of Social Media
By Foo Xin Yi, Kelly, Tech for Good Institute
As Southeast Asia undergoes rapid digital transformation, technology is becoming deeply entrenched in society and everyday life. These digital transformation efforts have raised internet penetration rates across the region, driving a rise in social media usage as more individuals gain access to online platforms. As services, communication and entertainment move online, smartphones and constant connectivity allow individuals to access social media at any time, and they increasingly turn to it for interaction and information, embedding it further into their daily routines. Notably, Singapore has the highest social media penetration rate in Southeast Asia, at 85%.
As a result, as mobile devices, the Internet and social media platforms become incorporated into everyday functions, individuals are increasingly exposed to the dangers these technologies pose. Most pertinently, these dangers manifest as harmful online content on social media platforms. Earlier this year, the Ministry of Digital Development and Information (MDDI) released survey findings showing that two-thirds (66%) of respondents had encountered harmful online content on social media platforms, with cyberbullying accounting for the majority of such content.
The impact of social media’s rising ubiquity and the harmful online content it fuels is strikingly clear. A recent study by the Institute of Mental Health (IMH) found that one in three young people in Singapore have experienced ‘severe or extremely severe symptoms of depression, anxiety and/or stress.’ The study further identified excessive use of social media and cyberbullying as key contributing factors. With digitalisation initiatives continuing to accelerate, the implications for youths and their mental health are unsettling, as exposure to social media will only increase. If social media platforms are left under-regulated, this vulnerable community will remain susceptible to harmful online content, exacerbating the damage to youths’ mental health. Equity is also at stake, as the negative impact of social media usage disproportionately affects females: research by the World Health Organisation (WHO) has shown that problematic social media use is more prevalent among females than among their male counterparts. This impedes the sustainable and equitable growth of the digital society, ultimately undermining the well-being of future generations and limiting their potential to thrive in an increasingly interconnected world.
However, it is important to clarify that social media use is not inherently detrimental to youths’ mental health; excessive use and under-regulated platforms are. Social media can also confer benefits, providing youths with opportunities for learning, serving as an invaluable tool to combat isolation, and acting as a medium for self-expression.
Current Efforts to Mitigate Online Harms
It is evident that the Singaporean government is aware of the need to address this pressing concern as it proceeds with the second phase of digital transformation. In the newly published Smart Nation 2.0 report, policymakers acknowledge the adverse effects that online harms such as cyberbullying have on the safety and mental well-being of youths, and unveil a series of strategies to tackle the growing problem. Most prominently, the report announced the establishment of a new agency to help victims curb online harms promptly. Victims of such harms, including cyberbullying, will be able to seek assistance from the agency, which can then act on their behalf to direct perpetrators and social media platforms to stem the source of the harm. This circumvents long-drawn and tedious court proceedings while achieving the same outcome, removing damaging content quickly and permanently. Alongside this initiative, a new law will be enacted to improve victims’ ability to pursue civil remedies against perpetrators of online harms, with more details expected in the near future.
Other existing regulatory efforts that will work in tandem with the new Smart Nation 2.0 initiatives include the Online Safety (Miscellaneous Amendments) Act and the Code of Practice for Online Safety introduced by the Infocomm Media Development Authority (IMDA). The former empowers IMDA to order social media platforms to remove harmful content, such as material related to self-harm and child exploitation, with penalties for non-compliance, while the latter mandates that designated social media services enhance online safety. Similarly, the Online Criminal Harms Act broadens regulatory tools to combat online crimes, while the Protection from Harassment Act (POHA) enables victims to seek protection orders against harassment in both online and offline environments.
Aside from the abovementioned public sector actions, the private sector, especially social media companies, is also working to mitigate the harm done to youths’ mental health. Companies such as Meta and TikTok moderate abusive content online and surface support resources, including hotlines, upon users’ request or automatically in response to search queries for negative content. They also equip users with a range of tools to control their social media experience, such as blocking, muting and restricting accounts, to protect themselves against harmful content.
Artificial Intelligence (AI) is also being harnessed to confront online harms. Empathly, an AI tool developed in Singapore, first identifies and categorises hateful comments, then sends users a behavioural nudge before they publish, prompting them to rethink the comment. This acts as a preemptive measure against harmful content, curbing hate speech and building safer online spaces for users.
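To make this pre-publication flow concrete, here is a minimal Python sketch of a classify-then-nudge step. It is an assumption-heavy illustration only: the function names (score_toxicity, submit_comment), the keyword list and the threshold are invented for this example, and Empathly’s actual model and thresholds are not public.

```python
# Minimal sketch of a pre-publication "nudge" flow, loosely modelled on the behaviour
# described above. The keyword scorer and threshold are illustrative placeholders,
# not Empathly's actual (unpublished) model.

FLAGGED_TERMS = {"idiot", "loser", "stupid"}   # stand-in vocabulary for a trained classifier
NUDGE_THRESHOLD = 0.1                          # assumed cut-off for showing a nudge


def score_toxicity(comment: str) -> float:
    """Crude placeholder scorer: fraction of words matching flagged terms."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in FLAGGED_TERMS) / len(words)


def submit_comment(comment: str, confirmed: bool = False) -> str:
    """Publish the comment, or return a nudge asking the user to reconsider first."""
    if score_toxicity(comment) >= NUDGE_THRESHOLD and not confirmed:
        return "NUDGE: this may hurt someone - edit it, or confirm to post anyway"
    return "PUBLISHED"


if __name__ == "__main__":
    print(submit_comment("you are such an idiot"))        # nudged before posting
    print(submit_comment("you are such an idiot", True))  # user confirms, comment is posted
    print(submit_comment("thanks for sharing this!"))     # benign comment published directly
```

In a real system the placeholder scorer would be a trained classifier and the nudge wording would be carefully tested, but the order of operations, score the draft, nudge the user, then publish, is the essence of the preemptive approach described above.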
Future Pathways for a Safer Digital Space
While a myriad of initiatives exist to ameliorate the situation, more can be done. In particular, social media platforms need to assume greater accountability for their impact on users.
Current initiatives are often inadequate — although many platforms have implemented measures such as reporting systems and tools for users to filter out negative content on their own, these are often reactive rather than preemptive. Despite platforms utilising AI to moderate content, moderation of harmful content remains a challenge, as users continuously discover new methods to bypass search filters. Social media companies should prioritise and significantly increase their investment in content moderation to create a safer online environment, be it through developing more thorough automated content moderation models, or investing in the hiring and training of moderators. Additionally, social media platforms need to better enforce age restrictions and create a robust regulatory framework that supports the responsible design of digital tools for young users.
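To illustrate why filter evasion keeps moderation difficult, the toy Python sketch below shows a keyword filter missing an obfuscated term and a simple normalisation step recovering the match. All names here (naive_filter, normalised_filter, the one-word blocklist) are invented for this example and reflect no platform’s actual pipeline.

```python
# Toy illustration of filter evasion and normalisation; not any platform's real pipeline.
# Production moderation layers many more signals: ML classifiers, user reports, human review.

import re

BLOCKLIST = {"bully"}  # tiny illustrative blocklist

# A few common character substitutions used to dodge exact string matching.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})


def naive_filter(text: str) -> bool:
    """Flags text only when a blocklisted word appears verbatim."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


def normalised_filter(text: str) -> bool:
    """Undoes simple obfuscation (leet substitutions, inserted spaces/punctuation) before matching."""
    cleaned = text.lower().translate(SUBSTITUTIONS)
    cleaned = re.sub(r"[^a-z]", "", cleaned)  # strip spacing and symbols used to break words apart
    return any(term in cleaned for term in BLOCKLIST)


if __name__ == "__main__":
    for post in ("stop being a bu11y", "b u l l y behaviour again"):
        print(post, "| naive:", naive_filter(post), "| normalised:", normalised_filter(post))
```

Because evasion tactics evolve faster than any static rule set, this kind of normalisation only narrows the gap, which is why the heavier investment in learned moderation models and trained human moderators argued for above remains necessary.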
The step that the Singaporean government has taken towards better protecting youths from online harm, as outlined in the Smart Nation 2.0 report, is commendable. However, more can be done to safeguard youths. As part of the Smart Nation 2.0 initiative, S$120 million will be dedicated to promoting AI adoption, including digital training to hone Singaporeans’ digital skills. While digital literacy is valuable, similar emphasis should be placed on training youths in the responsible use of technology and social media. By equipping individuals not only with technical skills but also with an understanding of ethical online behaviour, we can help ensure that youths navigate the digital landscape safely and responsibly.
Public-private partnerships to address the issue of social media and the negative impact on youths’ mental health can also be further explored, whether it be through joint outreach programs to educate youths on safe social media practices, or through collaborative research involving social media companies and public health organisations to identify trends and design evidence-based strategies to mitigate risks. A notable example is the Inter-agency Taskforce on Mental Health and Well-being co-led by the Ministry of Health (MOH) and Ministry of Social and Family Development (MSF), which undertook public consultations to construct substantive guidelines on positive usage of social media for youths in light of the extensive presence of harmful online content.
By fostering a whole-of-society approach, stakeholders can create a multifaceted response that not only mitigates harm but also empowers young people to navigate social media responsibly and positively, enhancing their mental health and well-being.