Navigating the AI Landscape: Insights on ethical challenges, trends, and future work

As artificial intelligence (AI) takes center stage with recent developments in generative AI, the Tech For Good Institute interviewed Su Lian Jye, Chief Analyst for Applied Intelligence at Omdia, to share his insights into the intricate web of AI deployment, ethical considerations, and emerging trends.

[TFGI] In a nutshell, can you explain to us what AI is? 

AI is the simulation of human intelligence by machines, achieved through various methods. The most basic type is rule-based, where machines follow fixed rules designed by human experts. Such systems appear in business software for tasks like automating processes, checking for defects in factories, keeping systems secure online, and in banking and insurance. Applied to business workflows, this approach is called Robotic Process Automation (RPA).
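
As an illustration (not from the interview), here is a minimal sketch of a rule-based check of the kind such systems run; the field names and thresholds are invented for this example:

```python
# A rule-based defect check: fixed, expert-written rules, no learning involved.
# All field names and thresholds below are illustrative assumptions.

def flag_defect(widget: dict) -> bool:
    rules = [
        widget["weight_g"] < 95 or widget["weight_g"] > 105,  # out-of-tolerance weight
        widget["surface_scratches"] > 2,                      # visible damage
        not widget["label_present"],                          # missing label
    ]
    return any(rules)

print(flag_defect({"weight_g": 108, "surface_scratches": 0, "label_present": True}))  # True
```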

As data becomes more accessible, machine learning, which trains machines on data instead of telling them what to do step by step, has become popular. Machine learning engineers employ a range of model types, such as models that predict outcomes from labelled examples (supervised learning) or uncover patterns in unlabelled data (unsupervised learning).
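
To make the contrast with rule-based systems concrete, here is a minimal supervised-learning sketch, assuming scikit-learn and its bundled iris dataset (neither is named in the interview). The model learns to predict outcomes from examples rather than from hand-written rules:

```python
# Train a classifier on examples instead of writing the rules by hand.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```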

Deep learning is a subset of machine learning that uses networks of connections loosely modelled on how the human brain works. Generative AI, in turn, is a fast-growing branch of deep learning. Unlike other machine learning methods, generative AI can craft novel outputs, spanning text and conversation, images and videos, software code, and even music.
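
As a concrete illustration, the sketch below generates novel text with the Hugging Face transformers library and the small GPT-2 model; both are assumptions for the example, not tools named in the interview:

```python
# Generative AI in a few lines: the model composes new text rather than
# retrieving it. Library and model choice are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```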

[TFGI] What ethical challenges can arise when developing AI systems, and how can they be addressed?

With AI’s expansion into diverse industries, it introduces ethical quandaries with serious consequences:

  1. Lack of Transparency: Many current AI solutions operate as black boxes, lacking transparency in both their decision-making processes and outputs. Establishing transparency is therefore essential, and organisations such as Grab, Google, Microsoft and TikTok have taken steps to enhance trust in their systems by implementing algorithmic explainers (the first sketch after this list gives a minimal illustration).
  2. Weak Neutrality: Although neutral in theory, AI’s decision-making relies on quality training data. As the saying goes, garbage in, garbage out: biased data can result in biased outcomes, fostering inequality (the second sketch after this list shows how skewed data skews results).
  3. Single Point of Failure: We often place complete confidence in the reliability and dependability of AI. Enterprises must not let AI become a single point of failure, especially when the system handles decisions with ethical and social impacts, such as recruitment or public safety. Safeguards and frameworks are crucial here (the third sketch after this list illustrates one such safeguard).
  4. Workforce Adaptation: Task automation is unavoidable with the adoption of AI. Enterprises must consider how to reskill and upskill their workers so that they see AI as an augmentation of their existing jobs rather than a threat. This requires alignment from senior leadership down to the workers operating alongside the automated systems, with constant communication and feedback on both sides to ensure full transparency.
  5. Inappropriate Outputs: The misuse of AI by malicious entities, amplified by generative AI, can yield harmful misinformation. Here, vigilant monitoring is pivotal.
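
As a minimal illustration of an algorithmic explainer, the first sketch uses scikit-learn’s permutation importance to surface which inputs most drive a model’s decisions; the companies above use their own, more sophisticated tooling, so this is only indicative:

```python
# Rank input features by how much shuffling each one hurts the model:
# a simple, model-agnostic way to explain what drives predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:  # three most influential features
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```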
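
The second sketch makes garbage in, garbage out concrete: a model trained on data skewed towards one group performs far worse on the other. The data and groups are synthetic assumptions:

```python
# Train on data dominated by group 0, then audit accuracy per group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, 1000)  # two synthetic demographic groups
# The outcome follows a different pattern in each group.
y = np.where(group == 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

# Skewed training set: group 1 is barely represented.
train = (group == 0) | (rng.random(1000) < 0.05)
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])

for g in (0, 1):  # an audit like this can catch the bias before deployment
    mask = group == g
    print(f"group {g} accuracy: {model.score(X[mask], y[mask]):.2f}")
```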
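
The third sketch shows one simple safeguard against making AI the single point of failure: route low-confidence predictions to a human reviewer. The threshold is an illustrative assumption:

```python
# Escalate ambiguous cases to a person instead of auto-deciding everything.

def decide(model_probability: float, threshold: float = 0.9) -> str:
    if model_probability >= threshold or model_probability <= 1 - threshold:
        return "auto-decide"  # the model is confident either way
    return "escalate to human review"  # ambiguous case: a person makes the call

for p in (0.97, 0.55, 0.04):
    print(p, "->", decide(p))
```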

[TFGI] What do you think are some of the most important considerations that organisations should keep in mind before embarking on their AI journeys?

AI deployment is a complex process, encompassing various disciplines:

  1. DataOps: When it comes to handling Data Operations (DataOps), enterprises need to be on the ball. This means taking a close look at their current data resources and finding the gaps. Building a strong foundation for data collection and processing is essential: a comprehensive pipeline that covers everything from data ownership and storage to analysis and ethical disposal (the first sketch after this list shows a small data-quality check). Following the rules for data protection, privacy, and ownership is, of course, a given.
  2. AIOps: Once DataOps is set, meticulous organisation of AI workflows, or AI Operations (AIOps), is key. Accurate, high-performing AI models require well-implemented training and testing pipelines. After model development, understanding the AI system’s specific operating environment is essential to adapting the deployment strategy. Given AI models’ susceptibility to performance drift, continuous retraining is crucial (the second sketch after this list illustrates such a loop).
  3. AI Infrastructure: Think of AI Infrastructure as the backbone of the whole operation. This includes everything from servers, chipsets, and storage to memory, containers, AI models, tools, and monitoring systems. As generative AI gains more attention, investing in high-performance hardware is a strategic move. But, as with any investment, careful planning is crucial to stay within budget.
  4. Organisational Culture: Firm support for AI requires senior leadership’s investment and patience. Key steps include ensuring principled AI creation, aligning AI key performance indicators with broader business goals, and mitigating AI-related risks through education.
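
As a small illustration of the DataOps groundwork in point 1, the first sketch profiles a dataset for gaps and applies a simple retention rule; the column names and cutoff date are invented for the example:

```python
# Profile a dataset for missing values, then drop records past their agreed lifetime.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "signup_date": ["2023-01-05", None, "2023-03-10", "2023-04-22"],
    "country": ["SG", "MY", None, "ID"],
})

# Find the gaps: share of missing values per column.
print(df.isna().mean())

# A toy retention rule standing in for ethical disposal of aged data.
df["signup_date"] = pd.to_datetime(df["signup_date"])
cutoff = pd.Timestamp("2023-02-01")
df = df[df["signup_date"].isna() | (df["signup_date"] >= cutoff)]
print(df)
```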
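
The second sketch illustrates the AIOps retraining loop from point 2: monitor accuracy on recent, labelled traffic and retrain when it drifts below a floor. The data, model, and threshold are illustrative assumptions:

```python
# Retrain when live accuracy drifts below an agreed floor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)
model = LogisticRegression().fit(X_old, y_old)  # the currently deployed model

# Recent traffic where the real pattern has shifted.
X_new = rng.normal(size=(200, 3))
y_new = (X_new[:, 0] > 0).astype(int)

def maybe_retrain(model, X_recent, y_recent, floor=0.85):
    """Refresh the model when accuracy on recent data drops below the floor."""
    if model.score(X_recent, y_recent) < floor:
        model = LogisticRegression().fit(X_recent, y_recent)
    return model

model = maybe_retrain(model, X_new, y_new)
print(f"post-check accuracy: {model.score(X_new, y_new):.2f}")
```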

[TFGI] How do we resolve the problem of AI alignment—ensuring that AI systems’ goals align with human values and intentions?

AI alignment is an intricate challenge, but here are a few strategies to approach it:

Proactive Strategy: Proactivity is pivotal in the realm of AI. As AI developers, technology users, and regulators, we play a central role in anticipating forthcoming trends, risks, and gaps. It is akin to peering into the future and having a prepared course of action. Here, the focus should revolve around presenting a blend of regulatory and non-regulatory solutions to bolster our technological infrastructure, establish the right incentives, and mitigate potential risks.

Active Engagement in Policy Discussions: Imagine a roundtable where every perspective matters. Whether deeply entrenched in AI development intricacies, utilising AI tools, or closely monitoring the landscape, every voice counts. Participation in public policy dialogues and establishing connections with global AI Standard Development Organisations (SDOs) is where substantial progress occurs. On this front, it is also heartening to see ASEAN coming together to produce a set of guidelines on the responsible use of AI in the region, the “ASEAN Guide on AI Governance and Ethics”, to be released in early 2024 to balance the economic benefits of the technology with its many risks.

Raising Public Awareness: This is the phase where information dissemination takes center stage. Regulators and policymakers are dedicated to enlightening both AI developers and users. This can happen via formal channels, such as tertiary institutions’ research programs, boot camps, and hackathons, or informal channels, such as AI programs run by enterprises and industry associations.

Consultation with Generative AI Stakeholders: Regulators need to dive deep into discussions with local generative AI developers and users. By understanding their needs, plans, and concerns in the world of generative AI, we are setting the stage for a well-informed approach.

[TFGI] In your opinion, whose primary responsibility is it when AI fails: the developer who created the system, the user who deployed it, or the regulator overseeing its use? 

From my perspective, the question of who’s on the hook when AI goes awry is a shared responsibility among the developer, the user, and the regulatory bodies overseeing its use. Balancing accountability and fostering innovation in the swiftly evolving AI landscape is quite the challenge. At present, AI regulations seem to be the most effective tool for managing AI ethics. However, implementing strict regulations prematurely could impede innovation. To strike the right equilibrium, I propose looking towards AI standards, especially those tailored for specific industries. Unlike top-down regulations, these industry-driven standards offer a more balanced approach. Standards related to generative AI, championed by Standard Development Organisations (SDOs), could garner global recognition and encourage widespread implementation. Additionally, governments can leverage the insights and expertise of both the private sector and AI developers in navigating this complex terrain.

[TFGI] What emerging AI trends do you think will have the most profound impact on society in the next 5 to 10 years, and how should individuals and organisations ready themselves for these transformations?

Five to ten years is an eternity for an ever-changing technology like AI. AI is likely to keep accumulating success stories in the enterprise. As such, AI solution specialisation will likely increase to accommodate more industry-specific or horizontal use cases. As generative AI becomes more common, prompt engineering and fine-tuning will become the most critical skill sets, and we are likely to see AI knowledge become an essential qualification in the job market. Zero-code implementation will also become available for foundation models and large language models that are currently technically complicated. New AI techniques, like federated learning and swarm learning, will facilitate data sharing and collaboration (a minimal sketch follows).
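
As an illustration of how federated learning enables collaboration without raw-data sharing, here is a minimal sketch of federated averaging, its core step: each party trains locally, and only model weights cross organisational boundaries. The data and model are synthetic assumptions:

```python
# Federated averaging on a toy linear-regression task: parties never share
# their data, only locally updated weights, which a coordinator averages.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a party's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
global_w = np.zeros(3)

for _ in range(20):  # each round: local training, then averaging
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_ws, axis=0)  # only weights cross the boundary

print("federated model weights:", np.round(global_w, 3))
```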

About:

Su Lian Jye is a chief analyst at Omdia, responsible for the orchestration of applied intelligence market research in the Asia & Oceania region. He provides strategic guidance and business intelligence on AI infrastructure, software, and services.

Omdia, part of Informa Tech, is a technology research and advisory group specialising in global coverage of telecommunications, media, and technology.

The views and recommendations expressed in this article are solely of the author/s and do not necessarily reflect the views and position of the Tech for Good Institute.
