
By Poon King Wang, Chief Strategy and Design AI Officer, Director (Lee Kuan Yew Centre for Innovative Cities), SUTD, and Dinithi Jayasekara, Research Fellow, Lee Kuan Yew Centre for Innovative Cities, SUTD
Mobility in Motion: Many Developments, Many Futures Beyond Ride-Hailing
Artificial intelligence is no longer just routing your ride—it is reshaping the entire logic of urban mobility. From autonomous vehicles that retrain in minutes to ride-hailing apps that anticipate demand block by block, AI is now embedded in every layer of how people move.
Autonomous vehicles (AVs) are no longer a hypothetical future. In cities like Phoenix, San Francisco, Austin, and Wuhan, they are already operating on public roads—guided by increasingly sophisticated AI systems that integrate camera, lidar, radar, and simulation-trained models. AV providers have made significant strides in technical performance, with crash rates for leading fleets now well below human baselines.
To build public trust, many AV firms and governments have focused on transparency and regulatory compliance. For example, Waymo regularly publishes detailed safety reports and, in 2025, responded to a regulatory inquiry by issuing a software patch and public-facing FAQ within 48 hours—recalling over 1,200 vehicles in the process.
In Singapore, geo-fenced trials in Sentosa allow citizens to directly experience the technology. Singapore’s upcoming public AV shuttle deployment in Punggol, scheduled for late 2025, signals that the country is transitioning from pilot zones to daily use cases. But this shift from trial to trust hinges on how well systems meet individual and community expectations. At the global level, standards such as ISO 39003 and UNECE WP.29 are also shaping how AVs communicate with users and maintain cybersecurity, critical foundations for fostering public confidence.
In the smart mobility sector, AI-powered ride-hailing apps have evolved far beyond simply booking a car—they now shape core aspects of the user experience, including pricing, safety, and service transparency. Platforms like Uber, Grab, Lyft, and Gojek use artificial intelligence to predict demand, adjust fares dynamically, monitor trip anomalies, and explain platform decisions in real time.
To strengthen user trust, most major platforms have introduced features such as surge-price explainers, which clarify how pricing responds to supply, demand, and local events; in-ride safety AI like Grab’s “Audio Protect,” which records and encrypts trips for security; and reciprocity dashboards, such as Gojek’s carbon credit tracker that shows how riders can reduce CO₂ emissions through the GoGreener Tree Collective programme. Many platforms now also deploy AI-powered chat assistants to handle customer service in a more responsive and human-like manner.
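To make the pricing side of this concrete, the minimal sketch below shows one way a surge-price explainer could turn supply, demand, and local events into a capped fare multiplier and a plain-language reason a rider sees before booking. The formula, thresholds, and field names are illustrative assumptions; they do not describe how Uber, Grab, Lyft, or Gojek actually price rides.

```python
# Illustrative sketch only: a toy surge-price explainer. The multiplier formula,
# the 2.5x cap, and all field names are assumptions for illustration; they are
# not any platform's actual pricing logic.
from dataclasses import dataclass


@dataclass
class ZoneSnapshot:
    ride_requests: int       # open ride requests in the zone right now
    available_drivers: int   # idle drivers in the zone right now
    event_uplift: float      # assumed extra demand pressure, e.g. 0.2 during a concert or heavy rain


def surge_multiplier(zone: ZoneSnapshot, cap: float = 2.5) -> float:
    """Toy multiplier that grows with the demand/supply ratio and is capped for fairness."""
    ratio = zone.ride_requests / max(zone.available_drivers, 1)
    raw = 1.0 + max(ratio - 1.0, 0.0) * 0.3 + zone.event_uplift
    return round(min(raw, cap), 2)


def explain_fare(zone: ZoneSnapshot) -> str:
    """Plain-language breakdown a rider could see before confirming a trip."""
    multiplier = surge_multiplier(zone)
    reasons = []
    if zone.ride_requests > zone.available_drivers:
        reasons.append(f"{zone.ride_requests} requests vs {zone.available_drivers} nearby drivers")
    if zone.event_uplift > 0:
        reasons.append("a local event or weather is adding to demand")
    why = "; ".join(reasons) if reasons else "supply and demand are currently balanced"
    return f"Fare multiplier x{multiplier}: {why}."


print(explain_fare(ZoneSnapshot(ride_requests=42, available_drivers=15, event_uplift=0.2)))
```

The point of the sketch is the explanation string, not the arithmetic: surfacing the inputs behind a price change is what turns dynamic pricing from a black box into something a rider can evaluate.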
As these systems scale, the question shifts from “Can AI drive?” to “Will people trust how it drives, and who benefits from it?”
From Capability to Credibility: Trust as the Bottleneck
Even as AI systems become more capable and regulations more robust, adoption remains uneven—and in some cases, has stalled entirely. The core challenge is no longer whether AI can perform the task, but whether people trust it to act in their best interest. In both autonomous vehicles and ride-hailing platforms, public confidence has not kept pace with technical innovation. What we face today is not a gap in capability—but a gap in credibility.
General Motors’ decision to defund Cruise, its self-driving robotaxi subsidiary, underscores the persistent trust gap. Even in Singapore, where the first AV trials were approved in 2015, deployments remain limited to pilot programmes nearly a decade later. While Singapore plans to launch public AV shuttles by late 2025, these will include human stewards on board during initial operations, a tangible signal that, even as technical reliability improves, visible human presence remains vital for reassuring early adopters and addressing trust at a personal level. This continued caution is particularly striking given AVs’ potential contributions to sustainability, road safety, and urban efficiency. The problem is not a lack of capability, but a lack of public confidence. While safety metrics and liability frameworks are necessary, they are not sufficient. Trust, especially at the individual level, is proving to be the key bottleneck to widespread adoption.
Enabling Trust in AI Mobility
The gap between technical capability and personal relevance is less well studied and understood. At the Lee Kuan Yew Centre for Innovative Cities, we conducted a series of behavioural economics experiments to understand what makes people trust, or distrust, AI systems and the organisations behind them. Key insights include:
- Foster Societal and Institutional Trust Alongside AI Adoption.
- Ensure AI Benefits Are Equitably Distributed.
- Design AI Systems to Be Socially Oriented and Consistent.
- Implement Safeguards to Prevent AI Exploitation.
- Enable Customisation for Increased User Control and Trust.
- Incorporate Emotional Intelligence in AI Error Management.
- Establish a Clear Vision for AI’s Societal Benefits.
- Adopt and Align with Institutional and International Standards.
- Prioritise Explainability Over Performance in AI Communication.
Our findings consistently show that trust is not just a function of technical performance. It is fundamentally a human issue—shaped by perceived fairness, emotional resonance, and the sense that the system “sees” and adapts to individual preferences.
Three Futures of Trust: Where AI Mobility Might Go
Looking ahead, the future of AI-powered mobility will hinge less on technological capability and more on whether systems earn and sustain public trust. Based on current trends, three trust trajectories are plausible—each shaped by how platforms balance performance, personalisation, and accountability.
In the “Cost-convenience plateau” scenario, platforms continue to optimise for speed and affordability, but make limited progress on personalisation or user control. While AVs and smart mobility apps offer efficient and low-cost service, adoption plateaus as privacy- and safety-conscious users churn or opt out entirely. The system works, but only for some.
A more promising outcome lies in “Reciprocal mobility,” where trust is embedded by design. In this scenario, apps implement portable trust profiles that allow users to pre-set preferences related to privacy, environmental impact, safety, speed, and accessibility. Monthly “mobility dividend” statements show how each user’s data contributed to wider benefits, such as reducing congestion or emissions, and what value they received in return. As a result, adoption rises substantially, and the trust gap between high- and low-trust users narrows meaningfully.
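To make the idea tangible, the minimal sketch below shows one possible shape for a portable trust profile and a monthly mobility dividend statement. All field names, units, and figures are assumptions made for illustration; they do not describe any existing platform’s data schema.

```python
# Illustrative sketch only: one possible shape for a portable trust profile and a
# monthly "mobility dividend" statement. All field names, units, and figures are
# assumptions for illustration, not a real platform schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class TrustProfile:
    """Rider-owned preferences that could travel with the user across AV and ride-hailing apps."""
    share_location_history: bool = False    # privacy: opt in rather than opt out
    prefer_low_emission_vehicles: bool = True
    max_comfortable_speed_kmh: int = 60     # safety and speed preference
    needs_wheelchair_access: bool = False
    child_seat_required: bool = False


@dataclass
class MobilityDividend:
    """Monthly statement of what the rider's shared data contributed, and what came back in return."""
    month: str
    trips_shared_for_planning: int
    estimated_congestion_minutes_saved: float
    estimated_co2_saved_kg: float
    fare_credits_earned_sgd: float


profile = TrustProfile(prefer_low_emission_vehicles=True, child_seat_required=True)
statement = MobilityDividend(
    month="2025-09",
    trips_shared_for_planning=18,
    estimated_congestion_minutes_saved=42.0,
    estimated_co2_saved_kg=6.3,
    fare_credits_earned_sgd=4.50,
)

# A portable profile could be exported as plain data and carried between providers,
# while the dividend statement makes the data-for-value exchange visible to the rider.
print(json.dumps({"profile": asdict(profile), "dividend": asdict(statement)}, indent=2))
```

The emphasis is on reciprocity and portability: the preferences belong to the rider rather than to any single app, and the statement makes the exchange of data for value explicit.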
However, a third, and more cautionary, path is what might be termed “Algorithmic backlash.”
In this scenario, firms risk overstepping by implementing hyper-personalised pricing and behavioural targeting without adequate transparency or meaningful user consent. These practices can significantly erode public trust. In more extreme cases, they may trigger legal challenges and calls to ban behavioural pricing altogether.
To avoid this future and steer confidently toward reciprocal mobility, developers and regulators must treat trust as a first-order design goal, not a reactive communications strategy. It must be baked into product architecture, interface design, data governance, and system-level accountability from day one.
Navigating Toward Trusted Mobility: Multistakeholder Design, Personal Needs, Systemic Design
People are more likely to adopt AI systems when they perceive personal relevance, control, and reciprocity—not just technical competence. While safety dashboards offer statistical assurance, trust is ultimately experienced at the individual level.
For example, a parent may still avoid using an AV for the school run if they cannot enable a kid-friendly mode. A privacy-conscious professional might reject an AV that fails to offer transparent, opt-in data-sharing options. Similarly, for ride-hailing services, trust is shaped by how well the platform accommodates specific needs. A wheelchair user may require certainty that an accessible vehicle will arrive, while an eco-conscious commuter may be more inclined to choose an AV over a fuel-based taxi, but only if the environmental impact is clearly communicated.
These four recurring user personas, spanning both AVs and ride-hailing apps, each have distinct needs that can be addressed through thoughtful customisation, and they can guide developers and policymakers toward more inclusive and trustworthy AI mobility solutions.
Conclusion: From Smart to Trusted Mobility
The future of smart mobility will not be determined solely by algorithms, vehicles, or infrastructure, but by whether people feel safe, seen, and respected by the systems that move them. Trust must become a first-order design principle, embedded not just in the technical stack, but in every rider interaction, policy decision, and governance layer. Crucially, this is not the task of one actor alone. It requires a multistakeholder approach in which the government, the private sector, and the public each play a key role. Regulators can strengthen confidence in AI-driven smart mobility solutions through sandbox frameworks, while smart mobility providers can personalise design to meet diverse user needs. And users themselves must be empowered to make informed choices through transparent and inclusive interfaces. When we design AI systems that adapt to individual needs and serve the collective good, we move from mobility that is merely smart to mobility that is truly trusted. That is the journey worth taking, and the one Singapore is uniquely positioned to lead.
About the Writers
King Wang POON is the Chief Strategy and Design AI Officer, Office of Strategic Planning, at the Singapore University of Technology and Design (SUTD), and concurrently the Director of the Lee Kuan Yew Centre for Innovative Cities, where he heads the Smart Cities Lab and the Future of Innovation Lab.
Dinithi JAYASEKARA is a behavioural economist at the Lee Kuan Yew Centre for Innovative Cities at the Singapore University of Technology and Design, where she leads experimental research on trust in artificial intelligence. This article draws on findings from a research project titled “Trust Experiments – Measuring Trust in AI as Actions.” The study uses behavioural economics experiments to uncover how individuals trust AI systems and the organisations behind them, exploring topics such as fairness, personality-driven AI design, and reciprocal behaviour. Her work has informed strategies for fostering trustworthy AI across both technical and governance layers.
About the Organization
The Lee Kuan Yew Centre for Innovative Cities (LKYCIC) is a research institute at the Singapore University of Technology and Design (SUTD). The Centre conducts interdisciplinary research on the future of cities, with a focus on how technology, design, and policy can be integrated to address complex urban challenges. Its work spans areas such as mobility, ageing, and the future of innovation. As part of SUTD, LKYCIC contributes to a university-wide mission that places design and artificial intelligence at the heart of education and innovation. SUTD’s distinctive approach blends human-centred design with cutting-edge AI research to shape technologies that are not only smart, but also deeply aligned with human needs.
The views and recommendations expressed in this article published on [month / year] are solely those of the author(s) and do not necessarily reflect the views and position of the Tech for Good Institute.
This article was co-written with an ensemble of Humans ∞ AI. In line with LKYCIC’s research, collaborating with AI proved especially helpful in the early stages—such as brainstorming and shaping the flow. However, finalising the piece required significant human input, involving time-intensive iterations grounded in domain expertise, agency, imagination, and discernment.