Resource Center / Resource Articles / Global AI Governance Law and Policy: Singapore

 

Global AI Governance Law and Policy: Singapore

This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in Singapore. The full series can be accessed here.


Published: July 2025


Contributors:


Navigate by Topic

Singapore has solidified its position as a global leader in AI governance, balancing innovation with ethical considerations through a flexible, principles-based approach. Tortoise Media’s Global AI Index continues to rank Singapore in third place, below the U.S. and China, in respect of level of investment, innovation and implementation of AI. The publication describes Singapore as "Asia’s most dynamic AI hub after China" and having "made big advancements on absolute metrics, especially on AI research and development."


History and context

AI governance in Singapore began with three initiatives in 2018. First, an Advisory Council on the Ethical Use of AI and Data was appointed. This led to the publication of a discussion paper on the responsible development and deployment of AI. Finally, the government supported the launch of a research program on the governance of AI and data use.

In 2019, the Singapore government published its first National AI Strategy, outlining plans to drive AI innovation and adoption across the economy to generate significant social and economic impact. The Smart Nation and Digital Government Group, which operates within the prime minister’s office and is administered by the Ministry of Digital Development and Information, designed and oversaw this strategy. The NAIS represented a whole-of-government approach to advance and oversee AI development and governance.

An updated strategy, NAIS 2.0, was launched in December 2023 to address recent challenges and uplift Singapore's economic and social potential over the next three to five years. NAIS 2.0 seeks to achieve two goals: develop areas of excellence in AI that advance innovation and maximize economic impact, and empower individuals, businesses and communities to use AI with confidence, discernment and trust.

This updated strategy emphasizes that the government will support experimentation and innovation while ensuring AI is developed and used responsibly and lawfully within existing safeguards.

Action 13 is a crucial element of NAIS 2.0; it requires the government to regularly review and update its frameworks to consider updates to broader standards and laws to support effective use of AI and reflect emerging principles, concerns, and technological advancements. This demonstrates Singapore’s style of governance – agile and adaptive.

The following key events illustrate Singapore’s AI governance journey over the years:

  • expand_more

  • expand_more

  • expand_more

  • expand_more

  • expand_more

  • expand_more


Approach to regulation

Rather than rush into legislation, Singapore has deliberately taken an incremental approach to AI governance. The government actively promotes AI adoption as a driver of national growth by publishing flexible frameworks that are regularly reviewed and updated. The national administration adopts a light-touch approach grounded in clear principles for responsible development and use and encourages voluntary compliance.

Singapore sees AI governance as a shared responsibility between government, industrial and research institutions. The government acts as an enabler and facilitator. Industry is empowered to apply governance principles using a risk-based approach.

The AI Verify framework maps to standards around the world, like ISO/IEC 42001:2023, the U.S. National Institute of Standards and Technology AI Risk Management Framework, and Hiroshima Process International Code of Conduct — this shows that companies operating in Singapore can reuse their compliance efforts globally and, according to ISO/IEC JTC1 / SC42 Artificial Intelligence Chairman Wael William Diab, "demonstrates Singapore’s strong support in advancing global harmonisation in a practical way."

Risk-based comprehensive legislation and policy

Singapore has not enacted a comprehensive AI-specific law but has developed governance frameworks that organizations can voluntarily adopt instead. There are also sector-specific regulations that apply to AI systems in particular industries and numerous other laws of relevance and application to various elements of the AI governance lifecycle.

The Model Framework for Generative AI emphasizes risk-based assessments and management throughout the AI lifecycle, from development to monitoring. Organizations are encouraged to tailor the governance measures to address the specific risks they encounter.

Sector-specific legislation

  • expand_more

  • expand_more

    The health care sector does not have any laws related to AI. The Ministry of Health published the AI in Healthcare Guidelines in 2021 to enhance patient safety and foster trust in AI technologies by sharing best practices with AI developers and deployers. These guidelines were co-developed with the Health Sciences Authority and Synapxe; it complements the HSA's Regulatory Guidelines for Software Medical Devices.

  • expand_more

Foundation or general-purpose models 

Singapore does not have laws that specifically regulate foundation models or general-purpose AI models. Instead, the national approach is based on voluntary guidelines, such as the Model Framework for Generative AI, and technical toolkits that apply to a broad range of AI systems, including foundation and generative AI models.

Agentic AI

Singapore does not have laws that specifically regulate agentic AI and doesn't have a legal definition for the term. Agentic AI would fall under the scope of Singapore’s voluntary guidelines and technical toolkits.

In 2025, the Government Technology Agency of Singapore’s AI Practice Group published the Agentic AI Primer to provide a clear framework for how AI agents could autonomously pursue objectives and execute tasks within various domains, especially in the public sector.

Enforcement

There aren't any AI-specific laws or AI enforcement agencies in Singapore. Enforcement is limited to existing laws, such as those governing data protection, cybersecurity, copyright and online safety. These laws are enforced by the pertinent regulators or authorities.


Wider regulatory environment

Numerous legal frameworks are applicable to various elements of the AI governance lifecycle.

  • expand_more

  • expand_more

  • expand_more

  • expand_more

  • expand_more


International cooperation on AI

As part of Singapore's goal to establish itself as an international partner for AI innovation and governance, the government wants to continue building international networks with key partner countries and leading AI companies.

So far, half of Singapore's digital economy agreements contain AI modules that promote the adoption of ethical AI governance frameworks and, where appropriate, the alignment of governance and regulatory frameworks. The intention is to establish a shared set of AI governance and ethics principles with international partners so organizational compliance is more straightforward and easily achievable.

The U.S. National Institute of Standards and Technology and IMDA published a crosswalk in October 2023, mapping the NIST's AI Risk Management Framework 1.0 to AI Verify. IMDA said this joint effort signaled both parties' common goal of balancing AI innovation and maximizing the benefits of AI technology while mitigating technology risks.

In November 2023, Singapore's prime minister participated in the AI Safety Summit. The country joined 27 others in signing the Bletchley Declaration, agreeing to work together to prevent "catastrophic harm, either deliberate or unintentional," which may arise from AI computer models and engines.

Singapore released the Association of Southeast Asian Nations Guide on AI Governance and Ethics during the fourth ASEAN Digital Ministers’ meeting in February 2024.

In May 2024, Singapore — through the AI Verify Foundation — published a Memorandum of Intent in collaboration with MLCommons, a leading AI engineering consortium recognized by the U.S. NIST. The safety initiative developed a common set of benchmarks, tools and testing approaches for generative AI models.

While Singapore is not a member of the Organisation for Economic Co-operation and Development, representatives were invited to take part in its Expert Group on AI. Singapore is a founding member of the Global Partnership on AI, an international initiative to promote responsible use of AI that respects human rights and democratic values.


Latest developments

Despite being only halfway through 2025, Singapore has introduced a significant number of projects and programs.

At the AI Action Summit in February, Singapore’s Minister for Digital Development and Information, Josephine Teo, announced three new AI governance initiatives to enhance the safety of AI for both Singapore and the global community.

First, IMDA and the AI Verify Foundation launched the Global AI Assurance Pilot, which helps codify emerging norms and best practices around the technical testing of generative AI applications.

Following a landmark collaboration between Singapore and Japan, the International Network of AI Safety Institutes released a joint testing report that evaluated the safety of LLMs across diverse linguistic and cultural environments. The primary objective was to assess whether the safeguards built into LLMs, such as protections against generating harmful, illegal, or biased content, were effective in non-English settings.

As co-lead of the testing and evaluation track under the AISI network, Singapore brought together global linguistic and technical experts from the AISI network to conduct tests across ten languages: Cantonese, English, Farsi, French, Japanese, Kiswahili, Korean, Malay, Mandarin Chinese and Telugu. The experts also conducted tests across five harm categories: violent crime, non-violent crime, IP, privacy and jailbreaking.

Following the 2024 multicultural and multilingual AI safety red teaming exercise, the IMDA published the Singapore AI Safety Red Teaming Challenge Evaluation Report 2025, which sets out a consistent methodology for testing AI safeguards across diverse languages and cultures.

In April 2025, Singapore hosted the SCAI: International Scientific Exchange on AI Safety, gathering more than 100 minds from academia, industry, and government to set priorities for shaping reliable, secure, and safe AI. The follow-up report, "The Singapore Consensus on Global AI Safety Research Priorities," was published in May 2025; the report aims to advance impactful research and development efforts to rapidly develop safety and evaluation mechanisms, and foster a trusted ecosystem where AI is harnessed for the public good.

Also in May, the IMDA launched an upgraded version of its LLM Multimodal Empathetic Reasoning and Learning in One Network, as well as the MERaLION Consortium, a collaborative platform that brings together local and global industry players, research and development institutions and leading technology companies, to develop practical AI applications such as multilingual customer support, health and emotional insight detection and agentic decision-making systems.

The IMDA launched a four-week public consultation in May on the "Starter Kit for Safety Testing of LLM-Based Applications." The kit is a set of voluntary guidelines on how to think about and conduct testing for common risks in apps. It sets out a structured approach to pre-deployment testing from app output to app components and contains recommended tests for the four key risks commonly encountered in apps today – hallucination, undesirable content, data disclosure and vulnerability to adversarial prompt.


Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.

  • Australia
  • Canada
  • China
  • European Union
  • India
  • Japan
  • Singapore
  • South Korea
  • United Arab Emirates
  • United Kingdom
  • United States

Additional resources

  • expand_more



Approved
CDPO, CDPO/BR, CDPO/FR, CIPM, CIPP/A, CIPP/C, CIPP/E, CIPP/G, CIPP/US, CIPT, LGPD
Credits: 2

Submit for CPEs