Global AI Governance Law and Policy: Singapore
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in Singapore. The full series can be accessed here.
Published: July 2025
Singapore has solidified its position as a global leader in AI governance, balancing innovation with ethical considerations through a flexible, principles-based approach. Tortoise Media's Global AI Index continues to rank Singapore third, behind the U.S. and China, in terms of its level of investment, innovation and implementation of AI. The publication describes Singapore as "Asia's most dynamic AI hub after China" and having "made big advancements on absolute metrics, especially on AI research and development."
AI governance in Singapore began with three initiatives in 2018. First, the government appointed an Advisory Council on the Ethical Use of AI and Data. Second, it published a discussion paper on the responsible development and deployment of AI. Finally, it supported the launch of a research program on the governance of AI and data use.
In 2019, the Singapore government published its first National AI Strategy, or NAIS, outlining plans to drive AI innovation and adoption across the economy to generate significant social and economic impact. The Smart Nation and Digital Government Group, which operates within the prime minister's office and is administered by the Ministry of Digital Development and Information, designed and oversaw this strategy. The NAIS represented a whole-of-government approach to advance and oversee AI development and governance.
An updated strategy, NAIS 2.0, was launched in December 2023 to address recent challenges and uplift Singapore's economic and social potential over the next three to five years. NAIS 2.0 seeks to achieve two goals: develop areas of excellence in AI that advance innovation and maximize economic impact, and empower individuals, businesses and communities to use AI with confidence, discernment and trust.
This updated strategy emphasizes that the government will support experimentation and innovation while ensuring AI is developed and used responsibly and lawfully within existing safeguards.
Action 13 is a crucial element of NAIS 2.0: it commits the government to regularly reviewing and updating its frameworks to reflect emerging principles, concerns and technological advancements, as well as changes to the broader standards and laws that support the effective use of AI. This demonstrates Singapore's agile and adaptive style of governance.
The following key events illustrate Singapore’s AI governance journey over the years:
- 2018: The Advisory Council on the Ethical Use of AI and Data was appointed; a discussion paper on the responsible development and deployment of AI was published; and a research program on the governance of AI and data use was launched.
- 2019: NAIS was published and the Model AI Governance Framework was launched. The framework aimed to provide private sector organizations with readily implementable guidance on key ethical and governance issues when deploying AI solutions.
- 2020: The updated Model Framework was released, containing additional considerations and refining the original framework for greater relevance and usability; the World Economic Forum's Implementation and Self-Assessment Guide for Organizations was published, which aimed to help organizations assess the alignment of their AI governance practices with the Model Framework; and a Compendium of Use Cases was released, illustrating how organizations implemented accountable AI governance practices and aligned them with the Model Framework.
- 2022: The AI Verify testing framework was launched. Positioned as the world's first AI testing toolkit, it enables companies to demonstrate accountability and responsible AI practices through testing and validation.
- 2023: NAIS 2.0 was released. The Infocomm Media Development Authority launched the AI Verify Foundation to support the development and use of AI Verify. The inaugural Singapore Conference on AI explored potential challenges that could limit society's capacity to leverage AI to benefit people and communities.
- 2024: The AI Verify Foundation and IMDA launched the Model AI Governance Framework for Generative AI, extending the 2020 Model Framework to cover nine trust dimensions, from accountability to "AI for Public Good"; the IMDA and Rwanda's Ministry of Information Communication Technology and Innovation launched the AI Playbook for Small States; Project Moonshot, one of the world's first large language model evaluation toolkits — an open-source tool that brings together benchmarking and red teaming within a single platform — was rolled out; the GenAI Sandbox was launched, giving local small and medium-sized enterprises greater access to generative AI; and the inaugural Singapore AI Safety Red Teaming Challenge was held with eight other Asia-Pacific countries.
Rather than rushing into legislation, Singapore has deliberately taken an incremental approach to AI governance. The government actively promotes AI adoption as a driver of national growth by publishing flexible frameworks that are regularly reviewed and updated, and it takes a light-touch approach grounded in clear principles for responsible development and use, encouraging voluntary compliance.
Singapore sees AI governance as a shared responsibility among government, industry and research institutions. The government acts as an enabler and facilitator, while industry is empowered to apply governance principles using a risk-based approach.
The AI Verify framework maps to international standards, such as ISO/IEC 42001:2023, the U.S. National Institute of Standards and Technology AI Risk Management Framework and the Hiroshima Process International Code of Conduct. Companies operating in Singapore can therefore reuse their compliance efforts globally; according to ISO/IEC JTC 1/SC 42 Artificial Intelligence Chair Wael William Diab, this mapping "demonstrates Singapore's strong support in advancing global harmonisation in a practical way."
Risk-based comprehensive legislation and policy
Singapore has not enacted a comprehensive AI-specific law; instead, it has developed governance frameworks that organizations can voluntarily adopt. There are also sector-specific regulations that apply to AI systems in particular industries, as well as numerous other laws that apply to various stages of the AI governance lifecycle.
The Model Framework for Generative AI emphasizes risk-based assessments and management throughout the AI lifecycle, from development to monitoring. Organizations are encouraged to tailor the governance measures to address the specific risks they encounter.
Sector-specific legislation
Financial services
The Monetary Authority of Singapore, the nation's central bank and integrated financial regulator, was the first sectoral authority to implement AI governance regulation. In 2018, the MAS and the financial industry created the FEAT principles — a set of principles focused on fairness, ethics, accountability and transparency — to support responsible AI use.
To operationalize these principles, the MAS worked with industry partners to create the Veritas framework in 2019. Veritas enables financial institutions to evaluate their AI and data analytics-driven solutions against the FEAT principles. In 2023, the framework was updated and the Veritas Toolkit version 2.0 was released.
As a part of Singapore's NAIS, the FEAT principles and Veritas aim to foster a progressive and trusted environment for AI adoption in the financial sector.
The announcement of Project Mindforge was another significant development in 2023. The project was a collaboration between the MAS and key partners from the banking, insurance and capital market sectors with two objectives: to develop a clear and concise framework for the responsible use of generative AI in finance, and to drive innovation to solve common industry-wide challenges and enhance risk management using generative AI.
In early 2024, a whitepaper detailing the generative AI risk framework was published, identifying seven risk dimensions in the areas of accountability and governance, monitoring and stability, transparency and explainability, fairness and bias, legal and regulatory, ethics and impact, and cyber and data security.
After reviewing how banks manage AI model risk, the MAS published an information paper that outlined best practices. The MAS encouraged all financial institutions to reference these practices when developing and deploying AI.
Health care
The health care sector has no AI-specific laws. The Ministry of Health published the AI in Healthcare Guidelines in 2021 to enhance patient safety and foster trust in AI technologies by sharing best practices with AI developers and deployers. These guidelines were co-developed with the Health Sciences Authority and Synapxe; they complement the HSA's Regulatory Guidelines for Software Medical Devices.
Legal
In 2024, the Supreme Court released a Guide on the Use of Generative AI Tools by Court Users, setting out general principles and guidance in relation to the use of generative AI tools in legal proceedings.
The Ministry of Law announced in 2025 that it was developing guidelines to help legal professionals be "smart buyers and users of generative AI tools."
Foundation or general-purpose models
Singapore does not have laws that specifically regulate foundation models or general-purpose AI models. Instead, the national approach is based on voluntary guidelines, such as the Model Framework for Generative AI, and technical toolkits that apply to a broad range of AI systems, including foundation and generative AI models.
Agentic AI
Singapore does not have laws that specifically regulate agentic AI, nor a legal definition of the term. Agentic AI falls under the scope of Singapore's voluntary guidelines and technical toolkits.
In 2025, the Government Technology Agency of Singapore’s AI Practice Group published the Agentic AI Primer to provide a clear framework for how AI agents could autonomously pursue objectives and execute tasks within various domains, especially in the public sector.
Enforcement
There are no AI-specific laws or AI enforcement agencies in Singapore. Enforcement is limited to existing laws, such as those governing data protection, cybersecurity, copyright and online safety. These laws are enforced by the pertinent regulators or authorities.
Numerous legal frameworks are applicable to various elements of the AI governance lifecycle.
Data protection
The Personal Data Protection Act governs the collection, use, disclosure and care of personal data. It provides a baseline standard for the protection of personal data and complements sector-specific regulatory provisions, such as those found in the Banking Act and Insurance Act.
In 2024, the Personal Data Protection Commission issued the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems, which clarify how the PDPC will interpret the PDPA. While not legally binding, the guidelines are widely regarded as standards that organizations should follow.
Also in 2024, the PDPC launched a Proposed Guide on Synthetic Data Generation to help organizations understand synthetic data generation techniques and potential use cases, particularly for AI.
Cybersecurity
In 2024, the Cyber Security Agency of Singapore released guidelines and a Companion Guide on Securing AI Systems to help system owners secure AI throughout its lifecycle. The CSA emphasized that the Companion Guide was not prescriptive but a community-driven resource that curated practical measures, security controls and best practices from industry and academic partners to complement the guideline document.
Copyright
The Copyright Act protects original expressions of ideas in tangible form. As in the U.S., and unlike in the U.K., the work must have been created by an identifiable human author.
Copyright infringement occurs when a copyright owner's rights are violated, such as when copyrighted work is copied, distributed, performed or displayed without the owner's permission. Two exceptions under Singapore's copyright law are particularly relevant to AI: the computational data analysis exception under Sections 243-244 and the fair use exception under Sections 190-191.
The Intellectual Property Office of Singapore has confirmed the computational data analysis exception permits the use of copyrighted material for sentiment analysis, text and data mining, and machine learning, subject to specified conditions. The fair-use exception permits the use of non-substantial parts of copyrighted work for non-profit, educational purposes.
Willful copyright infringement is a criminal offense when it is significant and carried out for commercial gain.
Online safety
In 2022, the Online Safety (Miscellaneous Amendments) Act was passed to enhance online safety for users. Providers of "online communication services" — i.e., electronic services with significant reach or impact that allow Singapore users to access or communicate content via the internet — must comply with the Codes of Practice issued by IMDA. So far, this rule covers only social media services, and IMDA has issued one Code of Practice for online safety. The amendments also empower IMDA to issue directives to address specified "egregious content" accessible to users in Singapore.
The Online Criminal Harms Act, passed in 2023, enables the government to more effectively address online activities of a criminal nature.
Established by the Agency for Science, Technology and Research in April 2023, the Centre for Advanced Technologies in Online Safety conducts research on technological capabilities to prevent digital harm. The organization focuses on detecting deepfakes and misinformation, applying watermarks, tracing the origin of digital content, and helping vulnerable groups verify online information.
Anti-discrimination
In 2024, the Ministry of Manpower clarified that regardless of the technological tools used to aid employment decisions, all employers in Singapore must comply with the Tripartite Guidelines on Fair Employment Practices.
The guidelines outline principles of fair employment practices, such as hiring on the basis of merit — including skills, experience or ability to perform the job — regardless of age, race, gender, religion, disability or marital status and family responsibilities.
The ministry said that if certain AI use results in discriminatory employment practices, workers or job applicants could approach the Tripartite Alliance for Fair and Progressive Employment Practices for assistance. As of 13 November 2024, the alliance had not received any complaints of discrimination arising from the use of AI tools.
International cooperation on AI
As part of Singapore's goal to establish itself as an international partner for AI innovation and governance, the government wants to continue building international networks with key partner countries and leading AI companies.
So far, half of Singapore's digital economy agreements contain AI modules that promote the adoption of ethical AI governance frameworks and, where appropriate, the alignment of governance and regulatory frameworks. The intention is to establish a shared set of AI governance and ethics principles with international partners so organizational compliance is more straightforward and easily achievable.
The U.S. National Institute of Standards and Technology and IMDA published a crosswalk in October 2023, mapping the NIST's AI Risk Management Framework 1.0 to AI Verify. IMDA said this joint effort signaled both parties' common goal of balancing AI innovation and maximizing the benefits of AI technology while mitigating technology risks.
In November 2023, Singapore's prime minister participated in the AI Safety Summit. The country joined 27 others in signing the Bletchley Declaration, agreeing to work together to prevent "catastrophic harm, either deliberate or unintentional," which may arise from advanced AI models.
Singapore released the Association of Southeast Asian Nations Guide on AI Governance and Ethics during the fourth ASEAN Digital Ministers’ meeting in February 2024.
In May 2024, Singapore — through the AI Verify Foundation — signed a Memorandum of Intent with MLCommons, a leading AI engineering consortium recognized by the U.S. NIST, under which the safety initiative developed a common set of benchmarks, tools and testing approaches for generative AI models.
While Singapore is not a member of the Organisation for Economic Co-operation and Development, representatives were invited to take part in its Expert Group on AI. Singapore is a founding member of the Global Partnership on AI, an international initiative to promote responsible use of AI that respects human rights and democratic values.
In the first half of 2025 alone, Singapore has introduced a significant number of projects and programs.
At the AI Action Summit in February, Singapore’s Minister for Digital Development and Information, Josephine Teo, announced three new AI governance initiatives to enhance the safety of AI for both Singapore and the global community.
First, IMDA and the AI Verify Foundation launched the Global AI Assurance Pilot, which helps codify emerging norms and best practices around the technical testing of generative AI applications.
Following a landmark collaboration between Singapore and Japan, the International Network of AI Safety Institutes released a joint testing report that evaluated the safety of LLMs across diverse linguistic and cultural environments. The primary objective was to assess whether the safeguards built into LLMs, such as protections against generating harmful, illegal, or biased content, were effective in non-English settings.
As co-lead of the testing and evaluation track under the AISI network, Singapore brought together global linguistic and technical experts from the AISI network to conduct tests across ten languages: Cantonese, English, Farsi, French, Japanese, Kiswahili, Korean, Malay, Mandarin Chinese and Telugu. The experts also conducted tests across five harm categories: violent crime, non-violent crime, IP, privacy and jailbreaking.
Following the 2024 multicultural and multilingual AI safety red teaming exercise, the IMDA published the Singapore AI Safety Red Teaming Challenge Evaluation Report 2025, which sets out a consistent methodology for testing AI safeguards across diverse languages and cultures.
In April 2025, Singapore hosted the SCAI: International Scientific Exchange on AI Safety, gathering more than 100 minds from academia, industry, and government to set priorities for shaping reliable, secure, and safe AI. The follow-up report, "The Singapore Consensus on Global AI Safety Research Priorities," was published in May 2025; the report aims to advance impactful research and development efforts to rapidly develop safety and evaluation mechanisms, and foster a trusted ecosystem where AI is harnessed for the public good.
Also in May, the IMDA launched an upgraded version of its large language model, the Multimodal Empathetic Reasoning and Learning in One Network, along with the MERaLION Consortium — a collaborative platform bringing together local and global industry players, research and development institutions, and leading technology companies to develop practical AI applications such as multilingual customer support, health and emotional insight detection, and agentic decision-making systems.
The IMDA launched a four-week public consultation in May on the "Starter Kit for Safety Testing of LLM-Based Applications." The kit is a set of voluntary guidelines on how to think about and conduct testing for common risks in apps. It sets out a structured approach to pre-deployment testing, from app output to app components, and contains recommended tests for the four key risks commonly encountered in apps today: hallucination, undesirable content, data disclosure and vulnerability to adversarial prompts.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.