Artificial intelligence (AI) is rapidly transforming the world — revolutionizing industries and reshaping the way we work and live. As AI advances, governments across Asia are grappling with the challenge of regulating this complex technology. While the concept of AI is not new, its development has accelerated so rapidly that the law is playing catch-up.
This article explores the evolution of AI regulation in Asia, which is taking place in three primary ways:
- China has enacted specific AI regulations. But these regulations are vague and could complicate compliance.
- Singapore and the ASEAN region have taken a soft, non-binding, and voluntary approach with the aim of driving AI growth and innovation. However, it is unclear if governments can quickly identify and mitigate emerging risks.
- South Korea, with its proposed AI Basic Law, aims to draw a distinction between high-impact AI applications, where more guardrails may be required, and low-risk areas, where a more relaxed approach may make better sense. Japan and Australia, which currently take an approach similar to Singapore's, are discussing similar distinctions between high-impact AI and low-risk areas (though the specific distinctions might differ).
China: Enacting specific AI regulations
In 2017, China issued a comprehensive three-step strategy, the New Generation Artificial Intelligence Development Plan, with the intent of propelling China to the forefront of AI innovation.
Since then, China has enacted a series of AI-specific regulations, such as:
- Administrative Provisions on Recommendation Algorithms in Internet-based Information Services (w.e.f. 2022), which “contain several mandatory requirements for providers of the [algorithm recommendation services]”1
- Administrative Provisions on Deep Synthesis in Internet-based Information Services (w.e.f. 2023), which seeks to strengthen the integrated management of internet information services2
- Interim Measures for the Management of Generative Artificial Intelligence Services (“GAI Measures”) (w.e.f. 2023), which sets out the rules to regulate those who provide generative AI capabilities to the public within Mainland China
- Scientific and Technological Ethics Review Regulation (Trial) (w.e.f. 2023), which requires entities engaging in scientific research activities in life sciences, medicine, or AI to establish an ethics committee3
However, these AI regulations themselves do not impose penalties. Instead, penalties can be incurred under existing laws, such as the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law (PIPL), China’s Civil Code, and criminal law.
On the face of it, China is taking a “hard law” approach, implementing regulations that outline liability provisions for violations and noncompliance. This could attract both civil and criminal penalties and even possible business cessation under existing laws.
However, these regulations are vague (quite unlike the EU AI Act) in that they do not clearly define AI or generative AI. This makes implementation, compliance, and enforcement difficult for both the government and potentially affected organizations.
Singapore, the ASEAN region, Japan, and Australia: A soft, voluntary approach
On the other end of the spectrum, several countries are taking a more voluntary approach.
Singapore
Singapore has taken the lead in the voluntary approach space with the release of its nonbinding frameworks and strategy:
- The Model AI Governance Framework in 2019 (updated in 2020) sought to provide “detailed and readily-implementable guidance to private sector organizations to address key ethical and governance issues when deploying AI solutions.”4
- The Model AI Governance Framework for Generative AI (published in 2024) builds on the aforementioned Model AI Governance Framework and pertains to generative AI.
- The National Artificial Intelligence Strategy 2.0 to Uplift Singapore’s Social and Economic Potential (released in 2023) “outlines [Singapore’s] ambition and commitment to building a trusted and responsible AI ecosystem, driving innovation and growth through AI, and empowering [the people of Singapore] and businesses to understand and engage with AI.”5
Further, Singapore’s AI Verify Foundation was established with the aim of “harness[ing] the collective power and contributions of the global open-source community to develop AI testing tools to enable responsible AI.”6
These are nonbinding and seek only to provide guidance. Liability for any related violations would be governed by existing laws, such as the Personal Data Protection Act, the Copyright Act, and the Computer Misuse Act. It remains to be seen whether the government can quickly identify and mitigate emerging risks under such a framework.
ASEAN
In February 2024, ASEAN released its Guide on AI Governance and Ethics, a nonbinding practical guide for companies in ASEAN that “focuses on encouraging alignment within ASEAN and fostering the interoperability of AI frameworks across jurisdictions.”7
It bears noting that a large section of the guide sets out examples from Singapore, suggesting a softer, more voluntary approach toward AI regulation within the region. It is not yet known whether this approach will be adopted by other ASEAN countries.
Japan
Japan has taken a gradual approach and has relied on nonbinding guidance, such as the AI Guidelines for Business Version 1.0 (published in April 2024), which sets out “unified guiding principles in AI governance in Japan to promote the safe and secure use of AI.”8 As it is nonbinding, the guidance relies on voluntary efforts and support from the community. Liability for any related violations would be governed by existing laws, such as the Civil Code, the Product Liability Act, and the Penal Code.
Japan also launched the Hiroshima AI Process Comprehensive Policy Framework in May 2023, which was endorsed by the other G7 countries. This Hiroshima framework sets out the “principles that should be applied to all actors across the AI lifecycle […] such as publicly reporting advanced AI systems’ capabilities and domains of inappropriate use and protecting intellectual property.”9
It bears noting that in January 2023, a draft bill, the Basic Law for Promoting Responsible AI, was submitted with the aim of regulating AI developers of a certain scale.10 The draft bill also seeks to impose regular reporting requirements, violations of which may result in fines or criminal penalties.11 It does, however, seek to differentiate between AI in “high-risk areas,”12 for which safety verification would be conducted, and AI outside those areas.
If the draft AI bill is adopted, it will represent a shift from a soft, voluntary, nonbinding approach to a more “hard law” stance. It is not yet known whether such a stance would result in stricter regulation like the EU AI Act or remain vague on AI definitions, as in China’s AI legislation.
Australia
Like Japan, Australia has adopted a voluntary, nonbinding approach. It has not enacted any specific statutes or regulations directly regulating AI. Similarly, liability for any related violations would be governed by existing laws, such as the Online Safety Act 2021, the Privacy Act 1988, and the Australian Consumer Law.
Instead, Australia published a series of guidelines and consultation papers focusing on AI:
- The AI Ethics Principles, published in 2019, set out eight voluntary principles for the responsible design, development, and implementation of AI.
- In January 2024, the Australian government published its interim response to the June 2023 consultation conducted by the Commonwealth Department of Industry, Science and Resources: Safe and responsible AI in Australia.
In yet another similarity, the Australian government’s interim response recognizes that the “current regulatory frameworks do not fully address the risks of AI.”13 The Australian government wants the “design, development and deployment of AI in legitimate high-risk settings to be safe and reliable… [however] it aims to ensure that AI can continue being used in low-risk settings largely unimpeded.”14 The Australian government indicates that it intends to achieve this by “clarifying and strengthening laws to safeguard citizens” and “using testing, transparency and accountability measures to prevent harms from occurring in high-risk settings.”15
It is not yet known if any future AI regulations developed by the Australian government would be strict or remain vague on AI definitions, which may make them difficult to enforce.
South Korea: Focusing on high-impact AI and GenAI
South Korea’s draft AI law, the Basic Law on the Development of Artificial Intelligence and Creation of Trust Base, has been passed by the South Korean National Assembly’s Legislative and Judiciary Committee. The AI Basic Law (once passed) will differentiate between “high-impact AI” (i.e., AI that has a significant impact on public health, safety, and fundamental rights) and other AI applications that do not fall within this category. The AI Basic Law will mirror the EU AI Act’s risk management obligations, particularly for “high-impact AI.”16
Businesses providing high-impact AI products or services would have to assess the impact on fundamental rights. Notification requirements would also apply to high-impact AI or GenAI, with clear labels distinguishing AI-generated content. Foreign AI businesses meeting certain thresholds set out in the AI Basic Law may have to appoint domestic agents in Korea to handle such compliance and reporting obligations.
The draft AI Basic Law is likely to be passed by the end of 2024.
Looking ahead
The rapid evolution of AI brings both unparalleled opportunities and significant challenges. While AI has the potential to revolutionize industries like healthcare, education, and public services, it also raises critical concerns, such as bias, data privacy, and ethical implications. Striking the right balance between fostering innovation and ensuring ethical responsibility is imperative.
Collaboration among governments, software developers, industry leaders, and academic institutions is essential to developing thoughtful and effective AI regulations. Initiatives, such as regulatory sandboxes, independent algorithm audits, and the adoption of responsible design principles, can help create an environment where AI is developed and deployed safely. Such measures ensure that AI enhances human potential while mitigating risks.