
On 26 December 2024, South Korea's National Assembly passed the AI Framework Act. It did so during a period of tremendous social upheaval, marked by President Yoon Suk Yeol's declaration and subsequent withdrawal of martial law on 3 December, an extraordinary reversal that led the National Assembly to pass an impeachment motion against him on 14 December. Despite this turmoil, the AI Framework Act passed quietly and without partisan disagreement, under the banner of "cooperation for people's livelihood." Only civil society organisations (CSOs) voiced criticism of the act.
Both ruling and opposition parties unanimously advocated for the interests of AI technology companies, while no lawmakers fought to protect citizens' safety and human rights from the potential risks posed by AI. Despite the many contentious issues it raised, the AI Framework Act went through only two review subcommittee meetings. The National Assembly and the government celebrated South Korea becoming the second country in the world to pass comprehensive AI legislation. South Korea's situation exemplifies the global "race to the bottom" in regulatory relaxation amid worldwide competition for AI development.
South Korea's CSOs are not opposed to the establishment of the AI Framework Act. On the contrary, they advocate for its swift enactment to protect citizens' safety and human rights from AI systems that are already being deployed without any regulation. That urgency, however, does not justify formulating the act hastily and pushing it through.

Issues with the AI Framework Act
The industry and government argue that because AI adoption is still in its early stages, the priority should be industrial promotion to strengthen the international competitiveness of domestic AI companies, claiming it is not too late to introduce regulations on AI risks gradually. However, various AI systems have already been deployed and are in use throughout Korean society, and risks and harms are already occurring.
For example, how can we ensure the fairness of recruitment AI that is already widely used in public institutions and businesses? If a recruitment AI is biased, how can job seekers recognise this and raise concerns? The police have reportedly completed the development and demonstration testing of intelligent CCTV (closed-circuit television) aimed at identification, tracking, and crime and behaviour prediction. How can we guarantee that such CCTV will not be used for citizen surveillance? Are AI digital textbooks collecting and monitoring too much personal information, including students' after-school learning activities? Companies and the government have a responsibility not merely to fend off regulation, but to offer alternatives that address these concerns.
The AI Framework Act passed by the National Assembly, however, fails to address these concerns. For example:
- There are no provisions regarding prohibited artificial intelligence practices.
- The scope of high-impact AI providers remains narrow, and penalties for violations of obligations are inadequate.
- While the definition of "those affected by AI" is fortunately included, there are no provisions for their rights or remedies.
- Obligations for general-purpose AI operators, such as disclosure of training data, are also not included.
- Moreover, a harmful new clause was added that exempts AI used for defence or national security purposes from the application of the law.
New report presents civil society concerns and perspectives
Following the inauguration of US President Donald Trump's second administration and the launch of China's DeepSeek, Korean companies are demanding further deregulation while emphasising the need to strengthen global competitiveness, and the government and National Assembly are responding favourably. While companies argue that Korea should not miss the "golden opportunity" for AI development, civil society is concerned about missing the golden opportunity to establish governance for safe and trustworthy AI. Korean civil society is aware that scientists and CSOs worldwide share similar concerns. We will strive to contribute to the formation of global norms for safe and trustworthy AI by sharing Korea's situation with the rest of the world.
The Korean Progressive Network Jinbonet, in collaboration with the Institute for Digital Rights and with support from APC, has published a report entitled Research on AI policy and issues in key areas in South Korea: Public sector, law enforcement, education and social welfare.
More research is needed on what types of AI systems are being introduced across Korean society, and under what procedures and policies. The report serves this purpose: it analyses the standards for AI regulation, the implementation status, and actual or potential problems in key areas, including public administration, law enforcement, education and social welfare. The report also covers the key contents of the AI Framework Act passed by the National Assembly, civil society's response, and the act's shortcomings. It highlights that the act largely excludes the key regulations civil society has been demanding, and presents concrete recommendations for improving it.
Byoung-il Oh is the president of the Korean Progressive Network Jinbonet.