
I. Introduction
Recognizing the rapidly evolving nature of AI and the scope of its potential impact, the South Korean government has been undertaking various legislative and policy initiatives to both promote the AI industry and appropriately regulate potential side effects.

Many AI bills have already been submitted to the 21st National Assembly.  Among them, the “AI Framework Act” bill, which consolidates seven AI-related bills and is supported by the Ministry of Science and ICT (MSIT), passed the Legislative Review Subcommittee under the Science, ICT, Broadcasting, and Communications Committee of the National Assembly in February 2023.  Although progress on this proposed AI Framework Act has since stalled, discussions may resume with the inauguration of the 22nd National Assembly on May 30, 2024.

Other authorities have also been active on both legislative and policy fronts.  For instance, in March 2024, the Korea Communications Commission (KCC) announced that enacting separate legislation, titled the “Act on Protection of Artificial Intelligence Service Users,” is one of its key initiatives.  While the specific details of the KCC bill have yet to be disclosed, the legislation aims to create a system for reporting user protection matters to the government and to require AI services to undergo a basic impact assessment upon release.

Until these legislative attempts are realized, AI in South Korea will be regulated by existing rules governing personal information, copyright, and telecommunications.  This backdrop presents potential conflicts between the characteristics of AI and conventional legal norms, which may result in a regulatory vacuum.  Additionally, each regulatory authority is reviewing the necessity and feasibility of regulating new issues raised by AI.  In the following sections, we discuss the challenges of regulating AI under the existing legal framework and recent developments in related discussions.

II. Copyright

Copyright affects a broad array of AI-related issues, including the use of potentially copyrighted works and/or databases for AI model training and the generation of creative outputs through AI that could infringe existing copyrights or other rights under the Copyright Act.  Courts are tasked with enforcing the Copyright Act in this unprecedented era of AI activity, but there has been no court decision specifically addressing AI issues, nor any clear guidance from the authorities.

In response to these challenges, the Ministry of Culture, Sports and Tourism (MCST), along with the Korea Copyright Commission under its supervision, has been closely reviewing AI-related copyright issues by operating the “AI-Copyright Taskforce” since 2023.

A. Major Copyright Issues in AI

1. Training AI models

Under South Korea’s Copyright Act, copyright holders generally have exclusive rights to reproduce, publicly transmit, and distribute their copyrighted works.  Unauthorized use of another person’s copyrighted work risks copyright infringement.  The Act also grants “database producers” exclusive rights to reproduce, distribute, broadcast, or transmit all or a substantial part of their databases.  Violations of the Act are subject to civil and criminal liabilities.

While the “fair use” exception may serve as a potential defense, its applicability in the AI industry remains uncertain.  In response, the Korean legislature proposed amendments to the Copyright Act to introduce liability exemptions for text and data mining (TDM).  However, those bills were discarded when the 21st National Assembly’s term ended on May 29, 2024.  These legislative proposals may reemerge during the 22nd National Assembly’s term.

2. Creative output generated by AI

AI-generated content that is identical or similar to an existing copyrighted work may lead to copyright disputes.  Since the Copyright Act defines “copyrighted works” as “creations that express human ideas or emotions,” questions arise regarding the copyrightability of AI-generated content and who among the various stakeholders in the value chain – AI service users, AI service providers, and copyright holders of training data – should hold the copyright.  Under the MCST’s current interpretation of the law, content generated solely by AI without human creative intervention cannot be protected as a copyrighted work under the Copyright Act.  However, lawmakers have proposed amendments to introduce the concepts of “copyrighted AI work” and “AI copyright holder” into the Copyright Act, though the future of these amendments remains uncertain.

B. Developments in the MCST

In its “Guidelines on Copyrights for Generative AI,” the MCST outlines its basic policy direction on AI and copyright issues, incorporating discussions from its “2023 AI-Copyright Taskforce.”  While not legally binding, the Guidelines provide the agency’s perspectives on key issues and suggest the following, among others:

  1. Securing legal basis before using copyrighted works for AI training, as it is unclear whether such use would qualify as “fair use” under the Copyright Act;
  2. Applying filtering technologies to prevent generating content that is identical or similar to an existing copyrighted work (a simplified sketch of such a filter follows this list); and
  3. Allowing AI-generated content to be registered as “compilation” to the extent human intervention added creativity by editing or arranging AI-generated materials.
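
To make the second suggestion above concrete, the following is a minimal, hypothetical sketch of an output filter that withholds generated text when it closely matches a reference corpus of protected works.  The corpus, similarity metric, and threshold are illustrative assumptions only (the Guidelines do not prescribe any particular technology), and production systems would rely on more scalable matching techniques such as embedding- or fingerprint-based comparison.

```python
# Hypothetical output filter: block generated text that is too similar to a
# known reference corpus of protected works. The corpus, threshold, and
# character-level similarity metric are illustrative assumptions only.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.85  # illustrative cutoff, not a legal standard


def is_too_similar(generated: str, reference_corpus: list[str]) -> bool:
    """Return True if the generated text closely matches any reference work."""
    return any(
        SequenceMatcher(None, generated.lower(), ref.lower()).ratio()
        >= SIMILARITY_THRESHOLD
        for ref in reference_corpus
    )


def filter_output(generated: str, reference_corpus: list[str]) -> str:
    """Release the output only if it passes the similarity check."""
    if is_too_similar(generated, reference_corpus):
        return "[output withheld: high similarity to a protected work]"
    return generated


if __name__ == "__main__":
    corpus = ["Call me Ishmael. Some years ago, never mind how long precisely."]
    print(filter_output("Call me Ishmael. Some years ago, never mind how", corpus))
```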

The MCST has launched its “2024 AI-Copyright Taskforce” and plans to announce more specific policy directions by late 2024.

III. Personal Information

South Korea’s Personal Information Protection Act (PIPA) applies to the processing of personal information in relation to AI.  The PIPA is a regulatory regime primarily based on the consent of data subjects, which raises various issues in the context of AI, such as how personal information contained in training data should be processed.

The Personal Information Protection Commission (PIPC), the primary privacy regulator in South Korea, has been holding various discussions to determine policy directions for the practical application of the PIPA where AI training data includes personal information, as well as the need for further AI-specific legislation or guidelines.

A. Major Privacy Issues in the Context of AI

1. Processing of publicly available information

Publicly available information that AI service providers collect and use for training AI models may include personal information, which raises the issues of (i) the applicable legal basis for AI service providers to collect and use such personal information, and (ii) whether such personal information should be detected/identified and removed/de-identified, and if so, to what extent.

The PIPA generally requires personal information controllers to obtain the consent of data subjects to process their personal information.  While the PIPA has been trending toward expanding the scope of exceptions to this consent requirement, there are still no clear standards for when personal information controllers may process publicly available personal information to train AI models.

In this regard, the PIPC plans to publish new guidelines within the first half of 2024 for AI service providers that use publicly available information, covering issues such as: (i) standards for determining the scope of the data subject’s objective intent to grant consent, (ii) standards for legitimate interests of personal information controllers, and (iii) risk mitigation measures to consider in the weighing of interests.

For reference, the PIPC recently conducted a survey of AI service providers regarding their personal information protection practices.  Through this survey, the PIPC concluded that publicly available information collected for AI model training can contain personal information such as unique identification information, bank account information, and credit card information, and recommended that AI service providers implement stronger measures to remove such information in advance.  The PIPC also recommended that AI service providers take measures to remove or block certain webpages (URLs) that expose Korean data subjects’ personal information during the pre-training phase.  Furthermore, the PIPC has shown continued interest in research on technologies for directly detecting, identifying, and masking personal information in training data, such as credit card information, bank account information, and other unique identification information.
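
As a purely illustrative example of the kind of pre-training masking measure described above, the following sketch scans raw training text for two common identifier formats and redacts them before the data reaches a model.  The regular expressions and placeholder labels are assumptions for illustration; the PIPC has not prescribed any specific detection technology, and real pipelines would combine many validated detectors.

```python
# Hypothetical pre-training masking step: detect and redact unique
# identifiers (here, a Korean resident registration number and a 16-digit
# card number) in raw training text. Patterns are illustrative only.
import re

PII_PATTERNS = {
    # Korean resident registration number: YYMMDD-GNNNNNN
    "RRN": re.compile(r"\b\d{6}-[1-4]\d{6}\b"),
    # 16-digit card number, with or without separators
    "CARD": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
}


def mask_pii(text: str) -> str:
    """Replace each detected identifier with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


if __name__ == "__main__":
    sample = "Customer 900101-1234567 paid with card 1234-5678-9012-3456."
    print(mask_pii(sample))
    # -> Customer [RRN REDACTED] paid with card [CARD REDACTED].
```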

2. Ensuring transparency

Under the PIPA, personal information controllers must establish and publicly disclose privacy policies containing information such as the purposes of processing personal information and the period for which it is processed and retained.  In the context of AI privacy, discussions on transparency focus on ensuring that data subjects can clearly understand how their personal information is collected and processed to develop AI models and services, such as by disclosing the sources and collection methods of training data and by minimizing the possibility of identification.

On this issue, the PIPC recommended during its recent survey that AI service providers clearly notify their users when user input data goes through human review for purposes such as improving AI models.  Going forward, the PIPC is expected to publish additional guidelines on disclosing the sources and collection methods of training data, as well as on the exercise of data subject rights (including the rights to access, remove, or suspend the processing of personal information).

3. Automated decisions

The amended PIPA (effective as of March 15, 2024) established a new provision related to “automated decisions” (Article 37-2) which is similar to Article 22 (Automated individual decision-making, including profiling) of the GDPR.  This provision prescribes the data subject’s (i) right to an explanation if their rights or obligations are affected by a decision made by processing their personal information using a “fully automated system,” such as a system using AI technology, and (ii) right to refuse that automated decision if it materially affects their rights or obligations.  Furthermore, the amended PIPA requires personal information controllers to make additional disclosures, including the standards and procedures for automated decisions and how personal information is processed in such decisions, in a manner that allows data subjects to easily check such information.

In relation to the foregoing, the PIPC has also taken the following actions: (i) on March 12, 2024, the PIPC released its explanatory notes on the amendment to the PIPA and the second amendment to the Enforcement Decree of the PIPA, providing guidance on how this new provision on automated decisions will be applied; (ii) on May 17, 2024, the PIPC released its draft subordinate regulations (notification); and (iii) on May 24, 2024, the PIPC published its draft guidelines for automated decisions.

B. Developments in the PIPC

In 2023, the PIPC announced its policy directions for safe utilization of personal information in the age of AI.  These AI policy directions lay out the currently applicable standards and principles as well as the PIPC’s plans for building the detailed standards and principles for the processing of personal information in the context of AI.

These plans include guidelines for every stage of AI services, such as the standards for pseudonymization of unstructured data (published earlier this year), as well as guidelines on publicly available information, biometric information, synthetic data, mobile imaging devices, and transparency, which are slated for release by the end of 2024.

The PIPC’s upcoming guidelines are expected to have a considerable impact on data processing practices in the AI sector, and should be closely monitored.

IV. Antitrust and Competition

Under the Monopoly Regulation and Fair Trade Law (FTL), conduct such as displaying one’s own products more prominently than others’ through an algorithm, collecting and using competitors’ business information or consumers’ personal data by means of an algorithm, and using external content without compensation to train one’s own AI service may constitute abuse of market dominance or an unfair trade practice.

In April 2024, the Korea Fair Trade Commission (KFTC) commissioned a research study to better understand the AI market and analyze potential anticompetitive and consumer protection issues, focusing primarily on the generative AI market.  This study may have a significant impact not only on the KFTC’s policy directions but also on its enforcement priorities relating to AI.

A. AI Algorithms and Self-Preferencing

The KFTC has determined that, under the FTL, displaying one’s own products or services more prominently than those of others (e.g., third-party sellers) through an algorithm constitutes “self-preferencing,” which may amount to unfair discrimination or unfair customer solicitation.

For example, in October 2020, the KFTC imposed an administrative fine on an online shopping platform company for applying a search result algorithm that favored the sellers on its platform using the company’s online shopping mall solution.  More recently, in February 2023, the KFTC sanctioned a mobility platform company with an administrative fine for applying an algorithm that preferentially allocated more passenger calls to taxis operating under its franchise.  Consistent with these enforcement actions, in January 2023, the KFTC released its “Guidelines for Review of Abuse of Market Dominance by Online Platform Operators,” which specify self-preferencing as one of the main types of anticompetitive practices by online platform operators.

As the KFTC is expected to continue to scrutinize self-preferencing practices, AI service providers should ensure that algorithms are designed to serve their original purpose and that there are justifiable reasons for parameter weights.
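
By way of illustration only, the sketch below shows what an explicitly documented ranking function might look like: every feature and weight is visible, each weight carries a stated user-relevance rationale, and no feature favors the operator’s own listings.  The features, weights, and rationales are invented for this example and do not reflect any KFTC requirement.

```python
# Hypothetical documented ranking function. Features, weights, and
# rationales are invented for illustration; the point is that each weight
# is explicit and justified by user relevance, with no first-party bias.
from dataclasses import dataclass


@dataclass
class Listing:
    relevance: float       # query-match score, normalized to 0..1
    rating: float          # user rating, normalized to 0..1
    delivery_speed: float  # fulfillment speed, normalized to 0..1


WEIGHTS = {
    "relevance": 0.60,       # primary driver: match to the user's query
    "rating": 0.25,          # proxy for product quality
    "delivery_speed": 0.15,  # proxy for fulfillment reliability
}


def rank_score(item: Listing) -> float:
    """Compute a ranking score from user-relevance features only."""
    return (
        WEIGHTS["relevance"] * item.relevance
        + WEIGHTS["rating"] * item.rating
        + WEIGHTS["delivery_speed"] * item.delivery_speed
    )


if __name__ == "__main__":
    print(rank_score(Listing(relevance=0.9, rating=0.8, delivery_speed=0.7)))
```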

B. Collection and Use of Data

The KFTC has been closely monitoring market trends in relation to service providers’ collection and use of competitors’ information and whether it could lead to anticompetitive effects as a result of data concentration.

As for the collection and use of consumers’ personal information, the KFTC is monitoring potential exploitative or exclusionary effects that may arise from (i) requiring users to consent to the comprehensive use of their personal information as a condition for signing up for a service, and/or (ii) deceptive and excessive marketing based on behavioral data.

In addition, following the release of its “Guidelines for Review of Abuse of Market Dominance by Online Platform Operators” in January 2023, the KFTC is also actively reviewing potential abuse of market dominance by online platform operators, which may include the use of third-party content or data for AI model training without reasonable compensation, as well as unfair lock-in effects and market entry barriers arising from a lack of data portability and interoperability.

As the importance of data for developing and using AI continues to grow, the KFTC’s scrutiny of potential anticompetitive and unfair data practices is likely to increase.  Accordingly, service providers should carefully review the purpose, scope, and method of collecting and using data.

V. Labor and Employment

The question of whether “platform workers” qualify as “employees” has recently emerged as an important issue globally.  South Korea currently has no clear regulations or standards for determining the employee status of platform workers whose job duties are assigned by AI; disputes are increasing, and courts are ruling on such cases individually.

A notable case involving a Korean mobility platform company called into question whether drivers whose tasks were assigned by a big data-based AI qualify as “employees” under the Labor Standards Act, entitled to certain statutory protections, as opposed to independent contractors.  The lower court ruled that the drivers were not employees on the following grounds: (i) the details of the drivers’ work were determined by the users’ calls, not by the employer’s instructions; (ii) the drivers could decide whether to accept a ride; (iii) the drivers were not subject to the employer’s employment rules or service regulations; (iv) the drivers’ working hours and locations were not specifically determined; and (v) the drivers did not execute contracts directly with the employer but with the employer’s business partners.

The appellate court, however, acknowledged the drivers’ employee status on the grounds that they could not independently determine the details of their job duties or conditions because: (i) the service operator (the employer) specifically directed and supervised the drivers’ work performance and attendance; (ii) the service operator had the ultimate authority to designate the working hours or locations of the drivers; and (iii) the drivers’ work for the service operator was continuous and exclusive.  This case is currently pending at the Supreme Court, and its outcome will likely have an impact on the principles for determining the employee status of workers whose tasks are assigned by AI.

VI. Governance

Like the EU AI Act, the AI-related bills that have been proposed in South Korea’s legislature or are being discussed by Korean government agencies take a “risk-based approach” that classifies AI systems into different levels of risk and imposes obligations proportionate to those risk levels.

A. Scope of High-Risk AI

Similar to the EU AI Act, most of the AI bills in South Korea propose designating certain areas as high-risk AI based on the principle that “AI utilized in areas that have significant impact on people’s lives, bodies, and protection of fundamental rights” should be considered high-risk.  While the details of the bills vary, high-risk areas generally include: (i) AI related to human life and health; (ii) AI related to the management and operation of major social infrastructure, including energy, water, and electricity; (iii) AI used for judgment or evaluation purposes that has a significant impact on individuals’ rights and obligations, such as recruitment, credit rating, and the screening of loan applications; and (iv) AI related to biometric identification.

Furthermore, the scope of “high-risk” under some AI-related bills in South Korea may be broader than that under the EU AI Act, as the bills (a) allow for the expansion of what is deemed as “high-risk” through subordinate regulations, and (b) lack exceptions to the defined categories of “high-risk.”

B. Obligations of High-Risk AI Service Providers

The bills aim to impose specific obligations on service providers who provide any products or services using high-risk AI.  While the details vary, most include the following:

  • High-risk AI service providers must inform users in advance that the products or services are based on high-risk AI.
  • Anyone intending to develop, utilize, or provide high-risk AI products or services must review whether the AI falls within the scope of high-risk AI, and if necessary, request confirmation from the MSIT.
  • Anyone intending to develop or utilize high-risk AI must take actions to ensure the reliability and safety of AI, including by establishing and operating risk management measures, drafting and storing relevant documents, and providing an explanation of the data used for training.

Meanwhile, the move toward establishing regulations on generative AI in Korea is gaining momentum.  For example, there are discussions about imposing a notification obligation on those providing products or services using generative AI, similar to the requirements for high-risk AI service providers.

Obligations of service providers that use or intend to use AI, or that provide related products or services, may change significantly depending on how the scope of high-risk AI is determined and what obligations are imposed on high-risk AI.  As noted above, some of the bills would also require service providers to confirm whether their AI services fall within the scope of high-risk AI and, if necessary, to obtain confirmation from the MSIT.  Therefore, AI service providers are advised to stay informed of South Korea’s legislative developments so that they can respond effectively to the changing regulatory environment.

VII. Conclusion

As South Korea navigates the complexities of AI regulation, the government is making continuous efforts to create a robust framework that both fosters innovation and mitigates potential risks.  The ongoing legislative and policy initiatives reflect a proactive approach to addressing the multifaceted challenges posed by AI, including those related to copyright, personal information, antitrust, labor, and governance.

AI developers and service providers should stay informed about these developments to adapt effectively to the evolving regulatory landscape.  By understanding the scope and implications of the proposed regulations, particularly those concerning high-risk AI, stakeholders can ensure compliance and maintain a competitive advantage.  The dynamic nature of AI legislation in South Korea underscores the importance of vigilance and adaptability in navigating this rapidly evolving field.