
Hello. I am Shin Jun-seon, a lawyer at Cheongchul Law Firm.
Generative artificial intelligence (AI) has become an indispensable element of business innovation. Companies are actively applying it to improve operational efficiency, innovate services, and develop new businesses, and its influence grows by the day.
However, as AI develops, it inevitably processes large amounts of personal data, and the legal risks arising in that process have become a core challenge that corporate executives must address. In August 2025, the Personal Information Protection Commission announced the "Guidelines for Processing Personal Information for the Development and Utilization of Generative AI," providing concrete guidance for every stage of AI development and operation. The guidelines reflect statutory interpretation and actual investigation cases on personal data risk management in the AI field, as well as regulatory sandbox approvals and overseas regulatory trends, so executives can draw practical insights from them.
Today, based on these guidelines, I will summarize five points of personal information risk management, one for each stage of generative AI development and utilization, that CEOs and key decision-makers should be aware of.
Objective Setting Stage – Ensuring Legal Justification through Clear Objectives
The first thing to confirm when launching an AI project is the specific objective you aim to achieve with generative AI. If the objective is vague, the scope of personal data collection and use expands indefinitely in later stages, making the resulting legal risks difficult to control. Therefore, when setting objectives, you must clarify ① what personal data will be processed and ② for what purpose.
The data used for pre-training or fine-tuning generative AI may include not only directly identifying information such as names and contact details but also indirectly identifying information such as logs and location data. The processing purpose should not be stated broadly as "service improvement" or "future analysis," but in specific, clear terms such as "enhancing user-specific recommendation models" or "improving chatbot consultation quality."
Once the objective is established, a legal basis for that purpose must also be secured. When utilizing publicly available personal data, the 'legitimate interests' requirements (necessity, legitimacy, balancing of interests) must be met; for existing customer data, data subject consent or the necessity of contract performance can serve as the basis. Most importantly, an internal management system must ensure that the scope of collection, storage, and use does not exceed the initially set purpose.
Strategy Development Stage – Integrating Privacy Protection into Design
An AI project does not end with setting clear objectives; it requires a concrete strategy for how to achieve them. Especially when personal information is involved, it is essential to integrate Privacy by Design (PbD) from the early stages of development.
Companies are expected to have a procedure for assessing risks in advance through a Personal Information Impact Assessment (PIA). By evaluating in advance the sensitivity of the data used for AI training, the expected scale of processing, and the possibility of overseas transfer, and supplementing safeguards accordingly, companies can avoid the criticism of having failed to review these issues should problems arise later. The PIA results also help determine which data should be pseudonymized and which should be avoided entirely.
Furthermore, the strategy differs depending on how a company adopts AI: using an existing LLM requires careful review of data sources and output controls; using a service API requires contractual provisions on liability for personal information processing; and in-house development requires data governance and safety measures.
AI Learning and Development Stage – More Data, More Risks
In the AI training phase, ensuring the legality of data and applying safety measures are key. Indiscriminately collecting publicly available personal information can infringe rights, so the legitimate interests requirements (necessity, legitimacy, balancing of interests) must be met, and any additional use of existing data should be assessed for its relevance to the original purpose.
Training data may also contain false or malicious information, posing a risk of data contamination that distorts model performance, so source verification and filtering procedures should be established at the collection stage. It is equally important to apply technical protective measures such as pseudonymization, encryption, and access control to guard against leakage or misuse; these safety measures are the basic premise for reducing legal risk and securing corporate trust.
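As an illustration of how such safety measures might look in practice, the sketch below pseudonymizes direct identifiers with a salted one-way hash and excludes records whose source has not been verified. This is a minimal hypothetical example: the field names, source allowlist, and salt handling are assumptions for illustration, not requirements from the guidelines.

```python
import hashlib

# Hypothetical values for illustration only; in practice the salt must be
# stored separately from the data and rotated under internal policy.
SALT = "store-and-rotate-this-secret-separately"
VERIFIED_SOURCES = {"internal_crm", "licensed_dataset"}

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def prepare_record(record: dict):
    """Exclude records from unverified sources; pseudonymize identifiers."""
    if record.get("source") not in VERIFIED_SOURCES:
        return None  # source verification failed: keep out of the training set
    cleaned = dict(record)
    for field in ("name", "email", "phone"):  # assumed identifier fields
        if field in cleaned:
            cleaned[field] = pseudonymize(cleaned[field])
    return cleaned

records = [
    {"source": "internal_crm", "name": "Hong Gil-dong", "text": "consultation log"},
    {"source": "scraped_forum", "name": "unknown", "text": "unverified post"},
]
training_set = [c for c in (prepare_record(r) for r in records) if c is not None]
```

Even a simple pipeline like this documents that source verification and pseudonymization were applied before training, which supports the accountability the guidelines call for.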
System Application and Management Stage – Ensuring Operational Transparency
When applying AI systems to actual services, transparency and the protection of data subjects' rights become top priorities. Companies should specify in their privacy policies how AI utilizes personal information and establish procedures for users to request access, correction, deletion, and suspension of processing.
In addition, an Acceptable Use Policy (AUP) should be established to prevent misuse of the service, and personal information breach reporting channels should always be operational. Even after deployment, regular risk checks should be conducted, and the model's prediction errors and potential exposure of personal information should be continuously managed.
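One way to operationalize the post-deployment check on personal information exposure is a simple scan of model outputs before they reach users. The sketch below is illustrative only: the two patterns (email addresses and Korean mobile-number formats) are assumptions, and a production system would need far more robust detection.

```python
import re

# Illustrative patterns only; real deployments need broader PII coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "kr_mobile": re.compile(r"01[016789]-?\d{3,4}-?\d{4}"),
}

def redact_output(text: str):
    """Return (redacted text, list of PII types detected) for a model output."""
    detected = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            detected.append(label)          # log the exposure type for review
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, detected

safe, hits = redact_output("Contact me at hong@example.com or 010-1234-5678.")
```

The `hits` list can feed the regular risk checks described above, giving the privacy team a continuous record of what kinds of personal information the model attempted to expose.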
AI Privacy Governance Establishment Stage – Establishing an Enterprise-Wide Management System
AI personal information protection should not rest with any single department; it should be operated through an enterprise-wide management system.
The Chief Privacy Officer (CPO) should not only be guaranteed independent authority but also maintain close collaboration with the Chief AI Officer (CAIO), the Chief Information Security Officer (CISO), and the development, security, and legal departments. In particular, the CPO should be involved from the early stages of AI planning and development, gathering adequate information on personal information processing and providing timely feedback to the relevant departments, so that Privacy by Design (PbD) is integrated throughout the entire process of service delivery. It is also important to institutionalize regular risk checks, internal audits, and a documentation and reporting system.
Conclusion
Generative AI offers companies immense opportunities, but neglecting the legal requirements for personal data processing can create devastating risks. The Personal Information Protection Commission's guidelines are not mere recommendations; they are the minimum standards that any company seeking to use generative AI in its business must meet. When executives systematically manage the risk points at each stage, AI can become a true asset for business growth.
Please use the checklist below to verify whether your company's personal information management system meets the requirements at each stage.
AI Personal Information Risk Management Checklist (For CEOs and Executives)
1. Clarification of Objectives
• Is the purpose of collecting and using personal information specifically set?
• Are the personal information items collected the minimum necessary to achieve the purpose?
• Is there internal control to prevent additional use beyond the initial purpose?
• Are objectives and scopes documented and shared internally?
2. Securing Legal Grounds and Privacy by Design
• Have you confirmed the legal grounds for each data source (public, existing, third-party provision)?
• Have you clearly distinguished between consent, legitimate interests, and contractual obligation?
• Did you conduct a Personal Information Impact Assessment (PIA) in advance?
• Did you reflect PbD (Privacy by Design) in the design stage?
• Have you reviewed responsibilities and safety measures according to existing LLM, service APIs, and in-house development methods?
3. AI Learning and Development
• Does the collection of public information meet the legitimate interests requirement?
• Is the reuse of existing data reasonably related to the initial purpose?
• Are there procedures for source verification and blocking false or malicious information?
• Have you applied safety measures such as pseudonymization, encryption, and minimizing access permissions?
• Are you operating a monitoring system to prevent data contamination?
4. System Application and Management
• Have you specifically detailed how and to what extent AI is utilized in your personal information handling policy?
• Is there a procedure for data subjects to exercise their rights to access, rectify, delete, stop processing, and reject automated decision-making?
• Have you established an Acceptable Use Policy (AUP) to prevent misuse?
• Are you operating personal information breach reporting and consultation channels?
• Are you regularly checking performance and verifying potential personal information exposure after model deployment?
5. Establishing Privacy Governance
• Does the CPO have independent authority and participate from the initial stages?
• Is the CPO maintaining a collaborative relationship with the CAIO and CISO?
• Is PbD operationalized to be integrated throughout the entire service process?
• Are there regular risk checks and internal audit procedures?
• Is the personal information processing documented and managed with a reporting system?


