Latest News & Insights

Athentic Consulting’s team of experienced experts brings you the
latest news and insights on law and regulation.

ChatGPT and Personal Data: What Organizations Must Know

In an era where artificial intelligence (AI) tools such as ChatGPT have become digital assistants for many, helping with work tasks as well as daily activities such as planning trips, drafting letters, or even calculating taxes, many people now prefer asking an AI over traditional search methods. Of course, the AI's responses are only as accurate and well tailored as the information the user provides. This leads to some important questions that many people are curious about:

  • "How much does ChatGPT know about its users?"
  • "Does ChatGPT store user data?"
  • "How is the data provided by users stored?"

These questions are linked to important issues, particularly the rights of data subjects, which are protected under the Personal Data Protection Act (PDPA) B.E. 2562 (2019).

To simplify the explanation, this article uses ChatGPT as an example of an AI tool and answers the following questions:


What is ChatGPT?

ChatGPT is an example of a Large Language Model (LLM), a type of artificial intelligence (AI) focused on understanding and generating text (Natural Language Processing, or NLP) for tasks such as answering questions, holding conversations, or creating various documents.


How much does ChatGPT "know" about its users?

ChatGPT does not inherently "know" its users; it knows only what users provide. Since the model has no personal database of users (such as emails, phone numbers, or social media accounts), it processes only the data the user inputs. A key point, however, is that users may unknowingly provide personal data, for example through a prompt like, "Help me draft a letter to my boss, Mr. Sirichai, from ABC Company, and include the phone number 09x-xxx-xxxx." Such a message contains information that can identify an individual directly or indirectly, such as a name, an affiliation, and a phone number, all of which qualify as "personal data" under the PDPA. This raises the next question:
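
As a practical precaution, some organizations screen prompts for obvious identifiers before the text ever leaves their network. The Python sketch below illustrates one minimal, regex-based approach; it is our own example rather than any OpenAI feature, and simple patterns like these will miss names such as "Mr. Sirichai", so real deployments would rely on dedicated PII-detection tooling.

    import re

    # Illustrative patterns only: a Thai-style mobile number and an email
    # address. Regexes miss names and many other identifiers, so treat
    # this as a first filter, not a complete safeguard.
    PHONE_PATTERN = re.compile(r"\b0\d{1,2}[- ]?\d{3,4}[- ]?\d{4}\b")
    EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

    def redact_prompt(prompt: str) -> str:
        """Mask phone numbers and email addresses before a prompt is sent."""
        prompt = PHONE_PATTERN.sub("[PHONE]", prompt)
        prompt = EMAIL_PATTERN.sub("[EMAIL]", prompt)
        return prompt

    raw = ("Help me draft a letter to my boss, Mr. Sirichai, from ABC Company, "
           "and include the phone number 091-234-5678.")  # fake number for illustration
    print(redact_prompt(raw))
    # -> "... and include the phone number [PHONE]." (the name still passes through)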


Does ChatGPT store user data?

ChatGPT does store the messages (prompts) and conversations users send, for processing and system-improvement purposes. This includes both what the user asks and the answers ChatGPT returns. OpenAI, ChatGPT's service provider, records this data and also collects other IT data, such as IP addresses, access times, and browser type, for usage analysis. However, ChatGPT does not access other personal information stored on users' devices and does not record audio or video unless it is sent through the chat interface.


How is the data provided by users stored?

ChatGPT retains users' personal data only as long as necessary to provide its services. The retention period depends on various factors and, in some cases, on the user's settings. For example, ChatGPT's temporary chats are not displayed in the user's history and are retained for no longer than 30 days, for security purposes.

OpenAI states that it employs commercially reasonable technical, administrative, and organizational measures to protect personal data from misuse, unauthorized access, disclosure, alteration, or destruction. Users should nevertheless exercise caution when providing personal data to ChatGPT; OpenAI does not accept responsibility for data breaches that occur when privacy settings or security measures are bypassed.


Privacy Risks and Precautions

Because ChatGPT stores users’ personal data, that data is also exposed to privacy risks.

  • If users input personal information—such as full name, phone number, address, or national ID number—this information is immediately sent to OpenAI’s servers. If a data breach or unauthorized access occurs, it may result in a violation of personal data protection laws.
  • Data sent to ChatGPT is processed on servers located abroad. Under the PDPA, transferring personal data outside Thailand is generally permitted only if the destination country provides an adequate level of data protection, unless a statutory exception applies. Providing personal data to ChatGPT therefore constitutes a cross-border data transfer and must comply with the PDPA's requirements on such transfers.
  • Even if users delete their chat history from the chat interface or use temporary chat mode, the data may still be retained in OpenAI’s system for a period of time (up to 30 days). This means that the chat history may still be accessible even after the user deletes it.
  • ChatGPT may also generate inaccurate information, a phenomenon known as AI hallucination: the model produces responses that sound plausible but are incorrect, inaccurate, or entirely fabricated. For example, a user may find that ChatGPT gives an incorrect date of birth and then try to exercise the right under the GDPR to have the information corrected. OpenAI, however, cannot simply modify or delete the erroneous information, because ChatGPT's output is generated by automated systems and fixed algorithms; even OpenAI cannot manually alter what the model produces.

These risks and concerns illustrate the potential impacts on personal data when using ChatGPT — both in everyday life and within organizations. The next question is how organizations and users can safely adopt ChatGPT in the workplace while complying with the minimum standards required under the PDPA. This leads to the next section:


How to Safely Use ChatGPT in an Organization Under the PDPA

When using ChatGPT within an organization, security measures must be in place to ensure that its use is safe, transparent, and compliant with personal data protection laws. Data controllers may need to adopt the following measures:

1. Training for Employees on ChatGPT Usage and Data Protection (PDPA Awareness)

Providing training to personnel at all levels is one of the organizational security management measures required under the 2022 Notification of the Personal Data Protection Committee on Personal Data Security Measures. Such training may cover fundamental knowledge about using ChatGPT, the potential impacts and risks to personal data, as well as key principles of personal data protection under the PDPA. This helps enhance employees’ awareness, ensuring that they perform their duties safely and in compliance with the law.

In addition, having personnel who understand the risks associated with ChatGPT and personal data helps organizations prevent violations of data subjects’ rights, reduce risks arising from inappropriate use of technology, and strengthen organizational-level security measures in alignment with regulatory requirements.

2. Conduct a Data Protection Impact Assessment (DPIA)

When using ChatGPT, data controllers must assess whether the activity requires a Data Protection Impact Assessment (DPIA). Under data protection principles, a DPIA is required when the processing may present a high risk to the rights and freedoms of data subjects. This includes situations where ChatGPT processes sensitive personal data, supports automated decision-making that affects individuals, performs profiling, involves large-scale data processing, or transfers personal data to AI service providers located abroad. In such cases, the data controller should conduct a DPIA in advance to identify the risks arising from using ChatGPT to process personal data and to evaluate whether the organization's security measures are adequate. For example, the Ministry of Labour of Thailand recommended in its 2024 guidelines that organizations conduct a risk assessment before deploying AI tools to prevent adverse impacts on employees and data subjects.

Conversely, if the use of ChatGPT does not involve personal data, such as when only non-identifiable or anonymized information is used, or when ChatGPT operates within secure internal systems with adequate safeguards and without materially affecting the rights of data subjects, a DPIA may not be required. Nonetheless, data controllers should document the reasoning behind the decision not to conduct a DPIA to demonstrate compliance with the PDPA and the organization's internal risk management standards.
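
To make that screening step concrete, the checklist sketch below encodes the high-risk triggers listed above as boolean flags; any single trigger suggests a full DPIA should follow. The field names are our own invention for illustration, and the sketch is a planning aid, not legal advice.

    from dataclasses import dataclass

    # Hypothetical screening record; the flags mirror the high-risk
    # triggers described in the text above.
    @dataclass
    class ChatGPTUseCase:
        sensitive_data: bool        # processes sensitive personal data
        automated_decisions: bool   # automated decision-making affecting individuals
        profiling: bool             # builds profiles of data subjects
        large_scale: bool           # large-scale data processing
        cross_border: bool          # personal data sent to a provider abroad

    def dpia_likely_required(case: ChatGPTUseCase) -> bool:
        """Any single high-risk trigger means a full DPIA should follow."""
        return any(vars(case).values())

    # Example: prompts containing customer data go to servers abroad.
    case = ChatGPTUseCase(False, False, False, False, cross_border=True)
    print(dpia_likely_required(case))  # True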

3. Clear Policies and Guidelines for Using ChatGPT

Organizations should establish a clear ChatGPT usage policy to guide internal operations. This policy should define the scope and methods of use in alignment with the organization’s operational objectives and all applicable laws. The policy should include key principles such as:

  • Ensuring that the use of ChatGPT does not exceed the purposes of personal data processing.
  • Respecting and upholding the rights of personal data subjects.
  • Complying strictly with personal data protection laws.
  • Defining the roles and responsibilities of executives, employees, and relevant personnel in managing the risks associated with ChatGPT usage.

For example, the Bank of Thailand has issued draft guidelines requiring financial institutions to establish AI usage policies that align with organizational goals, legal requirements, and principles of fairness. These guidelines also call for periodic reviews to keep up with technological advancements and emerging risks. Having such clear policies helps organizations create a consistent framework for decision-making regarding AI and ensures that all personnel operate in accordance with the same standards.

4. Complaint Channels for Data Subjects

Thai personal data protection law does not explicitly guarantee the right not to be subject to automated decision-making, unlike Article 22 of the GDPR. Nevertheless, data controllers in Thailand should establish channels that allow data subjects to exercise their rights under the PDPA and request reviews of decisions made by ChatGPT. These channels may include:

  • Organizations should provide clear notice and obtain consent where appropriate. For example, if AI is used to make decisions that affect an individual's rights, such as loan approvals or job-applicant screening, the organization should clearly inform the data subject that the decision is being made by an automated system. Where necessary and appropriate, consent may be required for the use of AI in such processes, and organizations should, where feasible, offer data subjects alternatives that rely on other methods of processing personal data.
  • Organizations should establish a process allowing data subjects to request human review. Procedures should enable data subjects to request that an AI-generated decision be reassessed by a human, especially in cases where the decision has legal consequences or significantly affects their rights (e.g., being denied a service). Although Thailand’s PDPA does not explicitly provide this right, the GDPR clearly does—granting data subjects the right to request human intervention and to contest automated decisions. Implementing this principle can help organizations reduce reputational risks that may arise from unfair AI-driven decisions.

These channels should be designed so that data subjects can easily exercise their rights—such as submitting requests through the organization’s website or internal systems—and organizations should set appropriate timelines for resolving issues and responding to requests, in line with PDPA requirements.
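
As a minimal illustration of such a channel, the sketch below records a data subject's request for human review of an AI-assisted decision and computes a response deadline. The structure and the 30-day target are our own assumptions for illustration, not figures or procedures prescribed by the PDPA.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    RESPONSE_DAYS = 30  # assumed internal service level, not a statutory period

    @dataclass
    class ReviewRequest:
        subject_id: str    # internal reference for the data subject
        decision_ref: str  # reference to the contested AI-assisted decision
        received: date = field(default_factory=date.today)

        @property
        def respond_by(self) -> date:
            """Deadline for a human reviewer to respond to the request."""
            return self.received + timedelta(days=RESPONSE_DAYS)

    req = ReviewRequest(subject_id="DS-0001", decision_ref="LOAN-2025-042")
    print(f"Human review of {req.decision_ref} due by {req.respond_by}")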

Avoiding AI or ChatGPT is not a viable solution. Instead, informed and responsible use ensures that AI operates within a controlled and manageable framework. Responsible adoption of such technologies is not merely a technical matter—it involves the combined interplay of data subjects, organizational processes, technology, and individual rights, all of which must progress together.

Organizations must therefore build awareness among personnel, define clear usage boundaries, and put in place robust policies. Doing so strengthens system security measures and ensures that data subjects have appropriate channels to review or challenge AI-generated decisions effectively.

By following these principles, data controllers can use ChatGPT confidently, transparently, and in a manner that respects privacy rights—while ensuring compliance with the PDPA and international standards, and fostering trust and accountability in the digital age.


Sources:

  • Nishimura & Asahi, "Comparisons between European Union's General Data Protection Regulation and Thailand's Personal Data Protection Act," Nishimura & Asahi Knowledge (Apr. 7, 2022), https://www.nishimura.com/en/knowledge/publications/20220407-32816 (last visited Nov. 19, 2025).
  • "Thailand PDPA vs GDPR: Differences Unraveled," Captain Compliance, https://captaincompliance.com/education/thailand-pdpa-vs-gdpr/ (last visited Nov. 19, 2025).
  • Formichella et al., "Artificial Intelligence, Machine Learning, and Big Data in Thailand: Legal and Regulatory Developments 2025," Lexology (July 8, 2025), https://www.lexology.com/library/detail.aspx?g=54e27706-1d02-4e28-8886-a1ca53e87b91.
  • Bank of Thailand, "Draft Policy on Risk Management for the Use of Artificial Intelligence (AI)," BOT Public Hearing (June 12, 2025), https://www.bot.or.th/th/laws-and-rules/public-hearing/public-hearing-20250612.html (last visited Nov. 26, 2025).
  • "ChatGPT, Hallucinations, and Personal Data Violations" (in Thai), Bangkokbiznews (May 18, 2024), https://www.bangkokbiznews.com/tech/gadget/1127464.
  • OpenAI, "Privacy Policy," https://openai.com/policies/row-privacy-policy/ (last visited Nov. 7, 2025).
  • OpenAI, "Data Controls FAQ," https://help.openai.com/en/articles/7730893-data-controls-faq.

Punsuree Kanjanapong
Lead - Legal Technology Counselor
Pone Pongsumrankul
Legal Technology Counselor
Patipon Prakobkit
Legal Technology Counselor