In an era where artificial intelligence (AI) tools such as ChatGPT have become digital assistants for many, helping with work tasks and everyday activities such as planning trips, drafting letters, or even calculating taxes, many people now prefer asking an AI over traditional search methods. Of course, the AI's responses are only as accurate and well tailored as the information the user provides. This raises some important questions that many people are curious about:
These questions touch on important issues, particularly the rights of data subjects, which are protected under the Personal Data Protection Act (PDPA) B.E. 2562 (2019).
To keep the explanation simple, this article uses ChatGPT as an example of an AI tool and answers the following questions:
ChatGPT is an example of a Large Language Model (LLM), a type of artificial intelligence (AI) focused on understanding and generating natural language (Natural Language Processing, or NLP), for tasks such as answering questions, holding conversations, and drafting documents.
ChatGPT does not inherently "know" its users; it knows only what users provide. Since the model has no personal database of its users (such as emails, phone numbers, or social media accounts), it processes only the data entered by the user. A key point, however, is that users may unknowingly hand over personal data, for example through a prompt like, "Help me draft a letter to my boss, Mr. Sirichai, from ABC Company, and include the phone number 09x-xxx-xxxx." This message contains data that can identify an individual directly or indirectly, such as names, affiliations, and phone numbers, all of which count as "personal data" under the PDPA.
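One practical precaution is to screen prompts for obvious identifiers before they leave the user's hands. Below is a minimal sketch of such a pre-send check in Python; the patterns and function names are illustrative only, and regular expressions cannot catch every kind of personal data (names and affiliations, for instance, need more sophisticated detection).

```python
import re

# Illustrative patterns for obvious identifiers; real PII detection
# (names, affiliations, etc.) needs more than regular expressions.
PII_PATTERNS = {
    "phone": re.compile(r"\b0\d{1,2}[-\s]?\d{3}[-\s]?\d{4}\b"),  # Thai-style mobile numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "national_id": re.compile(r"\b\d{13}\b"),  # 13-digit Thai ID number
}

def find_pii(prompt: str) -> dict:
    """Return any obvious personal data found in a prompt, keyed by type."""
    return {label: hits
            for label, pattern in PII_PATTERNS.items()
            if (hits := pattern.findall(prompt))}

def redact(prompt: str) -> str:
    """Replace detected identifiers with placeholders before sending."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

prompt = "Draft a letter to my boss from ABC Company, phone 091-234-5678."
if find_pii(prompt):
    prompt = redact(prompt)
print(prompt)  # "... phone [PHONE]."
```

Whether such screening matters at all depends on the next question: does ChatGPT actually store what users send?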
ChatGPT does store the messages (prompts) and conversations that users send, for processing and system-improvement purposes. This covers both the user's questions and ChatGPT's answers. OpenAI, the provider of ChatGPT, records this data and also collects other technical data, such as IP addresses, access times, and browser type, for usage analysis. However, ChatGPT does not access other personal information stored on users' devices and does not record audio or video unless it is sent through the chat interface.
ChatGPT retains users' personal data only as long as necessary to provide the service. The retention period depends on various factors, and in some cases on user settings. For example, ChatGPT's temporary chats do not appear in the user's history and are retained for no longer than 30 days, for security purposes.
OpenAI applies commercially reasonable technical, administrative, and organizational measures to protect personal data from misuse, unauthorized access, disclosure, alteration, or destruction. Even so, users should exercise caution when providing personal data to ChatGPT, and OpenAI does not accept responsibility for data breaches where privacy settings or security measures are bypassed. Organizations can enforce such caution centrally, as sketched below.
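For organizational use, routing all ChatGPT access through the API behind a thin wrapper makes that caution enforceable rather than merely advisory. The sketch below assumes the official openai Python SDK (v1.x) and reuses the redact() helper from the earlier sketch via a hypothetical module name; the model name is illustrative.

```python
from openai import OpenAI

from pii_check import redact  # the redact() helper sketched earlier (hypothetical module)

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask_safely(prompt: str) -> str:
    """Send a prompt to the model only after stripping obvious identifiers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute the organization's approved model
        messages=[{"role": "user", "content": redact(prompt)}],
    )
    return response.choices[0].message.content
```

Centralizing access this way also creates a single audit point for logging, retention settings, and the usage-policy checks discussed later in this article.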
The fact that ChatGPT stores users' personal data means that this data is also exposed to privacy risks.
These risks and concerns illustrate the potential impact on personal data when using ChatGPT, both in everyday life and within organizations. The next question is how organizations and users can adopt ChatGPT safely in the workplace while meeting the minimum standards required under the PDPA. This leads to the next section:
When using ChatGPT within an organization, security measures must be in place to ensure that its use is safe, transparent, and compliant with personal data protection laws. Data controllers may need to adopt the following measures:
Providing training to personnel at all levels is one of the organizational security management measures required under the 2022 Notification of the Personal Data Protection Committee on Personal Data Security Measures. Such training may cover fundamental knowledge about using ChatGPT, the potential impacts and risks to personal data, as well as key principles of personal data protection under the PDPA. This helps enhance employees’ awareness, ensuring that they perform their duties safely and in compliance with the law.
In addition, having personnel who understand the risks associated with ChatGPT and personal data helps organizations prevent violations of data subjects’ rights, reduce risks arising from inappropriate use of technology, and strengthen organizational-level security measures in alignment with regulatory requirements.
When using ChatGPT, data controllers must assess whether the activity requires a Data Protection Impact Assessment (DPIA). Under data protection principles, a DPIA is required when the processing may present a high risk to the rights and freedoms of data subjects. This includes situations where ChatGPT processes sensitive personal data, supports automated decision-making that affects individuals, performs profiling, involves large-scale data processing, or transfers personal data to AI service providers located abroad. In such cases, the data controller should conduct a DPIA in advance to identify potential risks arising from the use of ChatGPT for processing personal data and to evaluate whether the organization's security measures are adequate. For example, the Ministry of Labour of Thailand recommended in its 2024 guidelines that organizations conduct a risk assessment before deploying AI tools to prevent adverse impacts on employees and data subjects.
Conversely, if the use of ChatGPT does not involve personal data, such as when only non-identifiable or anonymized information is used, or when ChatGPT operates within secure internal systems with adequate safeguards and without materially affecting the rights of data subjects, a DPIA may not be required. Nonetheless, data controllers should document the reasoning behind the decision not to conduct a DPIA to demonstrate compliance with the PDPA and the organization's internal risk management standards.
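The trigger criteria above lend themselves to a simple screening checklist. Below is a minimal sketch in Python; the field names paraphrase the criteria discussed in this section, and the any-trigger rule is an illustration, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class ChatGPTUseCase:
    """Screening inputs mirroring the DPIA trigger criteria discussed above."""
    sensitive_data: bool         # special categories, e.g. health or biometric data
    automated_decisions: bool    # decisions that materially affect individuals
    profiling: bool
    large_scale: bool
    cross_border_transfer: bool  # AI service provider located abroad

def dpia_required(case: ChatGPTUseCase) -> bool:
    """Any single high-risk trigger is enough to warrant a DPIA."""
    return any([case.sensitive_data, case.automated_decisions,
                case.profiling, case.large_scale, case.cross_border_transfer])

# Example: anonymized drafting only, but the provider is hosted abroad.
case = ChatGPTUseCase(False, False, False, False, True)
print(dpia_required(case))  # True -> conduct a DPIA before deployment
```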
Organizations should establish a clear ChatGPT usage policy to guide internal operations. This policy should define the scope and methods of use in alignment with the organization’s operational objectives and all applicable laws. The policy should include key principles such as:
For example, the Bank of Thailand has issued draft guidelines requiring financial institutions to establish AI usage policies that align with organizational goals, legal requirements, and principles of fairness. These guidelines also call for periodic reviews to keep up with technological advancements and emerging risks. Having such clear policies helps organizations create a consistent framework for decision-making regarding AI and ensures that all personnel operate in accordance with the same standards.
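One way to make such a policy operational is to encode its scope rules in a machine-readable form that internal tools can check automatically. The sketch below is purely illustrative: the purposes, input categories, and rule names are hypothetical stand-ins for whatever principles the organization's actual policy defines.

```python
# Hypothetical machine-readable excerpt of an internal ChatGPT usage policy.
USAGE_POLICY = {
    "allowed_purposes": {"drafting", "summarization", "translation"},
    "forbidden_input_categories": {"sensitive_personal_data", "customer_identifiers"},
    "requires_human_review": True,  # AI output is a draft, never a final decision
}

def use_permitted(purpose: str, input_categories: set) -> bool:
    """Check a proposed use against the policy before any prompt is sent."""
    return (purpose in USAGE_POLICY["allowed_purposes"]
            and not input_categories & USAGE_POLICY["forbidden_input_categories"])

print(use_permitted("drafting", {"customer_identifiers"}))  # False: forbidden input
print(use_permitted("translation", set()))                  # True
```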
Thai personal data protection law does not explicitly guarantee the right not to be subject to automated decision-making, unlike Article 22 of the GDPR. Nevertheless, data controllers in Thailand should establish channels that allow data subjects to exercise their rights under the PDPA and request reviews of decisions made by ChatGPT. These channels may include:
These channels should be designed so that data subjects can easily exercise their rights, for example by submitting requests through the organization's website or internal systems, and organizations should set appropriate timelines for resolving issues and responding to requests, in line with PDPA requirements.
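As a concrete illustration of tracking such timelines, here is a minimal sketch of a rights-request queue in Python. The 30-day deadline is an assumption for illustration; confirm the statutory period that applies to each right under the PDPA.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed 30-day deadline for illustration; confirm the statutory
# period for each right under the PDPA.
RESPONSE_DEADLINE = timedelta(days=30)

@dataclass
class RightsRequest:
    data_subject: str
    right: str  # e.g. "access", "erasure", "review of an AI-assisted decision"
    received: date
    resolved: bool = False

    @property
    def due(self) -> date:
        return self.received + RESPONSE_DEADLINE

def overdue(requests: list, today: date) -> list:
    """Requests past their deadline that still await a response."""
    return [r for r in requests if not r.resolved and today > r.due]

queue = [RightsRequest("subject-001", "review of an AI-assisted decision", date(2025, 1, 6))]
print([r.data_subject for r in overdue(queue, date(2025, 2, 10))])  # ['subject-001']
```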
Avoiding AI or ChatGPT altogether is not a viable solution. Instead, informed and responsible use keeps AI within a controlled and manageable framework. Responsible adoption of such technologies is not merely a technical matter: it involves the interplay of data subjects, organizational processes, technology, and individual rights, all of which must progress together.
Organizations must therefore build awareness among personnel, define clear usage boundaries, and put in place robust policies. Doing so strengthens system security measures and ensures that data subjects have appropriate channels to review or challenge AI-generated decisions effectively.
By following these principles, data controllers can use ChatGPT confidently, transparently, and in a manner that respects privacy rights, while ensuring compliance with the PDPA and international standards and fostering trust and accountability in the digital age.