PDPA and AI — Now Formally Converging
In February 2026, Thailand’s Office of the Personal Data Protection Committee (PDPC) released its draft Personal Data Protection Guidelines for the Development and Use of Artificial Intelligence for public consultation across sectors.
It may sound technical, but it directly affects every organization currently using AI — whether for customer service chatbots, customer data analytics systems, or even HR tools that use AI to screen job applications.
Key Issues Organizations Need to Understand
1. Organizations that deploy AI are Data Controllers
The draft guidelines clearly state that organizations using AI with personal data are considered data controllers, meaning they bear the full set of responsibilities required under the PDPA, including consent, data minimization, and transparency.
This applies not only to tech companies building AI, but also to any organization that adopts and uses AI in practice.
2. Vendors that use data to train AI may be classified as Data Controllers
This is a critical issue for organizations using external AI services. If a vendor uses your customer data to train its model without a clearly defined agreement, that vendor may be reclassified as a data controller in its own right. Organizations should review their vendor contracts for this point immediately.
3. What the guidelines cover
The draft focuses on four main areas:
- Consent — Clear consent must be obtained before using data in AI, not merely through general terms in an agreement
- Data Minimization — AI should use only the data that is genuinely necessary, rather than ingesting all available data for processing
- Transparency — Users must understand what decisions AI is making, how those decisions are made, and on what basis
- Data Subject Rights — Data subjects retain the right to object to, correct, or request deletion of their data used in AI systems
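To make the consent and data-minimization points above concrete, here is a minimal Python sketch of what a pre-processing step might look like before customer records are sent to an external AI service. The field names, the `ai_consent` flag, and the allowlist are hypothetical illustrations, not terms defined in the draft guidelines.

```python
# Hypothetical sketch: enforce explicit consent and data minimization
# before a customer record leaves the organization for AI processing.
# Field names and the consent flag are illustrative assumptions only.

# Only the fields genuinely necessary for the AI task (data minimization).
ALLOWED_FIELDS = {"customer_id", "inquiry_text"}

def prepare_for_ai(record: dict):
    """Return a minimized copy of `record`, or None if consent is missing."""
    # Consent check: records without explicit AI-processing consent
    # are never forwarded to the AI service.
    if not record.get("ai_consent", False):
        return None
    # Data minimization: drop everything outside the allowlist
    # (e.g. name, phone number, and address stay inside the organization).
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "customer_id": "C-1001",
    "inquiry_text": "Where is my order?",
    "phone": "081-xxx-xxxx",
    "ai_consent": True,
}
print(prepare_for_ai(record))  # only customer_id and inquiry_text survive
```

The same gate can serve transparency and data-subject-rights requirements: because every outbound record passes through one function, an organization can log what was sent and honor deletion or objection requests at a single point.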