From "Awareness Building" to "Privacy in Action"
At Data Privacy Day 2026, organized by the Office of the Personal Data Protection Committee (PDPC), the message was clear: this year's theme, "Privacy in Action," signals that the era of simply knowing the PDPA exists is over. Organizations must implement it in practice and be able to prove compliance.
Figures disclosed by the PDPC as of February 2026:
- 2,672 cumulative complaints
- 8 administrative penalty orders across 5 cases
- Total fines of THB 21.5 million (approximately USD 654,690)
- Sectors under heightened scrutiny: e-commerce, healthcare, telecom, and government agencies
Eagle Eye Crawler — The Automated Detection System Changing the Game
What many organizations still do not realize is that the PDPC is no longer waiting for complaints to be filed. It is proactively using the Eagle Eye Crawler to crawl organizational websites and check whether:
- proper cookie consent is in place
- the privacy policy is complete
- data is being collected beyond what is necessary
- a Data Subject Request (DSR) process is available
Organizations whose websites still drop cookies without obtaining consent, or whose privacy policies are outdated, may be flagged even before any complaint is submitted.
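The first of those checks is straightforward to reason about: any cookie set on the very first HTTP response was, by definition, dropped before the visitor could have consented. A minimal sketch of that kind of automated audit is below; the function name `audit_response` and its heuristics are illustrative assumptions, not the PDPC's actual Eagle Eye Crawler logic.

```python
def audit_response(headers, html):
    """Flag basic consent/transparency issues in one HTTP response.

    headers: list of (name, value) tuples from the *initial* response,
    before any consent-banner interaction; html: the response body.
    Illustrative sketch only -- not the PDPC's actual crawler logic.
    """
    findings = []
    # Cookies set on the very first response are dropped before the
    # visitor could have given consent.
    pre_consent = [v.split("=", 1)[0] for k, v in headers
                   if k.lower() == "set-cookie"]
    if pre_consent:
        findings.append("cookies set before consent: " + ", ".join(pre_consent))
    # Crude transparency check: does the page mention a privacy policy?
    if "privacy" not in html.lower():
        findings.append("no privacy policy link found")
    return findings
```

A real crawler would render JavaScript, classify cookies by purpose, and parse the policy text itself; the point here is only that pre-consent cookies are mechanically detectable from a single request.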
AI Governance — New Rules on the Horizon
One of the most important developments in 2026 is the personal data protection guideline for AI currently being drafted by the PDPC. The key areas expected to be covered include:
Organizations Using AI Are Considered Data Controllers
If an organization uses AI to process personal data — whether for customer analytics, resume screening, credit scoring, or chatbots that collect user information — it must fully comply with PDPA in its capacity as a Data Controller.
Data Protection Impact Assessments (DPIAs) Will Be Required
Under the draft guideline, using AI to process personal data, especially for automated decision-making, is treated as high-risk processing and requires a DPIA before deployment.
Explainability Is Not Optional
Data subjects have the right to know the basis on which AI systems make decisions. For AI systems that materially affect individuals (such as a loan rejection or an unsuccessful job application screening), organizations must be able to explain the reasoning behind the outcome.