
Thailand’s AI Law and Digital Platform Rules Are Now in Effect — What Businesses Need to Do "Now"

Thailand’s AI Act took effect on 1 March 2026, draft rules on high-risk AI are already open for public consultation, and the TCCT issued digital platform guidelines on 25 March 2026 — here’s everything Thai organizations need to know and act on immediately.

3 Apr 2026 · 13 min · Baker McKenzie
Thailand · AI Law · Regulation · Digital Platform · Compliance · PDPA · TCCT · ETDA

This is no longer something that is “coming soon” — Thailand’s AI Act has been in force since 1 March 2026, and within the same month, the Trade Competition Commission of Thailand (TCCT) followed with digital platform guidelines.

If your organization develops AI, uses AI in business decision-making, or operates through online platforms, this article is your practical roadmap for what to do next.


Key timeline you should remember

Date | Event
February 2026 | PDPC opened a public consultation on draft personal data protection guidelines for AI
February 2026 | Draft AI Royal Decree and draft Prime Ministerial Notification on high-risk AI published
1 March 2026 | Thailand’s AI Act took effect
25 March 2026 | TCCT multi-sided platform guidelines took effect
31 March 2026 | Deadline for ride-sharing platform registration notification
29 April 2026 | Public consultation closes on draft rules for direct sales/direct marketing businesses
August 2026 | EU AI Act becomes fully applicable (affecting Thai companies serving the EU)

Thailand’s digital law landscape has rarely changed this quickly in a single month.


Thailand’s AI Act — the era of “no rules” is over

Core concept: a risk-based approach

The new law adopts a risk-based approach, similar to the EU AI Act, dividing AI systems into three risk levels:

  1. Unacceptable-risk AI — AI systems posing unacceptable risks to human rights are outright prohibited
  2. High-risk AI — must pass a conformity assessment before use, such as AI used in justice processes, credit decisions, or hiring
  3. Limited-risk AI — subject to transparency requirements, such as informing users they are interacting with AI
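
The tiered structure above can be sketched as a simple classification helper. This is an illustrative sketch only: the tier names and the use-case mapping below are assumptions drawn from the examples in this article, not the official list, which will come from the draft Prime Ministerial Notification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment required before use
    LIMITED = "limited"            # transparency duties apply

# Hypothetical mapping of use cases to tiers, based on the examples above.
USE_CASE_TIERS = {
    "credit_decision": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "justice_process": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
}

def obligations(use_case: str) -> list[str]:
    """Return headline obligations for a use case (illustrative only)."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LIMITED)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited"]
    if tier is RiskTier.HIGH:
        return ["conformity assessment", "human oversight", "incident reporting"]
    return ["transparency notice"]
```

A real classification exercise is a legal judgment, not a lookup table, but keeping the mapping explicit and reviewable is a reasonable starting point for an AI inventory.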

AI Governance Center

The law requires the establishment of an AI Governance Center under ETDA as the central authority for AI oversight and enforcement.

Rights of individuals affected by AI

This is where many organizations are still underestimating the impact — the law clearly grants rights to individuals affected by AI-driven decisions:

  • Right to know — individuals must be informed when AI is used in decisions that affect them
  • Right to explanation — if AI makes a negative decision (credit rejection, failed job screening), the organization must explain the key factors behind that decision
  • Right to human oversight — high-risk AI must be subject to human review and supervision
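
A "right to explanation" response could, for example, surface the factors that weighed most heavily in a decision. The sketch below is a hypothetical illustration: `factor_weights` stands in for whatever attribution method a real system uses (model coefficients, SHAP values, etc.); the law does not prescribe a format.

```python
def explain_decision(factor_weights: dict[str, float], top_n: int = 3) -> str:
    """List the factors that contributed most (by absolute weight) to a decision."""
    ranked = sorted(factor_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return "Key factors: " + ", ".join(name for name, _ in ranked[:top_n])

# Hypothetical weights for a declined credit application:
message = explain_decision({"debt_ratio": -0.8, "income": -0.5, "age": 0.1, "tenure": 0.05})
```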

Draft rules for high-risk AI — what businesses should prepare for

The draft AI Royal Decree and draft Prime Ministerial Notification listing high-risk AI systems, published in February 2026, introduce major additional obligations:

For Providers (AI developers/service providers)

  • Register both themselves and their AI systems with the regulator before placing them on the market
  • Implement a Risk Management system based on international standards such as ISO/IEC 42001:2023
  • Report serious incidents caused by AI to the regulator
  • Foreign providers must appoint a representative in Thailand in writing

For Deployers (organizations using AI)

  • Ensure human oversight in decision-making processes
  • Maintain operational logs for AI systems
  • Verify the quality of input data fed into the system
  • Notify affected individuals when AI decisions may impact them
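
Several of the deployer duties above (operational logs, human oversight, notification of affected individuals) can be captured in a single log record per decision. A minimal sketch, with field names that are illustrative assumptions rather than anything prescribed by the Act:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field
from typing import Optional

@dataclass
class AIDecisionLog:
    """One operational log entry per AI-assisted decision."""
    system_id: str
    input_summary: str             # what went in, after input-quality checks
    outcome: str                   # the system's output or recommendation
    human_reviewer: Optional[str]  # who reviewed it; None flags missing oversight
    subject_notified: bool         # was the affected individual informed?
    timestamp: float = field(default_factory=time.time)
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        return json.dumps(asdict(self), ensure_ascii=False)

# Hypothetical entry for a declined credit application:
entry = AIDecisionLog(
    system_id="credit-scoring-v2",
    input_summary="applicant financials, verified 2026-04-01",
    outcome="declined",
    human_reviewer="loan-officer-017",
    subject_notified=True,
)
```

Structured, append-only records like this make the log, oversight, and notification duties demonstrable in one place, which also matters for ETDA's "verifiable" principle discussed later in this article.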

The key point: although the draft Royal Decree is still under consultation, the main AI Act is already in force — organizations that prepare now will have a major advantage.


PDPC’s AI + PDPA guidance — two laws you must handle together

In February 2026, Thailand’s Personal Data Protection Committee (PDPC) opened a public consultation on the draft personal data protection guidelines for AI development and use, covering key issues such as:

  • Roles of involved parties — defining who acts as Data Controller or Processor in the AI chain
  • Restrictions on model training use — Data Processing Agreements must clearly prohibit using data to train AI models without authorization
  • DPIA for high-risk AI — a Data Protection Impact Assessment must be completed before using AI that processes personal data in high-risk scenarios
  • Security measures throughout the AI lifecycle — from development and testing to deployment and decommissioning
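
Two of those requirements lend themselves to simple automated checks: the DPIA trigger and lifecycle security coverage. The helpers below are a minimal sketch, with boolean inputs standing in for real legal and security assessments:

```python
LIFECYCLE_STAGES = ("development", "testing", "deployment", "decommissioning")

def dpia_required(processes_personal_data: bool, high_risk: bool) -> bool:
    """Per the draft guidelines, a DPIA must be completed before using AI that
    processes personal data in high-risk scenarios."""
    return processes_personal_data and high_risk

def security_gaps(controls: dict[str, bool]) -> list[str]:
    """Return lifecycle stages with no documented security measure yet."""
    return [stage for stage in LIFECYCLE_STAGES if not controls.get(stage, False)]
```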

Why this matters so much

Thailand’s AI Act and the PDPA do not operate separately — any organization using AI to process personal data must comply with both laws at the same time. This matters even more now that the PDPC uses tools like the Eagle Eye Crawler to proactively scan websites for compliance issues.


TCCT guidelines for digital platforms — a major shift from 25 March 2026

The Trade Competition Commission of Thailand (TCCT) issued guidelines on unfair trade practices, monopolization, and conduct that may reduce or restrict competition on multi-sided platforms. The guidelines were published in the Royal Gazette on 24 March 2026 and took effect on 25 March 2026.

Which platforms are covered?

The guidelines define a “multi-sided platform” as an intermediary connecting two or more user groups to interact, transact, or depend on each other within the same ecosystem. This includes:

  • E-commerce platforms (connecting sellers, buyers, logistics, advertising, and payments)
  • Ride-sharing platforms
  • Social commerce platforms
  • Food delivery platforms

Prohibited conduct

Pricing-related conduct:

  • Parallel fee-setting arrangements resembling collusion
  • Price discrimination against sellers in comparable circumstances
  • Sudden changes to fee structures without prior notice

Commercial conduct:

  • Opaque algorithmic ranking of sellers — restricting product visibility through non-transparent algorithms is prohibited
  • Self-preferencing — platforms may not unfairly favor their own products or those of affiliated companies
  • Forcing use of platform logistics — platforms may not require sellers to use logistics providers selected by the platform
  • Using seller data for the platform’s own advantage — platforms may not use seller sales data or behavioral data to compete directly against those sellers

Penalties

Conduct found to significantly restrict competition may lead to administrative sanctions or even criminal penalties under the Trade Competition Act B.E. 2560 (2017).


ETDA — the “co-creation regulator” and its three principles

The Electronic Transactions Development Agency (ETDA), which describes itself as a “co-creation regulator,” has set out its 2026 digital platform regulatory roadmap under the Royal Decree on Digital Platform Service Businesses B.E. 2565 (2022), based on three principles:

  1. Practicable — rules must be workable in real life, not just look good on paper
  2. Verifiable — compliance must be demonstrable, not just stated in policy
  3. Shared Responsibility — platforms, sellers, and buyers all share responsibility

What ETDA is pushing in 2026

Area | Details
Product/service standards on platforms | Working with the FDA and TISI to inspect products on 21 assigned platforms
Fairness of fees | Oversight of commission and service fee transparency
Online fraud prevention | Risk-based identity verification systems for sellers and advertisers
Social commerce | Drafting rules for social media platforms engaged in commercial activity

Why compliance is urgent — the numbers speak for themselves

If you still think, “we can deal with this later,” consider these figures:

  • Thailand faces more than 3,200 cyberattacks per week — 164% above the global average
  • 109,000+ ransomware incidents — the highest in Southeast Asia
  • 63% of organizations in Thailand have experienced a data breach
  • 52% of ransomware-hit organizations paid the ransom
  • The PDPC has already taken enforcement action — THB 21.5 million in total fines across 5 cases
  • Average ransomware damage is estimated at USD 1.8–5 million per incident

The new AI law requires providers to report serious incidents — if an AI system is attacked and the incident is not reported, organizations may face exposure under both cybersecurity law and AI law at the same time.


Sector-specific guidelines already in place

Beyond the main AI law, several industry-specific guidelines are already applicable:

Authority | Topic | Year
Bank of Thailand (BOT) | AI lifecycle management, risk assessment, data governance | Sep 2025
Securities and Exchange Commission (SEC) | Fairness, compliance, accountability, transparency for AI in capital markets | 2025
Office of Insurance Commission (OIC) | Risk management, security, consumer protection for AI in insurance | 2025
PDPC | DPIA, lifecycle security for AI using personal data | Feb 2026
National Cyber Security Agency (NCSA) | AI guidance aligned with ISO/IEC 42001:2023 | Sep 2025

If your organization operates in one of these sectors, you must comply with both the main AI law and the sector-specific guidance.


Compliance checklist — 12 actions to take now

AI

  • Inventory all AI systems used by the organization — including third-party AI, not only tools you developed internally
  • Classify the risk level of each AI system (unacceptable / high / limited)
  • Build a Risk Management Framework for high-risk AI in line with ISO/IEC 42001:2023
  • Establish human oversight processes — a person must review AI decisions that affect individuals
  • Prepare explainability mechanisms — you must be able to explain how and why the AI made its decision
  • Set up incident reporting procedures for serious AI-related incidents
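
The first two checklist items (inventory and risk classification) can start as something as simple as a structured record per system. A minimal sketch, with hypothetical system and vendor names:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str                    # "internal" or the third-party provider
    risk_class: str                # "unacceptable" / "high" / "limited"
    processes_personal_data: bool

# Hypothetical inventory entries, including a third-party tool:
inventory = [
    AISystemRecord("resume-screener", "internal", "high", True),
    AISystemRecord("support-chatbot", "AcmeAI", "limited", True),
]

# Systems that trigger the heavier controls further down the checklist:
needs_full_controls = [r.name for r in inventory if r.risk_class == "high"]
```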

PDPA + AI

  • Conduct DPIAs for every AI system that processes personal data
  • Review Data Processing Agreements — they must prohibit unauthorized use of data for model training
  • Assess cross-border data transfers — if you use overseas AI APIs, you need SCCs or equivalent safeguards

Digital platforms

  • Check whether your business qualifies as a multi-sided platform under the TCCT definition
  • Review ranking algorithms — they must be transparent and non-discriminatory
  • Review fee policies — they must be fair and changes must be communicated in advance

Impact on Thai startups and tech companies

These rules do not affect only large corporations — startups with AI at the core of their business must also comply.

What needs to change

Startups building AI products: if the product falls into a high-risk category (such as AI for credit scoring, candidate screening, or medical diagnosis), it must be registered with the regulator before going to market. That means additional cost and lead time must now be part of planning.

Companies using foreign AI services: overseas AI providers serving Thailand must appoint a local representative. If your AI provider has not done so, you may need to switch providers.

Platform operators: both the business model and the underlying algorithms may need to be redesigned under the stricter TCCT framework.

The opportunity on the other side

At the same time, these laws can create a competitive advantage for organizations that prepare well:

  • Greater trust and credibility with customers and business partners
  • Better readiness to expand into overseas markets with strict AI rules, such as the EU, South Korea, and Vietnam
  • Lower legal risk that could affect future fundraising rounds

AI law + digital platform regulation + PDPA = a new legal ecosystem

The most important thing to understand is that these laws do not operate in silos — they form an interconnected digital legal ecosystem:

AI Act → regulates AI development and deployment
     ↕
PDPA → protects personal data processed by AI
     ↕
Digital Platform Royal Decree → regulates platforms using AI in service delivery
     ↕
Trade Competition Act (TCCT) → prevents AI-enabled market dominance and anti-competitive conduct

Organizations that focus on only one law may miss the bigger picture — what’s needed is a governance framework that covers the entire ecosystem.


What Enersys recommends

From our experience helping organizations build compliance frameworks and implement privacy and data governance systems, the most successful organizations tend to have the same things in common:

  1. Start with a gap assessment — understand where you are now before planning where you need to go
  2. Design a cross-regulation governance framework — don’t tackle each law separately; build one framework covering AI, PDPA, and platform regulation
  3. Use technology to manage compliance — tracking consent, DPIAs, data mapping, and incident reporting manually does not scale
  4. Train your teams — these new laws are not just for legal teams; they affect everyone who uses or develops AI

Enersys has a team ready to help with AI Governance Frameworks, PDPA compliance, and Digital Platform Strategy — from assessment through implementation.

Contact us today to discuss your compliance priorities before the grace period ends.

