
Claude Reaches No. 1 on the App Store After the Pentagon Bans Anthropic — A Sign That Ethical AI Is What the Market Wants

Anthropic’s Claude surged to No. 1 on the App Store after the Pentagon blacklisted the company for refusing to grant unrestricted AI access while OpenAI accepted a $200M military deal. Claude’s user base doubled and its enterprise market share jumped to 40%.

8 Mar 2026 · 5 min · CNN
Tags: Anthropic, Claude, Pentagon, AI Ethics, Enterprise AI

When Being Banned Becomes Anthropic’s Best Marketing Campaign

In the first week of March 2026, an unexpected event shook the AI industry — the U.S. Department of Defense (Pentagon) decided to blacklist Anthropic, the company behind Claude, after it refused to provide access to an AI model without safety restrictions for military use. The outcome was the exact opposite of what the Pentagon likely intended — Claude rose to No. 1 on the App Store in less than 48 hours, its user base doubled, and Anthropic’s enterprise AI market share climbed to 40%.

This was not just another technology headline — it was a turning point proving that AI ethics are not a business obstacle, but a competitive advantage.

The Trigger: The Pentagon Wanted Unrestricted AI

The story began in late February 2026, when the Pentagon opened bidding for a $200 million AI contract for intelligence analysis and military decision-support systems. One key requirement was that AI providers had to allow military agencies to use the model without safety guardrails that would limit usage.

Anthropic refused outright. Dario Amodei, the company’s CEO, stated that AI safety principles were non-negotiable, regardless of the customer. The company reaffirmed that Claude is designed to refuse instructions that could cause harm, and that it would not create a special version with the safety layer removed.

By contrast, OpenAI chose to accept the offer. Sam Altman explained that working with the government was necessary to ensure AI would be used responsibly. As a result, the $200 million deal went to OpenAI, while the Pentagon blacklisted Anthropic from U.S. government procurement programs.

Chain Reaction: Users Rallied Behind Claude

What followed was a phenomenon analysts described as the “Streisand Effect of the AI industry” — the more the Pentagon tried to pressure Anthropic, the more people rallied to support the company.

Data from App Annie and Sensor Tower showed that the Claude app on iOS jumped from No. 12 in the Productivity category to No. 1 on the App Store in both the U.S. and globally in less than two days. Downloads rose 340% compared with the previous week.

Claude’s global user base doubled from around 45 million to more than 90 million monthly users. ChatGPT still maintained a user base of roughly 200 million, but Claude’s growth rate was clearly much stronger.

On social media, the hashtags #StandWithClaude and #AIWithEthics became the top trending topics on X (Twitter) in 15 countries. Tens of thousands of software developers switched from the OpenAI API to the Claude API, with some even publishing detailed blog posts explaining why they migrated.
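
For readers curious what such a migration involves, the sketch below shows the before-and-after of a single chat call, assuming the official openai and anthropic Python SDKs with API keys set in the environment. The model names and the prompt are illustrative placeholders, not details reported in the article.

# Minimal migration sketch: one chat request, first via OpenAI, then via Anthropic.
# Assumes: pip install openai anthropic, with OPENAI_API_KEY and ANTHROPIC_API_KEY set.
from openai import OpenAI
import anthropic

prompt = "Summarize our Q1 incident report in three bullet points."

# Before: a chat completion through the OpenAI SDK
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
openai_reply = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(openai_reply.choices[0].message.content)

# After: the equivalent request through the Anthropic SDK
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
claude_reply = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=512,  # required by the Anthropic Messages API
    messages=[{"role": "user", "content": prompt}],
)
print(claude_reply.content[0].text)

Because both SDKs accept the same role-based message list, a basic migration is largely a matter of swapping the client and adjusting the response fields, which may help explain why developers could switch quickly.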

Enterprise Market Share Surges to 40%

The most significant impact was not in the consumer market, but in the enterprise AI market, where Anthropic’s share had been around 28% in Q4 2025 before rising sharply to 40% in early March 2026.

A report from Fortune said that several large companies accelerated their migration from ChatGPT Enterprise to Claude Enterprise, citing three main reasons:

  • Confidence in Data Privacy: Anthropic has a clear policy that it does not use customer data to train models and does not allow third-party access to data without authorization.
  • Regulatory Risk: Many organizations were concerned that using AI systems that grant special government access could conflict with personal data protection laws in their own countries.
  • Brand Alignment: Companies that prioritize ESG (Environmental, Social, Governance) saw the choice of ethical AI as consistent with their corporate values.

Axios reported that the CTO of a Fortune 500 company said, “We’re not just choosing the most capable AI model. We’re choosing an AI partner we trust not to abandon its own principles.”

A Defining Split in the AI Industry

This event created a new fault line in the AI industry, clearly dividing players into two camps:

The “AI Safety First” camp, led by Anthropic, maintains that safety guardrails are non-removable under any circumstances, no matter who the customer is. This approach received support from the EU, which issued a statement praising Anthropic and noting that the principle is aligned with the EU AI Act.

The “Pragmatic Deployment” camp, led by OpenAI, argues that working with governments is the best way to ensure responsible AI use, rather than leaving governments to develop AI independently without the necessary expertise.

Analysts at Goldman Sachs estimate that the AI market is entering a “Trust Economy” in which confidence in an AI provider carries as much weight as — or even more than — a model’s technical performance when enterprises choose an AI partner.

Figures from Bloomberg Intelligence support this view, showing that Anthropic’s valuation rose from $60 billion to $85 billion within a single week after the Pentagon incident. Microsoft, OpenAI’s lead investor, saw no significant impact on its stock, but developer sentiment toward OpenAI declined noticeably.

Lessons for the Technology Industry

This event sends three important signals that the global technology industry should watch closely:

1. Ethics are a commercial advantage — Anthropic doubling its user base after rejecting a $200 million deal proves that in an era when consumers and enterprises are increasingly aware of AI safety, standing by principle can generate substantial business value.

2. The enterprise market prioritizes trust — The jump in enterprise market share from 28% to 40% in just a few weeks reflects that enterprise buyers often make decisions based more on trust than on benchmarks.

3. The regulatory landscape is changing — Governments in many countries are beginning to enact laws requiring AI systems to include safety guardrails that cannot be removed. Choosing an AI partner that already operates on this principle can reduce long-term compliance risk.

Implications for Thai Organizations

For Thai organizations, this event carries several important implications. Thailand’s Personal Data Protection Act (PDPA) sets a high standard for data protection. Choosing AI systems with clear ethical and safety principles is therefore not just a matter of values, but a legal necessity.

Organizations currently planning their AI strategy should consider AI providers with clear data privacy policies, safety guardrails that cannot be bypassed, and positions aligned with international data protection regulations.

Enersys’s Genesis AI is built on Responsible AI principles as a core foundation for developing AI solutions for Thai enterprises, with a strong focus on data privacy, transparency, and safety guardrails aligned with international standards. This helps ensure that enterprise AI adoption is responsible and compliant with PDPA requirements.


"Empowering Innovation,
Transforming Futures."

ติดต่อเราเพื่อทำให้โปรเจกต์ของคุณเป็นจริง