Anthropic refuses Pentagon's 'any lawful use' demand — Claude hits #1 on US App Store as users flood in
The Department of Defense demanded that Anthropic remove two contract restrictions: a prohibition on using Claude for domestic mass surveillance, and a prohibition on powering fully autonomous weapons systems without human oversight. Defense Secretary Hegseth warned that refusal would result in contract termination and designation as a national security risk. CEO Dario Amodei rejected the terms. The Trump administration then issued a government-wide order to phase out Anthropic technology, blacklisted the company as a supply chain risk, and barred defense contractors from working with it. Simultaneously, OpenAI signed a $200M Pentagon deal on February 27. Consumer backlash drove Claude to the #1 position in US App Store downloads, overtaking ChatGPT. Anthropic reported that free signups were up 60% since January, daily registrations had tripled, and paid subscriptions had more than doubled, with all-time records broken daily that week. Anthropic subsequently sued the DoD.
This is the first major test of whether an AI company will hold a safety line under direct government pressure, and the consumer market responded decisively. The "QuitGPT" movement, a solidarity letter signed by 700+ Google and OpenAI employees, and the App Store ranking together form a meaningful data point: a non-trivial segment of AI users will reward companies for standing on principle. The competitive implications are real. Anthropic gained subscriber momentum it couldn't have bought with marketing spend, while OpenAI suffered reputational damage, with Sam Altman himself calling the episode "opportunistic and sloppy."