Big Technology

Anthropic Takes The Pentagon To Court This Week. Here’s What Could Happen.

Could Anthropic make its way back into a multi-billion dollar opportunity with the U.S. government? A federal judge could grant it temporary relief this week.

Marty Swant and Alex Kantrowitz
Mar 23, 2026
On Tuesday, a federal judge in California will hear Anthropic’s argument to freeze the supply-chain risk designation the government slapped on it after it refused to grant the Pentagon the right to use its Claude models for domestic surveillance or autonomous warfare. The hearing is a key next step in Anthropic’s parallel lawsuits against the U.S. government, with one filed in California’s northern district and another in Washington, D.C.

Even as the case gains momentum, the Trump administration is already exploring new ways to force companies to scrap their AI safety and privacy protocols. Over the weekend, a new report from The Lever detailed a proposal that would require AI vendors to make their tech available for “any lawful government purpose” even if a company objects.

Ahead of this week’s hearing, filings from Anthropic, the government, and third parties help paint a picture of what’s at stake and the kind of legal arguments to expect. Here’s a brief primer:

The legal arguments:

Winning a preliminary injunction, which would freeze the government’s order to strip Anthropic out of all U.S. federal agencies, requires Anthropic to show four things: that it’s likely to win the full case, that it would suffer irreparable harm without the injunction, that it has more to lose than the government if the injunction is denied, and that an injunction would serve the public interest.

Anthropic’s case hinges on four main arguments:

  • Its refusal to allow Claude for autonomous weapons and mass surveillance is protected by the First Amendment.

  • The company’s Fifth Amendment protections were violated when the government blacklisted it without adequate notice or due process.

  • Secretary of Defense Pete Hegseth exceeded his statutory authority by misusing a law reserved for foreign adversaries and by acting arbitrarily.

  • President Trump’s government-wide directive exceeds his authority, citing cases from the 1950s and 1990s as well as more recent decisions, including a court ruling that Trump lacks the authority to impose tariffs.

To defeat the injunction, the government will need to show that Anthropic’s refusal to sign a contract term is commercial conduct, not protected speech. Its lawyers will argue the government has broad authority to choose its vendors and set contract terms as it sees fit. They’ll also need to show Anthropic’s privileged access to Claude creates a genuine national security risk, including the possibility it could alter the model’s behavior during active military operations.

Continued below: Anthropic’s chances of winning and what happens next…




Anthropic’s chances of winning:

Anthropic’s path to an early injunction is plausible. The company’s filings cite recent cases including Sen. Mark Kelly’s lawsuit against Hegseth, where a court ruled the government’s retaliation for his political speech was unconstitutional. (That case is still on appeal.)

Various filings suggest the law is on Anthropic’s side. Last week, a bipartisan group of nearly 150 retired federal and state judges filed an amicus brief supporting Anthropic in its parallel case in Washington, D.C., arguing the law doesn’t provide a national security exception that lets the government avoid judicial review.

Others point to recent decisions including the Supreme Court’s 2024 decision in Moody v. NetChoice, which ruled tech companies’ content decisions are protected by the First Amendment. One filing by a coalition of civil liberties groups suggests Anthropic should be given the same protections for decisions with its AI guardrails. Another filed by a group of retired military officers argues Hegseth exceeded his authority and that removing Anthropic now would hurt active military operations.

The path ahead:

A ruling from California’s Northern District could come within days or weeks. Based on the earlier status hearing, U.S. District Judge Rita Lin seemed to understand the case’s urgency and what’s at stake for both sides, even moving up the hearing after the U.S. government declined to commit to taking no further action against Anthropic.

Even rivals have formally filed in support of Anthropic’s case. One amicus brief signed by dozens of AI experts argues Anthropic’s safety concerns are technically legitimate and not merely ideological. Signed by Google chief scientist Jeff Dean and others from both Google and OpenAI, the filing suggests the government’s supply-chain risk designation could have “serious ramifications for our industry” and expressed concerns about AI being misused for government surveillance.

“The mere existence of such a capability in government hands — even if never activated against a specific individual — changes the character of public life in a democracy,” the AI experts wrote.

The Intelligence Report

  • An anti-AI march in SF took place over the weekend, with protestors giving speeches in front of Anthropic’s headquarters. Signs had slogans like “Don’t build skynet,” “Shut down OpenAI,” and “Stop The AI Race.”

  • Jeff Bezos is reportedly in talks to raise $100 billion for a new AI manufacturing fund that would involve using AI to buy and transform old manufacturing firms.

  • A rogue AI agent gave a Meta employee wrong technical advice that led to user data briefly being exposed.

  • OpenAI published new details about its ad policies, offering more context for where ads in ChatGPT can appear, which advertisers are allowed, how sensitive topics are handled, and which types of conversations are still off-limits.

  • Meta is keeping its metaverse alive after all, choosing to keep Horizon Worlds VR online just days after announcing plans to kill it.

  • Speaking of the Metaverse, credit unions are setting up shop in Roblox to reach younger audiences.

  • The White House released a new AI policy framework that includes pushing Congress to preempt many state AI laws. It also calls on Congress to make it easier to build AI data centers, to strengthen tools against AI scams, to improve copyright and digital-replica protections for creators and publishers and to expand AI training programs.

  • Nearly a dozen major tech companies and retailers announced a pledge to share information about how scammers are using their platforms and services. It also calls for collaboration to improve AI-based tools for fraud detection and other solutions.

  • Alex joined CNBC to discuss the latest in the Anthropic saga and Bezos’s new AI fund.

Will OpenAI’s Superapp Help It Refocus?

OpenAI is planning to simplify its product lineup with a new desktop “superapp” combining ChatGPT, its Codex coding platform, and its Atlas browser into a single destination. The new desktop app is part of a plan to streamline offerings and improve the experience for consumer, engineering, and business customers.

The “super app” project will be led by OpenAI applications chief Fidji Simo, with president Greg Brockman helping revamp the product and teams working on it. At a recent all-hands meeting, Simo told employees that the company has become too distracted by different “side quests” while Anthropic gains momentum with enterprise and coding customers.

Call it a pivot or call it renewed focus, but OpenAI’s changes could also be a sign it’s worried about losing the lead. Simo and other top execs like CEO Sam Altman and chief research officer Mark Chen have spent recent weeks reviewing OpenAI’s entire product portfolio, according to The Wall Street Journal, which first reported the super app.

Anthropic already features its flagship Claude chatbot within the same desktop app as its tools for coding and computer use, but OpenAI still has separate apps for ChatGPT, Sora, Codex and its web browser. Meanwhile, OpenAI also has teams working on a secret hardware product being led by Jony Ive.

“Companies go through phases of exploration and phases of refocus; both are critical,” Simo wrote Thursday on X. “But when new bets start to work, like we’re seeing now with Codex, it’s very important to double down on them and avoid distractions. Really glad we’re seizing this moment.”

OpenAI also faces growing backlash after it rushed to win a contract with the defense department and replace Anthropic. According to the grassroots website QuitGPT, more than 4 million people have vowed to delete ChatGPT, up from 1.5 million on March 4, a few days after OpenAI landed the defense contract.

A New AI Film For Non-AI Experts

On Friday, a new AI documentary will arrive in theaters and give mainstream audiences their first real introduction to the debate that has largely consumed Silicon Valley and Washington, D.C.

“The AI Doc: Or How I Became an Apocaloptimist” explores AI’s potential promise and peril through the perspective of co-director Daniel Roher, who asks dozens of AI experts earnest questions about how AI is reshaping the world his son is about to be born into.

The result is a film that spans the full spectrum of AI opinion — doomers, skeptics, realists, and accelerationists — without stacking the deck. Already screened at Sundance and SXSW, it aims to help people better understand AI and become proactively aware of what’s at stake. In an interview this month, the creators said it’s disappointing how unprepared AI execs seem when it comes to dealing with the potential risks.

“In the film we go to the CEOs, because they’re the guys building this,” co-director Charlie Tyrell said while in Austin for SXSW. “...As you know in the film, you go there and there is no plan. So that puts the onus on the users, the non-technologists, to ask what we do.”

The cast is a who’s who of the AI debate. It features researchers and ethicists sounding alarms about AI, including Timnit Gebru, Deborah Raji, Yoshua Bengio, Karen Hao, and Emily Bender, alongside AI execs Sam Altman, Anthropic’s Dario and Daniela Amodei, Demis Hassabis, and Reid Hoffman.

One prominent voice in “The AI Doc” is Tristan Harris, co-founder of the Center for Humane Technology, who also mentioned the film in his SXSW talk earlier this month. Harris argued there’s still a lack of common knowledge about AI, but that the film helps “create global clarity about why AI is actually dangerous and what the current path looks like.”

Big Technology Friday Edition: OpenAI’s Superapp Ambitions, Jensen on Jobs, Bezos’s $100 Billion Automation Fund

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI leadership says no more side quests 2) The company is focusing on enterprise and coding 3) Does this mean consumer AI is dead? 4) OpenAI’s new focus era 5) Why OpenAI is building a Superapp 6) OpenAI partners with the consultants 7) Most first time AI buyers are choosing Anthropic 8) Nvidia CEO Jensen Huang says those who use AI to cut jobs lack imagination 9) The Metaverse is dead, or is it? 10) Jeff Bezos is raising $100 billion to automate industrial work 11) Do you dry chat?

You can listen on Apple Podcasts, Spotify, or your podcast app of choice

Thanks again for reading. Please share Big Technology if you like it!

Join Big Technology’s Private Discord Server!

We’ll talk about this story, the latest in AI, the week’s podcast, and plenty more in our private Discord server. You can sign up via the link below:

A guest post by
Marty Swant
I'm a journalist covering tech/marketing/policy for NYT, Fast Company, Inc., Big Technology, Transformer, and more. (Previously was on staff at Digiday, Forbes, Adweek, and the AP.) Born in MN, I’m now in NYC with my wife Emily & our dog, Willoughby.
© 2026 Alex Kantrowitz