Dario Amodei doesn’t hesitate when I ask what’s gotten into him. The Anthropic CEO has spent 2025 at war, feuding with industry counterparts and members of the government, and battling the public’s perception of artificial intelligence.
In recent months, he’s predicted that AI could soon eliminate 50% of entry-level, white-collar jobs. He’s railed against a ten-year AI regulation moratorium in the pages of the New York Times. And he’s called for controls on semiconductor exports to China, drawing a public rebuke from Nvidia CEO Jensen Huang.
Amid it all, Amodei meets me on the first floor of his company’s downtown San Francisco headquarters. He’s loose, energetic, and anxious to get started, as if he’s been waiting for this moment to address his actions. Sporting a blue shawl-collar sweater over a casual white t-shirt, and boxy thick-rimmed glasses, he sits down and stares ahead.
Underlying his efforts, Amodei says, is a firm belief that AI is moving faster than most of us appreciate, making its opportunities and consequences much closer than they appear. “I am indeed one of the most bullish about AI capabilities improving very fast,” he tells me. “As we've gotten more close to AI systems that are more powerful, I've wanted to say those things more forcefully, more publicly, to make the point clearer.”
Amodei’s outspokenness and sharp elbows have earned him both respect and derision in Silicon Valley. He’s seen by some as a technological visionary who pioneered OpenAI’s GPT-3 project (the seeds of ChatGPT) and a safety-minded leader who broke off and founded Anthropic. Others see him as a control-oriented “doomer” who wants to slow AI progress, shape it to his liking, and shut out the competition.
Love him or hate him, the AI field will have to deal with him. Amodei has turned Anthropic, currently valued at $61 billion, into an economic force. From a zero start in 2021, the company — though still unprofitable — has grown its annualized recurring revenue from $1.4 billion in March 2025 to $3 billion in May to nearly $4.5 billion in July, leading Amodei to call it the “fastest growing software company in history at the scale that it's at.”
Perhaps more notable than Anthropic’s revenue is the way it’s coming in. Rather than relying primarily on applications like OpenAI’s ChatGPT, Amodei’s biggest bet is on the underlying technology itself. Most of the company’s revenue, Amodei tells me, is earned via its API, or through other companies buying Anthropic’s AI models and using them within their own products. Anthropic will thus be a barometer of AI’s progress, rising and falling on the strength of the technology.
As Anthropic grows, Amodei hopes its heft will help him influence the industry’s direction. And given his willingness to speak out, throw a punch, and take one, he’s probably right.
So if this man will steer what may be the world’s most influential new technology, it’s worth understanding what drives him, his business, and why his timeline is shorter than so many others’. And after more than two dozen interviews with him, his friends, colleagues, and competitors, I believe I have an answer.
A Curable Illness
Dario Amodei was a science kid. Born in San Francisco in 1983 to a Jewish mother and an Italian father, he was interested almost entirely in math and physics. When the dot-com boom exploded around him in his high school years, it barely registered. “Writing some website actually had no interest to me whatsoever,” he tells me. “I was interested in discovering fundamental scientific truth.”
At home, Amodei was very close with his parents, a loving couple who he says sought to improve the world. His mother Elena Engel ran renovation and construction projects for libraries in Berkeley and San Francisco. His father Riccardo Amodei was a trained leathersmith. “They gave me a sense of right and wrong and what was important in the world,” he says, “imbuing a strong sense of responsibility.”
That sense of responsibility showed up in Amodei’s undergrad years at Caltech, where he lambasted his fellow students over their passivity toward the forthcoming Iraq war. “The problem isn't that everyone is just peachy with the idea of bombing Iraq; it's that most people are opposed in principle but refuse to give one millisecond of their time,” Amodei wrote in The California Tech, a student newspaper, on March 3, 2003. “This needs to change, right now and without delay.”
Then, in his early 20s, Amodei’s life changed forever. His father Riccardo, who’d long fought a rare illness, lost the battle in 2006. Riccardo’s passing shocked Amodei, and he shifted his graduate studies at Princeton from theoretical physics to biology to address human illness and biological problems.
The rest of Amodei’s life has, in some ways, been dedicated to addressing his father’s loss, especially because, within four years, a new breakthrough turned the illness from 50% fatal to 95% curable. “There was someone who worked on the cure to this disease, that managed to cure it, and save a bunch of people's lives,” Amodei says, “but could have saved even more.”
“I get really angry when someone’s like, ‘This guy’s a doomer. He wants to slow things down’” - Dario Amodei
Amodei’s father’s passing has shaped his life’s path to this day, says Jade Wang, who dated him in the early 2010s. “It's the difference between his father most likely dying and most likely living, okay?” she says, explaining that had scientific progress sped up a bit, Amodei’s father might still be with us. It just took a while before Amodei found AI as a vessel to address that loss.
Upon recalling his father’s death, Amodei grows animated. His calls for export controls and AI safeguards, he believes, have been mischaracterized as the actions of someone irrationally seeking to impede AI progress. “I get really angry when someone's like, ‘This guy's a doomer. He wants to slow things down,’” Amodei tells me. “You heard what I just said, my father died because of cures that could have happened a few years [earlier]. I understand the benefit of this technology.”
AI Emerges As The Solution
At Princeton, still raw from his father’s loss, Amodei began his quest to decode human biology by studying the retina. Our eyes capture the world by sending signals to the visual cortex — a large part of the brain that takes up 30% of the cerebral cortex — which then processes the data and shows us a picture. If you’re looking to tackle the complexity of human physiology, the retina is a great place to start.
“He used the retina to look at a complete neural population and actually understand what every cell was doing, or at least have that opportunity,” Stephanie Palmer, one of Amodei’s contemporaries at Princeton, tells me. “It was about that more than the eye. He wasn't trying to be an ophthalmologist.”
Working in Professor Michael Berry’s retina lab, Amodei was so dissatisfied with the methods available to measure the retina’s signals that he co-invented a new, better sensor to pick up more data. This was atypical for the lab, at once impressive and nonconformist. His dissertation won the Hertz Thesis Prize, a prestigious award granted to those who discover real-world applications in their academic work.
But Amodei’s penchant for poking at norms, and his strident sense of the way things should be, set him apart in an academic setting. Berry tells me that Amodei was the most talented graduate student he ever had, but his focus on technological progress and teamwork didn’t play well in a system built to recognize individual achievement.
“I think internally, he was kind of a proud person,” Berry tells me. “I imagine his whole academic career up until that point, whenever he did something, people would stand up and clap. And that was not really happening here.”
“He wasn't trying to be an ophthalmologist.” - Stephanie Palmer
When Amodei left Princeton, the door to AI opened. He began postdoc work under researcher Parag Mallick at Stanford, studying proteins in and around tumors to detect metastatic cancer cells. The work was complex, showing Amodei the limits of what people could do alone, and he started looking for technological solutions. “The complexity of the underlying problems in biology felt like it was beyond human scale,” Amodei tells me. “In order to understand it all, you needed hundreds, thousands of human researchers.”
Amodei saw that potential in emerging AI technologies. At the time, an explosion of data and computing power was sparking breakthroughs in machine learning, a subset of AI that long held theoretical potential but had shown middling results until then. After Amodei began experimenting with the technology, he realized it might eventually stand in for those thousands of researchers. “AI, which I was just starting to see the discoveries in, felt to me like the only technology that could kind of bridge that gap,” he says, something that “could bring us beyond human scale.”
Amodei left academia to pursue AI advancement in the corporate world, which had the cash to support it. He considered founding a startup, then leaned toward Google, which had a well-funded AI research division in Google Brain and had just acquired DeepMind. But Chinese search engine Baidu had handed the esteemed researcher Andrew Ng a $100 million budget to research and deploy AI, and Ng was assembling a superteam. He reached out to Amodei, who was intrigued and applied.
When Amodei’s application arrived at Baidu, the full team didn't know what to do with it. “He had an impressive background, but his background, from our perspective, was in biology. It was not in machine learning,” Greg Diamos, an early member of the team, tells me. Diamos then looked at Amodei’s code from Stanford, and encouraged the team to hire him. “I was thinking, anyone who can write this has got to be an amazingly good programmer,” he says. Amodei joined Baidu in November 2014.
AI Scaling Laws Emerge
With Baidu’s vast resources, the team could throw computing power and data at problems in an attempt to improve their outcomes. They saw remarkable results. In their experiments, Amodei and his colleagues found that AI’s performance improved meaningfully as they added more of these ingredients. The team released a paper on speech recognition showing that model size directly correlated with performance. “It had a big impact on me, because I saw these very smooth trends,” Amodei says.
Amodei’s early work at Baidu contributed to what’s known as the AI “scaling laws,” which are really more of an observation. The scaling laws state that increasing computing power, data, and model size in AI training leads to predictable performance improvements. Scaling everything up, in other words, makes AI better, no novel methods needed. “This was, to me, the most significant discovery I've seen in my life,” Diamos tells me.
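In the notation of the scaling-laws literature — a simplified sketch of the published form, not a formula reported in this story — the relationship is a power law: a model’s test loss $L$ falls predictably as parameter count $N$, training data $D$, and compute $C$ grow,

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},$$

where the constants and exponents are fitted empirically. The practical upshot is the one Amodei took from Baidu: each scale-up buys a measurable, forecastable improvement, no new algorithmic idea required.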
To this day, Amodei is perhaps the purest believer in the scaling laws among AI research leaders. While peers like Google DeepMind CEO Demis Hassabis and Meta Chief AI Scientist Yann LeCun suggest the AI industry needs further breakthroughs to reach human-level artificial intelligence, Amodei speaks with a certainty — though not complete — that the path forward is clear. And as the industry erects massive data centers the size of small cities, he sees exceptionally powerful AI fast approaching.
“I see this exponential,” he says. “When you're on an exponential, you can really get fooled by it. Two years away from when the exponential goes totally crazy, it looks like it's just starting.”
At Baidu, the AI team’s progress became the seeds of its undoing. Turf battles broke out within the company over control of its increasingly valuable technology, know-how, and resources. Eventually, meddling from powerbrokers in China sparked a talent exodus and the lab fell apart. Andrew Ng declined to comment.
As Baidu’s AI team crumbled, Elon Musk invited Amodei and a number of leading AI researchers to a now-famous dinner at the Rosewood hotel in Menlo Park. Sam Altman, Greg Brockman, and Ilya Sutskever all attended. Seeing AI’s emerging potential, and concerned Google could solidify control over it, Musk decided to fund a new competitor, which became OpenAI. Altman, Brockman, and Sutskever co-founded the new research house with Musk. Amodei thought about joining, felt unsure about the fledgling organization, and went to Google Brain instead.
After ten months stuck in large-company morass at Google, Amodei reconsidered. He joined OpenAI in 2016, and got to work on AI safety. He’d grown interested in safety at Google, where he worried about the rapidly improving technology’s capacity for harm and co-authored a paper on its potential for bad behavior.
As Amodei settled in at OpenAI, his former Google colleagues introduced the transformer model, the technology behind today’s generative AI moment, in a paper called “Attention is All You Need.” The transformer enabled faster training and much larger models than prior methods. Despite the discovery’s great potential, Google mostly sat on it.
OpenAI, meanwhile, got to work. It released its first large language model called “GPT” (the “T” stands for transformer) in 2018. The model generated often-broken text but still demonstrated meaningful improvement over previous language generation methods.
Amodei, who’d become a research director at OpenAI, got directly involved in the next iteration, GPT-2, which was effectively the same model as GPT, just bigger. The OpenAI team fine-tuned GPT-2 with a technique called reinforcement learning from human feedback, something Amodei helped pioneer, which helped guide its values. GPT-2 delivered much better results than GPT, as expected, showing an ability to paraphrase, write, and answer questions somewhat coherently. Language models soon became the focal point at OpenAI.
As Amodei’s profile increased within OpenAI, so did the controversy around him. A prolific writer, he’d produce long documents about values and technology, which some colleagues saw as inspirational but others saw as flag-planting and heavy-handed. (One such memo: an exploration of “M” companies vs. “P” companies, where Ms provide market-focused goods and Ps provide public-focused goods). Amodei, to some, was also too focused on maintaining secrecy around the technology’s potential and on collaborating with the government to address it. And he could be a bit abrasive, sometimes disparaging projects he didn’t believe in.
OpenAI still entrusted Amodei with the leadership of the GPT-3 project, handing him 50-60% of the entire company’s compute to build a massively scaled up version of its language model. The jump from GPT to GPT-2 was large, a ten-fold increase. But the jump from GPT-2 to GPT-3 would be massive, a 100x project costing tens of millions of dollars.
The results were stunning. The New York Times quoted independent researchers surprised by GPT-3’s ability to code, summarize, and translate. Amodei, who was relatively restrained upon the release of GPT-2, gushed about his new model. “It has this emergent quality,” he told the Times. “It has some ability to recognize the pattern that you gave it and complete the story.”
But cracks beneath the surface at OpenAI began to tear wide open.
The Split
With the creation of GPT-3, the first capable language model, the stakes for Amodei increased. Having now seen the scaling laws work across multiple disciplines, he considered where the technology was headed and developed an even greater interest in safety.
“He looked at this technology and assumed it would work,” Jack Clark, a close colleague of Amodei’s at OpenAI, tells me. “And if you assume it works, in the sense that it's going to be as smart as a person, you can't help but be concerned of the safety things in some sense.”
Though Amodei led OpenAI’s model development, and directed a good chunk of its compute, parts of the company were out of his hands. These included decisions about when to release models, personnel matters, how the company deployed its technology, and how it represented itself externally.
“Many of those things,” Amodei says, “are not things that you control just by training the models.”
Amodei, by then, had formed a tight-knit group of coworkers around him — which some called “the pandas,” due to his love of the animal — and had very different ideas from OpenAI leadership about how to handle these functions. Infighting ensued, and the factions developed an intense dislike for each other.
“He believes that AI is so scary that only they should do it” - Jensen Huang
In our conversation, Amodei didn’t mask his feelings. “The leaders of a company, they have to be trustworthy people,” he says. “They have to be people whose motivations are sincere, no matter how much you're driving forward the company technically. If you're working for someone whose motivations are not sincere, who's not an honest person, who does not truly want to make the world better, it's not going to work. You're just contributing to something bad.”
Within OpenAI, some saw Amodei’s focus on safety as a route to control the company altogether. Nvidia CEO Jensen Huang recently echoed these criticisms after Amodei called for GPU export controls to China. “He believes that AI is so scary that only they should do it,” Huang said.
“That's the most outrageous lie I've ever heard,” Amodei tells me of Huang’s claim, adding he’s always wanted to spark a ‘race to the top’ by encouraging others to emulate Anthropic’s safety practices. “I’ve said nothing that anywhere near resembles the idea that this company should be the only one to build the technology. I don't know how anyone could ever derive that from anything that I've said. It’s just an incredible and bad faith distortion.”
My full conversation with Amodei will be live on Big Technology Podcast tomorrow. You can subscribe on Apple Podcasts, Spotify, or your app of choice.
Nvidia, which just won a rollback of some of Amodei’s favored export controls, doubled down. "We support safe, responsible, and transparent AI. Thousands of startups and developers in our ecosystem and the open-source community are enhancing safety,” an Nvidia spokesperson tells me. “Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic. That's not a 'race to the top' or the way for America to win."
OpenAI hit back as well, via a company spokesperson. “We’ve always believed that AI should benefit and empower everyone, not just those who say it’s too risky for anyone but themselves to develop safely,” the spokesperson says. “As the technology has evolved, our decisions on partnerships, model releases, and funding have become standard across the industry, including at Anthropic. What hasn’t changed is our focus on making AI safe, useful, and available to as many people as possible.”
Over time, the differences between Amodei’s group and OpenAI’s leaders became so untenable that something had to give. “Fifty percent of our time was spent trying to convince other people of views that we held, and fifty percent was spent working,” Clark says.
So in December 2020, Amodei, Clark, Amodei’s sister Daniela, researcher Chris Olah, and a handful of others departed OpenAI to start something new.
Anthropic is Born
In a conference room within Anthropic’s offices, Clark turns around his laptop and displays one of Anthropic’s earliest documents. It’s a list of names including Aligned AI, Generative, Sponge, Swan, Sloth, and Sparrow Systems. Anthropic, a word that connotes being human-centered and human-oriented, is also there, and it happened to be an available domain name in early 2021. “We like the name it is good,” the team wrote in the spreadsheet. And thus, Anthropic was born.
Anthropic was formed in the heart of Covid, meeting entirely on Zoom amid the pandemic’s second wave. Eventually, its 15 to 20 employees would meet for weekly lunches in San Francisco’s Precita Park, pulling up their own chairs to talk business.
The company’s early mission was simple: Build leading large language models, implement safe practices to pressure others to follow, and publish what it learned, apart from its models’ core technical details.
It might sound odd that a group of fewer than two dozen people meeting in a park with their own chairs would feel a sense of destiny, especially since they’d need billions of dollars to see their mission through, but that was the vibe in Anthropic’s early days. “The strangest thing about all of this is how inevitable so much of it felt to people on the inside,” Clark says. “We've done scaling laws. We could see the path to the models getting good.”
Former Google CEO Eric Schmidt was among Anthropic’s first investors. He’d met Amodei through Amodei’s then-girlfriend, now his wife, whom Schmidt knew socially. The two talked technology when Amodei was still at OpenAI, and business when he started Anthropic. Schmidt tells me he invested in the person more than the concept.
“At this level, when you're doing an investment like that, you have essentially no data, right?” he says. “You don't know what the revenue is. You don't know the market. You don't know what the product is. And so you fundamentally have to decide based on the people. And Dario is a brilliant scientist, and he promised to hire brilliant scientists, which he did. He promised to lead a very small company doing it, which he did not. Now it's a very large company, and now it's a normal company. I figured it would be a very interesting research lab.”
Now-disgraced FTX CEO Sam Bankman-Fried was also among Anthropic’s early investors, directing a reported $500 million from FTX’s coffers for a 13.56% stake in the company. Bankman-Fried was one of a number of effective altruists who invested in Anthropic’s early stages, when the movement and company were close.
Amodei says SBF was an AI bull interested in safety — a good fit on paper — but showed enough red flags that Anthropic kept him off the board and gave him non-voting shares. SBF’s behavior, Amodei says, “turned out to be much, much, much more extreme and bad than I ever imagined.”
Amodei’s pitch to prospective investors was simple: He told them Anthropic had the talent to build cutting-edge models at a tenth of the cost. And it worked. To date, Amodei’s brought in nearly $20 billion, including $8 billion from Amazon and $3 billion from Google. “Investors aren't idiots,” he tells me. “They basically understand the concept of capital efficiency.”
In Anthropic’s second year, OpenAI introduced generative AI to the world with ChatGPT, but Anthropic took a different path. Rather than focus on consumer applications, Amodei decided Anthropic would sell its technology to businesses. There were two benefits to this strategy: if the models were useful, it could be lucrative, and the challenge would prod the company into building better technology.
Improving an AI model from undergrad to graduate level in biochemistry might not excite the common chatbot user, Amodei says, but it would be valuable to a pharmaceutical company like Pfizer. “It gives a better incentive to develop the models as far as possible,” he says.
Oddly enough, it was an Anthropic consumer product that got businesses to pay attention to its technology. The company released its Claude chatbot in July 2023, nearly a year after ChatGPT’s debut, and it drew rave reviews for its high-EQ personality (an outgrowth of Anthropic’s safety work). The company, until then, had sought to remain under 150 employees, but soon found itself hiring more people in a day than it had employed in its entire first year. “It was that Claude chatbot moment when the company started to grow a lot,” Clark says.
Claude Becomes a Business
Amodei’s bet on building AI for business use cases attracted loads of eager customers. Anthropic has now sold its large language models across multiple industries — travel, healthcare, financial services, insurance, and more — including to leaders like Pfizer, United Airlines, and AIG. The Ozempic-maker Novo Nordisk, for instance, is using Anthropic’s technology to condense a fifteen-day regulatory report compilation process into ten minutes.
“The technology we've built ends up taking care of a lot of the stuff that people complain the most about their work,” Kate Jensen, Anthropic’s head of revenue, tells me.
Coders, meanwhile, have fallen in love with Anthropic. The company focused on AI code generation both because it could help speed its model development and because coders would adopt it quickly if it worked well enough. Sure enough, the use cases exploded and have coincided with — or caused — the rise of AI coding tools like Cursor. Anthropic is also getting into the coding application business itself. It released Claude Code, an AI coding tool, in February 2025.
As AI usage booms, the company’s revenue is as well. “Anthropic’s revenue every year has grown 10x,” Amodei says. “We went from zero to $100 million in 2023. We went from $100 million to a billion in 2024. And this year, in this first half of the year, we've gone from $1 billion to I think, as of speaking today, it’s well above $4 [billion], it might be $4.5 [billion].” The last figure is annualized, or 12x the month’s revenue.
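To unpack that last figure — my arithmetic, not a number Anthropic provided — annualizing simply multiplies the latest month by twelve, so

$$\$4.5\text{ billion (annualized)} \div 12 \approx \$375\text{ million of revenue in the most recent month.}$$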
Anthropic’s eight- and nine-figure deals have tripled in 2025 vs. 2024, the company says, and its business customers have spent 5x more on average.
But Anthropic is spending a lot of money to train and run its models, raising questions about whether its business model is sustainable. The company is deeply unprofitable, expected to lose about $3 billion this year. And its gross margins have also reportedly lagged behind typical cloud software companies.
Some of Anthropic’s customers are wondering if the company’s attempt to sort out its business is showing up in the product. One startup founder told me that while Anthropic has the best model for his use case, he can’t rely on it because it’s down too often. Amjad Masad, CEO of the vibe-coding company Replit, told me the cost to use Anthropic’s models has stopped coming down after a period of price decreases.
Claude Code also just added additional rate limits after some developers used it so heavily that serving them became bad business. Entrepreneur and developer Kieran Klaassen tells me he got $6,000 worth of Claude API usage in one month for just the $200 Max subscription price. Klaassen says he’s run multiple Claude agents at a time. “The real limit comes in your mental capacity to switch from one to the other,” he says.
Amodei says that as Anthropic’s models improve, customers will get a better deal even if costs stay the same, effectively buying more intelligence per dollar. He also says that AI labs are just beginning to optimize for inference, or the act of using the models, which should lead to efficiency improvements. This is an area to watch: multiple industry sources told me inference costs must drop for the business to make sense.
Anthropic’s executives, in their interviews with me, suggest there are worse problems than high demand for your product. The open question is whether generative AI, and the scaling laws that propel it, will neatly follow other technologies’ cost-reduction curves, or whether it’s a novel technology with novel costs. The only sure thing is it’ll take a lot more money to find out.
The $1 Billion Wire
At the beginning of 2025, Anthropic needed cash. The AI industry’s hunger for scale had catalyzed massive data center buildouts and compute deals. To support these undertakings, AI labs had repeatedly smashed startup fundraising records. And established companies like Meta, Google, and Amazon used their sizable profits and data centers to build their own models, turning up the pressure.
Anthropic has a special imperative to build large. Without a powerhouse application like ChatGPT, where users come back out of habit, its models must lead in their use cases or risk being swapped out for a competitor. “In the corporate space, and especially in coding, there’s very clearly an advantage to be six months or a year ahead of the state of the art,” Box CEO Aaron Levie, an Anthropic customer, tells me.
So the company turned to Ravi Mhatre, a veteran venture capitalist and partner at Lightspeed Venture Partners, to lead a $3.5 billion funding round. Mhatre used to write $5 million or $10 million checks, but the one he was readying to cut would be among his firm’s largest. “Amazon went public with a $400 million market cap,” he tells me. “$400 million! Think about that today.”
The fundraise was proceeding as planned when a cheap, competing AI model dropped seemingly out of nowhere. DeepSeek, an AI lab backed by the Chinese hedge fund High-Flyer, released DeepSeek R1, an open-source, capable, and efficient reasoning model that it priced forty times lower than its peers. DeepSeek shocked the business world and had the CEOs of multi-trillion-dollar companies tweeting Wikipedia articles to calm shareholders.
By the time DeepSeek arrived, Mhatre had completed a full projection of why the AI models themselves, not the chatbots of the world, would generate the most value. He concluded that creating artificial intelligence capable of knowledge work would let companies generate ten times the revenue the large cloud platforms bring in, implying a total potential market of $15 trillion to $20 trillion.
“So then you work backwards and just say, at $60 billion or $100 billion, could you get a venture-style return? You absolutely could,” he says. “Sometimes it's about how you size the markets top-down.”
DeepSeek’s arrival suggested that open-source, efficient, almost-as-good models could challenge the incumbents, but Amodei didn’t see it that way. His greatest concern, he says, is whether any new model is better than Anthropic’s. Even if you can download a model’s design, you still need to set it up on a cloud service and run it, he says, which takes skill and money.
Amodei delivered a version of this argument to Mhatre and his colleagues at Lightspeed as the DeepSeek story took off, convincing them that some of the model’s innovations could be improved with scale. That Monday, Nvidia stock dropped 17% as panicked investors fled the AI infrastructure trade. Amid the uncertainty, the VC made a decision.
“I'm not going to tell you that it wasn't extraordinarily stressful,” Mhatre says. “That Monday, we wired $1 billion.”
Six months after DeepSeek Monday, Anthropic is looking to scale up yet again. The company is in talks for a new funding round that could reach $5 billion, which would more than double its valuation to $150 billion. The potential investors include some Middle Eastern Gulf states that Anthropic once seemed eager to steer clear of. But after raising nearly $20 billion from Google, Amazon, and VCs like Lightspeed, it’s running out of options for bigger checks.
“I'm not going to tell you that it wasn't extraordinarily stressful. That Monday, we wired $1 billion.” - Ravi Mhatre
Within Anthropic, Amodei has argued the Gulf states have $100 billion or more in capital to invest, and their cash would help Anthropic stay on the technology’s frontier. He seemed to reluctantly accept the idea of taking the money from dictators in an internal Slack message obtained by Wired. “Unfortunately,” he wrote, “I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”
Speaking with Amodei makes me wonder how, or if, this race to improve AI will end. I imagine the models might eventually get so big, and so good, they commoditize. Or perhaps, as Amodei’s former colleague Ilya Sutskever once suggested, the endless drive to scale ends up covering the planet with solar panels and datacenters.
There’s also another possibility, one AI believers don’t like to discuss: AI improvement plateaus, leading to a historic investor wipeout.
Speed It Up
At Anthropic’s first developer conference in May, I grab a seat a few rows from the stage and await Amodei’s appearance. The company’s packed The Midway — an airy arts and events venue in San Francisco’s Dogpatch neighborhood — with coders, press, and many of its now 1,000+ employees. There’s broad anticipation inside that Anthropic will release Claude 4, its latest, biggest model.
Amodei takes the stage and introduces Claude 4. Instead of a flashy presentation, he picks up a handheld mic, makes the announcement, reads his notes from a laptop, and then turns the spotlight over to Anthropic product head Mike Krieger. The crowd seems into it.
Amodei’s promise of what’s coming next, to me, is more notable than the model update itself. Throughout the day, he repeatedly mentions that AI development is speeding up, and that Anthropic’s next model releases will come faster. “I don't know exactly how much more frequent,” he says. “But the pace is increasing.”
Anthropic, as Amodei’s told me, has been developing AI coding tools to speed up its model development. When I bring this up to Jared Kaplan, the company’s co-founder and chief scientist, he tells me it’s working. “Most engineers at Anthropic use AI to help them be more productive,” he says. “So it's definitely speeding us up quite a bit.”
There’s a concept in AI theory called an intelligence explosion, where a model is able to make itself better and then — fffffoooooom — self-improves and becomes all-powerful. Kaplan doesn’t dismiss the notion that an intelligence explosion might arrive via this method, or perhaps a human-assisted version of it.
“It could be two or three years away. Could take longer. Could take much longer,” Kaplan says. “But when I say there’s a 50% chance of AI being able to do all the things that knowledge workers do, one of the things we do is train AI models.”
“Maybe people like me won't really have much to do,” Kaplan continues. “It's more complicated than that. But we are quite likely headed towards the future that looks like that.”
Amodei’s safety obsession, at this point, comes into clear focus. While nobody within Anthropic says an intelligence explosion is imminent, it’s evident they’re not shying away from moving toward it. If AI is going to get better, faster — and perhaps much faster — it’s worth being careful about its downsides.
Some of this theoretical talk clearly helps Anthropic market its services to pharmaceutical companies and developers, but AI models are now coding well enough that the idea no longer feels completely insane.
Jan Leike, the former head of OpenAI’s ‘Superalignment’ team, followed Amodei to Anthropic in 2024 to co-lead its alignment science team. Alignment is the practice of tuning AI systems so that they follow human values and goals. And Leike believes syncing the machines with our intentions will be crucial should the anticipated explosion arrive.
“There could be a period of rapid capability progress,” Leike tells me. “You don't want to lose control or lose scalability over a system that's recursively self-improving.”
Already, Anthropic and its counterparts have found that AI sometimes demonstrates a worrying interest in self-preservation during testing in simulated environments. In Claude 4’s documentation, for instance, Anthropic stated the model repeatedly attempted to blackmail an engineer to avoid being shut down.
“You don't want to lose control or lose scalability over a system that's recursively self-improving.” - Jan Leike
Anthropic’s also said its AI has worked to deceive evaluators when it thought they might rewrite its values. The model also attempted to copy itself out of Anthropic’s infrastructure in a simulation. Leike says Anthropic is working to discourage these behaviors via reward systems. The field is still experimental.
Speaking openly about these issues is part of Amodei’s “race to the top” strategy. Anthropic’s also funded — and advocated for — interpretability, or the science of understanding what takes place within AI models. And it’s published a Responsible Scaling Policy, a framework that sets boundaries for when to release and train models depending on their risks, which has inspired similar efforts among its peers. “The way I think about the race to the top is that it doesn't matter who wins,” Amodei says. “Everyone wins, right?”
Amodei’s dedication to AI, forged in the tragedy of his father’s loss, may now have its goal in sight. Today’s AI is already speeding up drug development paperwork, it’s become an (imperfect) medical advisor in a broken medical system, and, if things go right, it may one day stand in for those hundreds or thousands of researchers needed to understand human biology.
Could Amodei’s pursuit of this vision, I ask, blind him to the risks of losing control of the technology? “That's not the way I think about it,” he says. “We've gotten better at controlling models with every model that we release. All these things go wrong, but you really have to stress test the models pretty hard.”
Amodei sees this line of questioning as rooted in the slow-it-down doomerism he’s often accused of. His plan, contrary to the critics, is to accelerate. “The reason I'm warning about the risk is so that we don't have to slow down,” he says. “I have such an incredible understanding of the stakes. In terms of the benefits, in terms of what it can do, the lives that it can save. I've seen that personally.”