Scale AI’s Alexandr Wang’s Predictions for AI In 2025: Geopolitical Turf Wars Escalate, AI Agents Progress, Data Beats Compute
The Scale AI CEO shares his three predictions for where the industry is headed and whether the sector is about to hit a “data wall.”
Alexandr Wang has a few bold predictions for AI’s direction in 2025. The 27-year-old Scale AI founder, son of two Los Alamos nuclear researchers, has built a $14 billion company by anticipating what’s coming next. He started labeling data for self-driving vehicles, moved into creating data for large language models, and now offers AI application building services to businesses and the U.S. government.
In 2025, Wang anticipates intensifying geopolitical turf wars as the U.S. and China both attempt to get the world to adopt their LLMs. He sees agents beginning to find everyday uses. And he’s confident scaling up AI models will depend more on data than compute.
I sat down with Wang on this week’s Big Technology Podcast to discuss, and interrogate, his predictions. You can read the Q&A below, edited lightly for length and clarity, or listen to the full podcast on Apple Podcasts, Spotify, or your app of choice.
Alex Kantrowitz: Let’s kick off with the prediction I find most interesting, which is that you see geopolitical shifts coming in the next year in the world of AI. What’s going to happen?
One of AI’s big questions has always been the U.S. versus China arms race. Certainly it's been a pretty tight race at various points over the past decade. With autonomous vehicles, it was very close. With military use cases of AI, it’s very close. And with generative AI and large language models, it's once again quite close. I do expect the new administration will come in and help accelerate things, to enable the U.S. to compete more aggressively with China and ultimately come out ahead.
But my prediction really is that we're going to be talking a lot more not only about which of the two superpowers wins, but which one has AI systems that are going to be adoptable and exportable worldwide. So, which country is going to have the AI technology that becomes sort of the infrastructure and the foundation of the world's AI systems?
Most of the globe is caught in the middle between the U.S. and China. And there are always these questions where you have to pick a side when it comes to which technology you're going to rely on. And so there are many countries – we like to call these “geopolitical swing states” – which could go either way. They could go to Western and U.S. technologies, or they could go to Chinese technologies.
I think one of the best examples of this was in the past year, when the Biden administration posed to the UAE, “Hey, which way are you going to go in terms of AI technology? You could either go into the sort of Huawei-China stack, or you could go into the Microsoft-United States technology stack for AI,” and they ultimately picked the U.S. stack. But I think this is going to be one of the under-the-line battles that really defines the course of the next few decades of geopolitics. I don't think we can really afford another Chinese expansionary expedition like the Belt and Road Initiative or Huawei technology being exported very broadly. We need to ensure that Western AI technology is dominant globally.
So basically, what you're positing is that there's a series of AI models that U.S. companies like OpenAI, Google, Amazon, Meta, are building, and then there's a series of models that Chinese companies like Huawei are building, and they're going to be in competition with each other around the globe. And it's important that the U.S. wins, or the Western version wins (because we also have Mistral in France). Why is that important?
There's two sides to this. I think first there's the tactical question of which one is more powerful, U.S. AI versus Chinese AI. And this is very relevant for national security. I mean, if you believe that there's some potential for a conflict over Taiwan, or some other kind of hot conflict between the U.S. and China, the United States needs to ensure that we have the best possible AI technology to ensure that we would prevail in any kind of hot conflict, that democracy would prevail, and ultimately that we're able to continue ensuring our way of life.
Having the better ChatGPT isn't going to make you victorious in a conflict over Taiwan.
Certainly it will not be the only factor, but the history of war is a history of military technology, time and time again. When new technologies and new technological paradigms come to warfare, they have the ability to fundamentally shift the tides. We saw that most recently in Ukraine, with drone warfare all of a sudden becoming the major paradigm. By the way, I think drone warfare in Ukraine is becoming more and more enhanced by generative AI and more advanced autonomy. So that's definitely one thread that is continuing.
Before you move on, where would you say the U.S. and China are in terms of competitiveness on AI technology? Especially in terms of the way that they apply it in war?
So if you look at just the raw technology, the U.S. is ahead, but China is moving fast. We like to break it down across three dimensions. It boils down to algorithms, computational power, and data. Algorithms are the kinds that folks at OpenAI or Google or other companies build. Computational power comes down to chips and GPUs – the kind that NVIDIA produces out of TSMC fabs in Taiwan. And then lastly is data, which is maybe the least focused on of the three pillars, but certainly just as important for the performance of these AI systems.
If we were to rack and stack versus China, we're ahead on algorithms. We're ahead on computational power, thankfully, due to a lot of the export controls that the Commerce Department has put in place. And then on data, it's a little bit of a jump ball. You know, the conventional wisdom is that China is actually probably going to be ahead on data in the long run because they don't care as much about personal liberties and protecting personal data in the same way that we do in the West. And so right now, the U.S. is ahead. That being said, the deployment of AI to the military is hard to track exactly. The PLA (People's Liberation Army) doesn't tell us exactly what they're doing or what they're up to. But I certainly am worried that they're moving faster than we are in the U.S.
And this has been the sort of pre-existing precedent when it comes to China's use of AI technology for national security or military use cases. The best example is in the past decade, when they rolled out facial recognition technology widespread across the whole country for things like Uyghur suppression or surveillance of their citizen base. And they did that incredibly quickly, much faster than any comparable technology scale-up in the United States. So my expectation is that they will actually deploy AI to their military faster than the U.S., even though the U.S. is ahead on the core technology.
So, it's important for the AI industries to be stronger, especially as this stuff gets put into production on the battlefield with things like drones and computer vision applied on top of satellite imagery…
And there’s a more subtle point, which is that not only does it matter for hot conflict, for war, et cetera, it also matters in terms of which technology becomes, commercially and economically speaking, the global standard. Because in the U.S., we benefit as a country from being the global standard in a number of areas. We are the global standard for currency, and that is something that's incredibly beneficial to our economy and to everything that we do. Google and a lot of our technology companies are the global standards for search and for social media. We benefit a lot from these being the global standards.
And I think when it comes to AI, you know, it's a very interesting technology, because not only is it a sort of technological utility, but it's also a cultural technology. Ultimately, if a lot of people on the globe are talking to AIs to understand what to think or how to feel about certain things, then ensuring that the AI substrate that gets exported around the world is one that is democratic in nature, that believes in the ideas of free speech and open conversation about whatever topic is necessary – that's a really powerful cultural export that we can have from the United States that will, over time, I think, fulfill a lot of America's vision of ensuring that we have freedom and liberty for all. So I think it's one of these things that is unbelievably important. Even beyond the hot military implications, it's culturally important just for ensuring that the United States is able to export our ideals.
So you're saying there's a soft power issue here as well?
Yes, exactly.
I want to ask you about China's development of AI, because I always hear two contradictory things about how China's progressing with AI. The first is that they have the government that's willing to put all the resources that they can into building the compute power to train and run models. And then you look at what's actually going on on the ground, which is that, and you can correct me if I'm wrong, right now, China is using a lot of American models, open source models, Meta’s Llama model. So explain this one to me. How has China been able to effectively put all these resources toward the problem, but still has to rely on American open source technology to build the things that they want to build?
Well, there's two major things. One undeniable trend over the past, let's call it five years, has been the collapse of the Chinese startup sector. And this is really driven by policies from the CCP: they killed certain startup industries and really hampered the entire innovation ecosystem. And you see it in the numbers; the amount of capital flowing into the Chinese innovation ecosystem has fallen off a cliff pretty precipitously.
Why did they do that? Was it that the tech industry was growing so large it threatened the government?
That was the fundamental risk. I think that if the government, if the CCP, has a desire to ensure that they consolidate all the power, either they have to nationalize the tech firms, or they have to ensure that they stay weak.
A lot of this hinges on the fact that they do really see the world differently from the way that we do. In the West, it seems totally insane. But in certain doctrines, or with certain ideals, I think it can make total sense, right? A lot of what they have to do in AI is just catch up and copy what we've been up to, which they have been pretty successful at.
So, for example, OpenAI released o1 – it released the o1 preview a number of months ago. This is OpenAI's advanced reasoning model, which is great at scientific reasoning and mathematical reasoning and code, etc. And the very first replication of that model – of that paradigm of model – actually came out of China, from a lab called DeepSeek: the DeepSeek R1 model. So they certainly are extremely good at catching up.
Now, there is a very real hamper on a lot of their progress too, which is the chip export controls. And this has been an incredible effort, I think, from the U.S. Department of Commerce and the Biden administration in general, to hamper the ability of the Chinese AI ecosystem to build foundation models of similar size, scale, and magnitude as the ones we have in the U.S., because they have not been able to get access to the cutting-edge NVIDIA GPUs that we have in the States. And so whether or not you think that's good or bad policy, it has hampered the progress of Chinese AI development, which enables us to stay ahead.
Let's circle back to your prediction that the U.S. and China will be head to head, trying to get their vision for AI adopted across the globe. Who do you think is going to win there?
I think that the trend right now is very positive in the direction of the United States, or of the West. Broadly speaking, we have the most powerful models. We also have the most compelling value proposition. Our models are going to keep getting better, and yes, maybe the Chinese ones catch up over time. But we are the innovation ecosystem. We are going to be the ones who innovate far ahead of the adversaries.
That being said, I think there’s a flip side. I think you have to look at what the total package the CCP or China might be able to offer — in the Belt and Road Initiative, it was through this sort of “total package” of technology, plus infrastructure build outs, plus debt that managed to move a lot of folks over to their side. I think we need to watch it closely to make sure that we always have a compelling total value proposition.
One sort of sub-prediction that I have, which is important to mention here, is that the technology is moving so quickly that I do think 2025 will be the year where we start to see several militaries around the world start utilizing AI agents in active warfighting environments to great effect. I think you're going to start seeing this in some of the hot wars that we have going, as well as in some advanced militaries that aren't at war. And so I think the temperature on AI deployment to the military is going to go up pretty dramatically over the course of the next year.
I just wrote a post on Big Technology about how AI is going to be an enterprise thing for a while. B2B companies, software companies – not exactly the most exciting stuff in the tech world – are going to be where this stuff is adopted because it solves a problem for them: they have loads of information, they can't organize it, they can't share it, they can't act on it. And generative AI in particular, is quite good at handling that. We don't have an ‘AI phone’ right now, but we have plenty of companies working in AI software, and the military is the perfect example of where it could apply because of all of the information and logistics issues.
You're hitting on the core point, which I think is often glossed over. When people think about the military and war, they often think about the literal battlefield and the actions on top of the battlefield. But 80% of the effort that goes into any war is all of the logistical coordination: the manufacturing of weapons, the manufacturing of various supplies, the logistics and delivery of all the supplies to a battlefield, the decision-making process, the data processing of all the information that's coming in. And so most of what happens actually looks, to your point, a lot like the enterprise; the stakes are just dramatically higher.
How do AI agents help in that case?
There's probably two core areas where I think agents are going to have immediate value. One is in – to reference your point on enterprises – processing huge amounts of data. Right now, most militaries already have more information coming in the door than they have the ability to process. They have terabytes and terabytes of data that come in, whether it's data from the battlefield, data from their partners and allies, data from satellite networks, data from other collection formats, and they need to process that into insight that actually can help them make real decisions about what they should be doing differently.
So, the first is this huge problem of turning massive data ingest into real decision making. That’s a general problem set that fits a lot of sub-areas, whether it's in logistics or intelligence or military operation planning or whatever it might be. The second area where I see it having a very, very real impact is fundamentally in the coordination and optimization of complex systems. And this is really where the logistics or manufacturing cases are very clear: these are incredibly complex processes with lots of moving parts, and it's hard for humans to get their hands around those processes and really optimize them effectively. Whereas AI systems can ingest far more information about the processes, can run simulations on their own around various configurations that might operate better, and they can sort of self-optimize those processes to perform better.
And then there's, I think, the sort of third area, which is more speculative – or “sci fi” – which is the use of AI agents more actively in drone autonomy, or a lot of the autonomous missions that are being run right now. I think this is an area of active experimentation for a lot of militaries, but I think if you start to see that happen, then you will have more autonomous drones that are able to be more and more lethal – more and more effective – and that's going to be a cat and mouse game in and of itself, a real race.
That scares the shit out of me. Are you comfortable with that?
No. I think ultimately we're going to need to have global conversations and global coordination around the degree to which we actually want AI agents to be used actively on the battlefield. That being said, there are hot wars going on right now where militaries and countries are desperate, and I think they'll do whatever they need to in the near term to get a leg up.
I want to get into your second prediction. We already have brought up AI agents, but I think we should go a little bit deeper. People hear about AI agents, and they say, ‘Is that supposed to be something on my computer that's going to book me travel? Book me tables at restaurants? Look things up for me? Do my expense reports?’ We haven't really seen those yet. But you think that’s going to change?
Yeah, I do. I think that 2025 is really going to be the year where we start to see some kind of very basic, primordial AI agents really start working in the consumer realm and creating real consumer adoption.
Another way that I think about this is, we'll see something like a “ChatGPT moment” in 2025 for AI agents. You'll see a product that starts resonating, even though – to technologists – it may not seem like all that, or may not seem like that big of a leap relative to what we had before. And I think a lot of that is going to come from probably two main threads.
First, obviously, models are continuing to improve and getting more reliable. The second is really an evolution in the user interface and experience of what an agent does. Right now, we're so stuck as a tech industry on this sort of “chat paradigm,” having everything be a chat with one of these models. And I think that's a restrictive paradigm for enabling agents to actually start working. To me, what it really means for an agent to start “working” is when a user – or consumers in general – starts actually outsourcing some real workflows to the agent that they would have had to do otherwise. And so we'll start to fully trust the agent to do full end-to-end workflows.
Maybe it will be something around travel. Maybe it will be something around calendaring. Maybe it will be something around producing presentations, or managing your workflow. But we'll start to really offload some of the meaningful chunks of our work to the agents, and there will be something that really starts to take off.
I don't know if it's going to be one of the big labs; it may be a new startup that comes up with it, because I think so much of it will come from experimentation and the natural innovation ecosystem working things out. But you know, what we see is that the models and their capabilities are certainly strong enough to enable a pretty incredible experience. There's all this talk about whether or not we're hitting a wall. But the models are really, really powerful, and we should see something big here.
Okay, so just walk me through what that experience might look like. Since you've imagined the idea that AI agents could end up helping us in 2025, what are some experiences that are in the realm of “feasible?”
So first, let's walk through what an “ideal” AI agent is. An ideal AI agent, I think, is one that is naturally observing all of the core flows of information and context that you're in digitally: It’s in all your Slack threads. It's in all your email threads. It reads your JIRA, or all of your tools, to understand everything that's going on in your work life. And then it helps to organize all that information and start taking certain actions.
One agent that I think would be super beneficial, and one that I think is in the realm of feasible, is something that starts to take a hand at responding to a lot of your emails and flagging when it needs you for additional context or information to be able to address your emails. It can sort of summarize a lot of your emails for you naturally. And so something that just turns the experience of doing email from having to respond piece-by-piece to every single email, to leveling you up to being, ‘Hey, this is all of the overall work streams and workflows,’ and ‘How do you want to engage at a high level on top of those workflows?’
This is a business use case, and I'm curious how everyday people might end up using AI agents, or is that just still a ways off?
In everyone's personal lives, you're also juggling and navigating a whole set of various priorities. You know, I'm planning a trip with my friends over here, I need to get gifts for my family and figure out what they want for Christmas, and I have all of these personal projects which are still sort of sitting there. And so I think, in the same way, something that helps you level up on top of all of the projects you're navigating and helps you coordinate between all of them more naturally – I think that's something that we're going to start seeing.
Now, I don't know the perfect way that that happens, right? But I do think the product experience is so important as a part of this: having a product experience where you don't expect it to be perfect, but you expect it to be pretty good. I think that's like 99% of the challenge, and that's why we haven't seen it yet, despite the fact that the models already can do a lot of this stuff pretty well.
My 2025 prediction is that guys use AI agents to use dating apps for them.
Hopefully they’ll be good dates. I know you had Benioff on the podcast a little bit ago. What are you seeing as the things that seem to make sense from an AI agent perspective?
Benioff, when he came on, talked pretty convincingly that we'll have AI agents at work.
But if we have agents that work on our behalf on the internet, like travel sites, dating sites, social media sites, I'm very curious whether they're going to come up against these bot protection systems. Are they going to do CAPTCHAs on our behalf? Are they going to get the text messages and fill in those numbers so they're able to log into different systems? Because the whole internet has been built to defend against these things.
We will have to sort of fundamentally reformat how the internet works to be able to support it. And I think that in some senses there will be two webs. There will be the web that humans use when they need to navigate stuff on their own, and then there will be the web that agents use.