Big Technology

Demis Hassabis and Sergey Brin on AI Scaling, AGI Timeline, Robotics, Simulation Theory

Google co-founder Brin says “anybody who's a computer scientist should not be retired right now. They should be working on AI.”

Alex Kantrowitz
May 21, 2025


Google this week announced a series of impressive AI updates at its I/O developer conference, including improved video generation, expanded AI Mode in Search, and an advanced reasoning architecture called Deep Think.

But the news came as the AI industry reckons with questions about how much better these models can get, and whether there is a path toward artificial general intelligence (a term some would rather let die due to overuse).

To tackle these questions about the frontier of AI, I sat down with Google DeepMind CEO Demis Hassabis for an on-stage interview at Google I/O on Tuesday. And just after I got mic’d up, Google co-founder Sergey Brin walked in and joined as a surprise guest. The conversation went deep into the technology and expanded into some interesting directions, including the future of the web, robotics, and simulation theory.

You can listen to my conversation with Hassabis and Brin on Apple Podcasts, Spotify, or your podcast app of choice. If you prefer reading, here’s our full discussion, edited lightly for length and clarity.

Alex Kantrowitz: Demis, given what we know today about AI frontier models, how much improvement is there left to be unlocked?

Demis Hassabis: We're seeing incredible gains with the existing techniques, pushing them to the limit. But we're also inventing new things all the time as well. And I think getting all the way to something like AGI may require one or two more new breakthroughs. We have lots of promising ideas that we're cooking up, that we hope to bring into the main Gemini branch.

There's been this debate about whether scaling up data centers solves all problems when it comes to building better AI models. When you’re working on today’s models, is scale still the star, or is it a supporting actor?

Hassabis: I've always been of the opinion you need both. You need to scale to the maximum the techniques that you know about. You want to exploit them to the limit, whether that's data or compute or scale. And at the same time, you want to spend a bunch of effort on what's coming next, maybe six months or a year down the line so you have the next innovation that might do a 10x leap in some way to intersect with the scale. So you want both. Sergey, what do you think?

Sergey Brin: I agree it takes both. You can have algorithmic improvements and simple compute improvements, better chips, more chips, more power, and bigger data centers. Historically, if you look at things like the N-body problem and simulating gravitational bodies and things like that—as you plot it, the algorithmic advances have actually beaten out the computational advances, even with Moore's law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances, but both of them are coming up now, so we're getting the benefits of both.

Okay, but is the majority of your improvement coming from scale? There's talk about how the world will be just wallpapered with data centers. Is that your vision?

Hassabis: We're definitely going to need a lot more data centers. It still amazes me, from a scientific point of view, that we turn sand into thinking machines. It's pretty incredible. But it's not just for training. Now we've got these models that everyone wants to use, and we're seeing incredible demand for Gemini 2.5 Pro and Gemini Flash; we're really excited about how performant that is for the incredibly low cost.

The whole world is going to want to use these things. We're going to need a lot of data centers for serving and also for inference-time compute. You saw, today, 2.5 Pro Deep Think. The more time you give it, the better it will be. And for certain very high-value, very difficult tasks, it will be worth letting it think for a very long time. We're thinking about how to push that even further. Again, that's going to require a lot of chips and runtime.

We're about a year into this reasoning paradigm. Demis, can you help us contextualize the magnitude of improvement we're seeing from reasoning?

Hassabis: We've always been big believers in what we're now calling the thinking paradigm. If you go back to our very early work on things like AlphaGo, AlphaZero, and our agent work on playing games, they all had this attribute of a thinking system on top of a model. And you can quantify how much difference that makes.

If you look at a game like chess or Go, we had versions of AlphaGo and AlphaZero with the thinking turned off, so it was just the model telling you its first idea. And it's not bad. It's maybe like master level, something like that. But then, if you turn the thinking on, it's way beyond World Champion level. It's like a 600+ Elo difference between the two versions. You can see that in games, let alone the real world, which is way more complicated.

Of course, the challenge is that your model needs to be a world model, and that's much harder than building a model of a simple game. It has errors in it, and those can compound over longer-term plans. But I think we're making really good progress on all those fronts.

Brin: As Demis said, DeepMind really pioneered a lot of this reinforcement learning work. And what they did with AlphaGo and AlphaZero, as you mentioned, showed it would take something like 5,000 times as much training to match what you were able to do with still a lot of training plus the inference-time compute you were using with Go. So it's obviously a huge advantage. And obviously, like most of us, we get some benefit by thinking before we speak. I was reminded to do that. The AIs, obviously, are much stronger once you add that capability. We're just at the tip of the iceberg right now. In that sense, it's been less than a year that these models have really been around.

Hassabis: An AI during its thinking process can also use a bunch of tools, or even other AIs, to improve what the final output is. It's going to be an incredibly powerful paradigm.
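
For a sense of scale on the Elo figure Hassabis cites above: under the standard Elo expected-score formula, a 600-point gap implies the stronger version wins the overwhelming majority of games. A minimal illustrative calculation in Python (the formula is standard; the 600-point figure comes from the conversation, not from any published AlphaGo ratings):

```python
# Illustrative only: the standard Elo expected-score formula,
# E = 1 / (1 + 10^(-diff / 400)), applied to the ~600-point gap
# mentioned above. For intuition, not actual AlphaGo ratings.

def elo_expected_score(rating_diff: float) -> float:
    """Expected score (roughly, win probability) for the higher-rated player."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

if __name__ == "__main__":
    for diff in (100, 300, 600):
        print(f"+{diff} Elo -> expected score ~{elo_expected_score(diff):.3f}")
    # +600 Elo works out to roughly 0.97: the "thinking on" version
    # would be expected to win the vast majority of games against
    # the "thinking off" version.
```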

Deep Think is a new feature that basically runs a bunch of parallel reasoning processes, then checks them against each other and optimizes. It's like reasoning on steroids. Demis, is this one of those few advances you mentioned that might get the industry closer to AGI?

Hassabis: It's maybe part of one, shall we say? There are others too. We need to keep improving reasoning. Where does true invention come from, where you're not just solving a math conjecture, you're actually proposing one, or hypothesizing a new theory in physics? We don't have systems yet that can do that type of creativity. They're coming.

We need a lot of advances on the accuracy of the world models that we're building. You saw that with Veo. The potential Veo 3 has amazes me, like how it can intuit the physics of light and gravity. I used to work on computer games in my early career, not just the AI, but also graphics engines. I remember having to do all of this by hand and program all of the lighting, and the shaders, and all of these things. It’s incredibly complicated stuff we used to do in early games. And now it's just intuiting it within the model. It's pretty astounding.
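
As an aside on the "parallel reasoning processes checked against each other" idea raised in the question above: the simplest public version of this pattern is self-consistency-style voting, where you sample several independent reasoning attempts and keep the answer they agree on. A rough sketch, with generate_answer as a hypothetical stand-in for a model call (this is not a description of how Deep Think is actually implemented):

```python
from collections import Counter
from typing import Callable, List

def parallel_reason(prompt: str,
                    generate_answer: Callable[[str], str],
                    n_samples: int = 8) -> str:
    """Sample several independent reasoning attempts and return
    the answer they most often agree on (majority vote)."""
    answers: List[str] = [generate_answer(prompt) for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer
```

Production systems presumably do something far more sophisticated than a majority vote, but the sketch captures the basic trade: more parallel samples cost more compute and buy more consistency.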

Demis, I saw you shared an image of a frying pan with some onions, some oil. There was no subliminal messaging about that?

Hassabis: No, not really. Just maybe a subtle, subtle message.

There's a movement within the AI world right now to stop using the term AGI. Some see it as so overused as to be meaningless. But Demis, you think it's important. Why?

Hassabis: It's very important. There are two things that are getting a little bit conflated. One is, what can a typical person do? And we're all very capable, but, however capable we are, there's only a certain slice of things that one can be an expert in. You could say, what can it do that 90% of humans can’t do? That's obviously going to be economically very important, and from a product perspective, also very important. So it's a very important milestone. We should say that's typical human intelligence.

But what I'm interested in, and what I would call AGI, is a more theoretical construct, which is: what is the human brain, as an architecture, able to do? And the human brain is an important reference point, because it's the only evidence we have, maybe in the universe, that general intelligence is possible. You'd have to show your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. It's not one brain but the same brain architecture: what Einstein did, what Mozart was able to do, what Marie Curie did and so on. It's clear to me today’s systems don't have that.

The hype today on AGI is somewhat overblown because our systems are not consistent enough to be considered fully “general.” Yet they're quite general. They can do thousands of things. You've seen many impressive things today, but every one of us has experience with today's chatbots and assistants. You can easily, within a few minutes, find some obvious flaw with them: some high school math problem it doesn't solve or some basic game it can't play. It's not very difficult to find those holes in the system, and for something to be called AGI, it would need to be much more consistent across the board than it is today. It should take a team of experts a couple of months to find an obvious hole in it, whereas today it takes an individual only minutes to find one.

Sergey, this is a good one for you. Do you think that AGI is going to be reached by one company and it's game over? Or could you see Google having AGI, OpenAI having AGI, Anthropic having AGI, China having AGI?

Brin: One company or country or entity will reach AGI first. Now, it is a little bit of a spectrum; it's not a completely precise thing. It's conceivable that there will be more than one roughly in that range at the same time. After that, what happens? It's very hard to foresee, but you could certainly imagine there are going to be multiple entities that come through. In our AI space, when we make a certain advance, other companies are quick to follow, and vice versa. When other companies make certain advances, it's a constant leapfrog. There's an inspiration element that you see, and that would probably encourage more and more entities to cross that threshold.

Demis, what do you think?

Hassabis: It is important for the field to agree on a definition of AGI. We should try to coalesce around one, assuming there is one. There probably will be some organizations that get there first. It's important that those first systems are built reliably and safely, and after that, if that's the case, we can imagine using them to shard off many systems that have safe architectures built underneath them. You could have personal AGIs and all sorts of things happening. As Sergey says, it's pretty difficult to predict, to see beyond the event horizon, what that's going to be like.

The conventional wisdom is that AGI must be knowledge, the intelligence of the brain. What about the intelligence of the heart? Demis, does AI have to have emotion to be considered AGI?

Hassabis: It’ll need to understand emotion. It would be almost a design decision whether we wanted to mimic emotions. I don't see any reason why it couldn't, in theory, but it might be different, or it might be not necessary, or in fact not desirable, for them to have the sort of emotional reactions that we do as humans. Again, it's a bit of an open question as we get closer to this AGI timeframe, which is on a 5-to-10-year timescale. We have a bit of time, not much time, but some time, to research those kinds of questions.

When I think about how the time frame might be shrunk, I wonder if it's going to be the creation of self-improving systems. Last week, Google DeepMind announced AlphaEvolve, which is an AI that helps design better algorithms. Demis, are you trying to cause an intelligence explosion?

Hassabis: No, not an uncontrolled one. It's an interesting first experiment. It's an amazing system, and a great team is working on that. What's interesting now is to start pairing other types of techniques, in this case evolutionary programming techniques, with the latest foundation models, which are getting increasingly powerful, and actually to see in our exploratory work a lot more of these combinatorial systems that pair different approaches together.

And you're right, someone discovering a self-improvement loop would be one way things might accelerate further than they're even going today. And we've seen it before with our own work on things like AlphaZero, learning chess and Go and any two-player game from scratch in less than 24 hours, starting from random with self-improving processes. So we know it's possible, but again, those are quite limited game domains which are very well described. The real world is far messier and far more complex. It remains to be seen if that type of approach can work in a more general way.
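
To make the "evolutionary techniques paired with a foundation model" idea concrete, here is a toy sketch of the general pattern: keep a small population of candidate solutions, ask a proposer to mutate promising ones, and keep whatever scores better. The propose_variant and score callables are hypothetical stand-ins; this illustrates the generic evolutionary loop, not AlphaEvolve's actual algorithm:

```python
import random
from typing import Callable, List

def evolve(seed: str,
           propose_variant: Callable[[str], str],  # e.g. a model proposing a mutation
           score: Callable[[str], float],          # automated evaluation of a candidate
           generations: int = 50,
           population_size: int = 8) -> str:
    """Generic evolutionary search: mutate survivors, keep the fittest."""
    population: List[str] = [seed]
    for _ in range(generations):
        # Keep only the best-scoring candidates.
        population.sort(key=score, reverse=True)
        population = population[:population_size]
        # Ask the proposer to mutate a randomly chosen survivor.
        parent = random.choice(population)
        population.append(propose_variant(parent))
    return max(population, key=score)
```

The key ingredient is an automated, reliable score function; that is what lets the loop run many generations without a human in it.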

We've talked about some very powerful systems. And it's a race. It's a race to develop these systems. Sergey, is that why you came back to Google?

This post is for paid subscribers
