Google DeepMind CEO Demis Hassabis: The Path To AGI, LLM Creativity, And Google Smart Glasses
Q&A with the Google AI head and Nobel laureate on the state of artificial intelligence today, and where it's heading.
Demis Hassabis is refreshingly measured when discussing the path toward AGI, or human-level artificial intelligence.
The Google DeepMind CEO and recent Nobel Prize winner doesn’t believe any research house will reach AGI this year, and he’s quick to call out those who hype the technology in the name of business goals. But that doesn’t mean he’s not ambitious.
In a wide-ranging interview at Google DeepMind offices in London, Hassabis laid out his vast plans for building smarter AIs, putting Google’s assistants in smart glasses, and using AI to develop virtual cells to attack disease. He also spoke plainly about the challenge of getting LLMs to be creative, and how recent models have tried to deceive their evaluators.
You can listen to (or watch) our full conversation on Apple Podcasts, Spotify (now with video), your podcast app of choice, or YouTube. And the full transcript of our conversation is below, edited lightly for length and clarity.
In this Q&A, Hassabis delivers a masterful deep dive into the state of artificial intelligence today and what’s to come, and I hope you give it a listen or read:
Alex Kantrowitz: Every AI research house is working toward building AGI, or human-level artificial intelligence. Where are we right now in the progression, and how long will it take to get there?
Demis Hassabis: There’s been an incredible amount of progress over the last few years, and actually, over the last decade-plus. We’ve been working on this for more than 20 years, and we’ve had a consistent view of AGI as a system that’s capable of exhibiting all the cognitive capabilities humans can. I think we’re getting closer and closer, but we’re still probably a handful of years away.
What is it going to take to get there?
The models today are pretty capable, but there are still some missing attributes: things like reasoning, hierarchical planning, and long-term memory. There are quite a few capabilities that the current systems don’t have. They’re also not consistent across the board. They’re very strong in some things, but they’re still surprisingly weak and flawed in other areas. You’d want an AGI to have pretty consistent, robust behavior across all cognitive tasks.
One thing that’s clearly missing, and that I’ve always had as a benchmark for AGI, is the ability of these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. They can play a game of Go at a world-champion level. But could a system invent Go? Could it come up with relativity at the time Einstein did, with the information he had? I think today’s systems are still pretty far away from having that kind of creative, inventive capability.
So a couple of years till we hit AGI?
I think we're probably three to five years away.
If someone were to declare that they’ve reached AGI in 2025, that’s probably marketing?