In Pursuing Human-Level Intelligence, The AI Industry Risks Building What It Can’t Control
Instead of asking whether AI can achieve something, perhaps we should ask whether it should.

In front of a packed house at Amsterdam’s World Summit AI on Wednesday, I asked senior researchers at Meta, Google, IBM, and the University of Sussex to raise their hands if they did not want AI to mirror human intelligence. After a few silent moments, no hands went up.
The silence reflected the AI industry’s ambition to build human-level cognition, even at the risk of losing control of it. AI is not sentient now — and won’t be for some time, if ever — but a determined AI industry is already releasing programs that can chat, see, and draw like humans as it tries to get there. And as it marches on, it risks having its progress careen into the dangerous unknown.
“I don't think you can close Pandora's box,” said Grady Booch, chief scientist at IBM, of eventual human-level AI. “Much like nuclear weapons, the cat is out of the bag.”
Comparing AI’s progress to nuclear weapons is apt but incomplete. AI researchers may emulate nuclear scientists’ desire to achieve technical progress despite the consequences, even if the dangers are of a different magnitude. Yet far more people will access AI technology than the few governments that possess nuclear weapons, so there’s little chance of similar restraint. The industry is already struggling to keep pace with its own frenzy of breakthroughs.
The difficulty of containing AI was evident earlier this year after OpenAI introduced Dall-E, its AI art program. From the outset, OpenAI ran Dall-E with thoughtful rules to mitigate its downsides and a slow rollout to assess its impact. But as Dall-E picked up traction, even OpenAI admitted there was little it could do about copycats. "I can only speak to OpenAI,” said OpenAI researcher Lama Ahmad when asked about potential emulators.
Dall-E copycats arrived soon after, with fewer restrictions. Competitors including Stable Diffusion and Midjourney democratized a powerful technology without the same barriers, and everyone started making AI pictures. Dall-E, which had onboarded only 1,000 new users per week until late last month, then opened up to everyone.
Similar patterns are bound to emerge as more AI technology breaks through, regardless of the guardrails original developers employ.
It’s admittedly a strange time to discuss whether AI can mirror human intelligence — and what weird things will happen along the way — because much of what AI does today is elementary. The shortcomings and challenges of current systems are easy to point out, and many in the field prefer not to engage with longer-term questions (like whether AI can become sentient) since they believe their energy is better spent on immediate problems. Short-termists and long-termists are two separate factions in the AI world.
As we’ve seen this year, however, AI advances in a hurry. Progress in large language models made chatbots smarter, and we’re now discussing their sentience (or, more accurately, the lack thereof). AI art was not in the public imagination last year, and it’s everywhere now. AI is also now creating videos from strings of text. Even if you’re a short-termist, the long term can arrive ahead of schedule. I was surprised by how many AI scientists said aloud that they couldn’t — and didn’t want to — define consciousness.
There is an option, of course, to not be like the nuclear weapons scientists. To think differently than how J. Robert Oppenheimer, who led work on the atomic bomb, put it: “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”
Perhaps more thought this time would lead to a better outcome.
Build the best remote team with UpStack's world-class software developers (Sponsored)
UpStack helps you find the best developer for your project. Assess your needs in a quick, 15-minute discovery call with our Client Success Team. That’s all it takes to start our search for your perfect match within our pre-vetted candidate pool.
What Else I’m Reading
Mark Zuckerberg is still stoked about the metaverse. Apple is preparing to push into TV-style advertising on its original programming. Turns out Covid didn’t drive shopping online forever. Manhattan Venture Partners wants out of funding Musk’s Twitter deal. VCs pay thousands for Twitter ghostwriters. Truth Social is returning to the Play Store. Will parkour solve the energy crisis? What happens when people donate their bodies to science. A profile of Pennsylvania senate candidate John Fetterman.
Number Of The Week
£10 million
Approximate value of the paintings that artist Damien Hirst will burn, making good on a promise to NFT buyers that he’d destroy each original work once its digital counterpart was purchased.
Quote Of The Week
“Seriously, legs are hard”
Mark Zuckerberg on the challenge of bringing legs to the metaverse.
Advertise with Big Technology?
Advertising with Big Technology gets your product, service, or cause in front of the tech world’s top decision-makers. To reach 80,000+ plugged-in tech insiders, please reply to this email. We have availability starting in November.
This Week On Big Technology Podcast: Will The Fed Blink And Save Tech — With Ranjan Roy
Ranjan Roy is the co-author of Margins, a Substack newsletter about the financial markets. He joins Big Technology Podcast for a conversation about the Federal Reserve's steep interest rate hikes, how they've harmed tech valuations, and whether the Fed might reverse course and bring the party back. Stay tuned for the second half, where we discuss the short-form video wars and the likely outcome of Elon Musk's pursuit of Twitter.
You can listen on Apple, Spotify, or wherever you get your podcasts.
Thanks again for reading. Please share Big Technology if you like it! And hit that heart if you like your robots friendly.
Questions? Email me by responding to this email, or by writing alex.kantrowitz@gmail.com
News tips? Find me on Signal at 516-695-8680
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and to proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461