The Three Faces Of Generative AI
After the AI field spent years working to make its models smarter, three distinct use cases are emerging.
Nearly three years after ChatGPT’s debut, generative AI is finally settling into a core set of use cases. People today use large language models for three central purposes: 1) getting things done, 2) developing thoughts, and 3) love and companionship.
The three use cases are extremely different, yet all tend to take place in the same product. You can ask ChatGPT to do something for you, have it make connections between ideas, and befriend it without closing the window.
Over time, the AI field will likely break out these needs into individual products. But until then, we’re bound to see some continued weirdness as companies like OpenAI determine what to lead with.
So today, let’s look at the three core uses of generative AI, touching on the tradeoffs and economics of each. This should provide some context around the product decisions modern AI labs are grappling with as the technology advances.
Agent
AI research labs today are obsessed with building products that get things done for you, or ‘agentic AI’ as it’s known. Their focus makes sense given they’ve raised billions of dollars by promising investors their technology could one day augment or replace human labor.
With GPT-5, for instance, OpenAI predominantly tuned its model for this agentic use case. “It just does stuff,” wrote Wharton professor Ethan Mollick in an early review of the model. GPT-5 is so tuned for agentic behavior that, whether asked or not, it will often produce action items, plans, and cards with its recommendations. Mollick, for instance, saw GPT-5 produce a one-pager, landing page copy, a deck outline, and a 90-day plan in response to a query that asked for none of those things.
Given the economic incentive to get this use case right, we’ll likely see more AI products default toward it.
Thought Partner
As large language models become more intelligent, they’re also developing into thought partners. LLMs are now (with some limitations) able to connect concepts, expand ideas, and search the web for missing context. Advances in reasoning, where the model thinks for a while before answering, have made this possible. And OpenAI’s o3 reasoning model, which disappeared upon the release of GPT-5, was the state of the art for this use case.
The AI thought partner and agent are two completely different experiences. The agent is searching for efficiency and wants to move you on to the next thing. The thought partner is happy to dwell and make sure that you understand something fully.
The ROI on the thought partner is unclear, though. It soaks up a lot of computing power by thinking at length, and the result is less economically tangible than a bot doing work for you.
Today, with o3 gone, OpenAI has built a thinking mode into GPT-5, but it still tends to default toward agentic uses. When I ask the model about concepts in my stories, for instance, it wants to rewrite them and make content calendars rather than think through the core ideas. Is this a business choice? Perhaps. But as the cost of serving the thought partner experience comes down, expect dedicated products built for this need.
Companion
The most controversial (and perhaps most popular) use case for generative AI is the friend or lover. A string of recent stories — some disturbing, some not — show that people have put a massive amount of trust and love into their AI companions. Some leading AI voices, like Microsoft AI CEO Mustafa Suleyman, believe AI will differentiate entirely on the basis of personality.
When you’re building an AI product, part of the trouble is some people will always fall in love with it. (Yes, there is even erotic fan fiction about Clippy.) And unless you’re fully aware of this, and building with it in mind, things will go wrong.
Today’s leading AI labs haven’t attempted to sideline the companion use case entirely (they know it’s a motivation for paying users), but they’ll eventually have to sort out whether they want it, and whether to build it as a dedicated experience with more concrete safeguards.
In Sum
As AI companies decide which of these use cases to pursue, I expect to come back to this framework when evaluating their choices. My bet is that the companies that know most clearly which use case they’re pursuing will end up winning the race. It’s going to be a focus game.
Cloudera’s EVOLVE25: Explore the Potential of Bringing AI to Your Data – Anywhere (sponsor)
The future runs on AI, and AI runs on data. To stay competitive, enterprises need secure access to all their data, wherever it resides: across public clouds, data centers, and the edge. That’s exactly what Cloudera delivers.
As the trusted data and AI platform for the world’s largest organizations, Cloudera enables businesses to bring AI to their data—safely, efficiently, and at scale. At EVOLVE25 in New York City, Cloudera will showcase how leading enterprises are unifying and managing data to drive intelligent transformation.
Hear directly from customers and experts on breaking down silos, running AI workloads securely across environments, and accelerating the shift from applications to intelligent agents. Discover how organizations are reducing risk, maximizing ROI, and building AI responsibly on Cloudera’s proven open-source foundation.
Join Cloudera in New York City on September 25 to shape the future of data and AI. Register here.
What Else I’m Reading, Etc.
Apple is leaning toward using Gemini in Siri [Bloomberg]
Google can keep Chrome and Android in a favorable antitrust ruling [TechCrunch]
JetBlue will use Amazon’s satellite internet [WSJ]
Polymarket is ready to come to the United States [Bloomberg]
A U.S. Navy SEAL mission in North Korea went disastrously wrong [New York Times]
This Week On Big Technology Podcast: Brain Computer Interface Frontier: Movement, Coma, Depression, AI Merge
Dr. Ben Rapoport and Michael Mager are the co-founders of Precision Neuroscience, a company building a minimally invasive, high-resolution brain-computer interface. The two join Big Technology to discuss the modern-day applications of BCIs and the frontiers of the technology, including computer control, stroke rehab, decoding consciousness in coma patients, AI-powered neural biomarkers for depression, and the long-term prospect of merging human cognition with machines. Tune in for a fascinating look at the potential of one of Earth’s most promising technologies.
You can listen on Apple Podcasts, Spotify, or your podcast app of choice
Thanks again for reading. Please share Big Technology if you like it!
And hit that Like Button to put your face next to the heart icon on the post. Hopefully we’ll exceed three :)
My book Always Day One digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.
Questions? News tips? Email me by responding to this email, or by writing alex@bigtechnology.com