What Is Sora For?
OpenAI's new video generator is high on cool factor but low on utility, at least for now.
On the ‘featured’ tab of its new Sora video generator, OpenAI highlights a bunch of standout AI video clips. There’s a panda riding the subway, an alien smoking a cigarette, a paper boat navigating a stormy sea, and a golden statue winking at you.
The videos are stunning outputs of a marvelous new technology, but who knows what they're for. Eye-catching and creative, yet too low-quality to insert into a commercial production, Sora clips exist in a liminal state. Yes, the product understands at least some physics, and AI video may be the tech breakthrough of the year. But like many generative AI products so far, it's not entirely clear what we're supposed to do with it.
AI video will certainly improve, but Sora and its counterparts seem high on cool factor and low on utility, at least for now. The problem: There is no natural user. With ChatGPT, coders and students saw immediate value, and AI text generation has since expanded to more use cases. Image generators like DALL-E, though, haven't broken through in the same way, struggling to find natural applications for their richer media format. Sora, similarly, isn't good enough to generate clips for feature films, or even commercials, and is a bit too intense to be useful to regular people. So its use case remains fuzzy.
In a blog post announcing Sora’s general release this week, OpenAI said it hopes it “will enable people everywhere to explore new forms of creativity, tell their stories, and push the boundaries of what’s possible with video storytelling.” But as someone who just learned how to edit video, I can attest that doing anything with video is hard. Even with Sora’s unbelievable power in everyone’s hands, it’s hard to imagine reworking the internet’s 90-9-1 rule, where 90% of people consume, 9% distribute, and 1% create.
Looking through Sora's 'recent' tab shows interest in, but also bafflement over, what to do with the service. One user put a dog in a driver's seat, another put a cat in a sailor hat, another showed a horse strolling through a graveyard at night. There are lots of animals. And lots of women, some prompted with a creepy amount of detail. The videos seem to let users escape to other worlds, or "push boundaries," as OpenAI suggests. But once you prompt a few times, the compelling reason to come back — and pay — becomes harder to find. How many puppies driving cars do you need to see?
Sora will, of course, find some valuable applications. It will allow movie directors to plot out scenes before shooting them. It will enable fashion brands to see models wear their work on a runway before creating it. And it will help brand managers cook up funky posts for Instagram. Yes, these AI videos will likely fill our social media feeds like Shrimp Jesus has filled Facebook.
But Sora is also debuting at a time when determining what's real is harder than ever, and the service and its peers will add to the confusion. This past week, I found the UnitedHealthcare shooting story harder to follow than almost any previous major story. There was a fake Substack and loads of fake information online. But the fake videos of the shooter were most puzzling. Multiple users generated fake AI videos of the shooter from surveillance footage. And while some explicitly said the clips were AI, others shared videos they insisted were not. It all contributes to a sense of reality apathy, where telling true from false is so hard you just give up.
To OpenAI's credit, Sora's safeguards are pretty good. The service wouldn't let me create videos from images of people, and it blocked my prompts after I tried to get it to generate videos of Trump dancing and the UHC shooter getting arrested at McDonald's. OpenAI declined to make a member of the Sora team available for an interview.
Perhaps by focusing so much on the video generation aspect I’m missing the point. Sora can create cool videos, but the basic premise of the product is to enhance artificial intelligence’s understanding of the real world beyond what’s depicted in text. “Sora serves as a foundation for AI that understands and simulates reality,” OpenAI wrote in its announcement, “an important step towards developing models that can interact with the physical world.”
That interaction with the real world could mean applying Sora's intelligence in robotics, or perhaps helping models understand the planet they're communicating about. If that turns out to be the case, it may be exactly what Sora is for.
Security Questionnaires Report: The Impact of Automation (Sponsor)
Security questionnaires are a massive burden. Almost every customer or prospect requires them, and they can be lengthy, repetitive, and full of manual back-and-forth that distracts security teams from actually running their security program.
But, using automation, industry-leading companies complete security questionnaires up to 5x faster. No more clunky spreadsheets or long email chains. Automation is disrupting the status quo—with proven results.
In this report from Vanta, you’ll learn:
How automation is being used to answer security questionnaires
How much time real companies save by automating security questionnaires
How often teams do—and do not—have to step in to review auto-generated answers
Advertise on Big Technology?
Reach 150,000+ plugged-in tech readers with your company’s latest campaign, product, or thought leadership. To learn more, write alex@bigtechnology.com or reply to this email.
What Else I’m Reading, Etc.
Twitter co-founder Ev Williams built a new social networking app [New York Times]
Google introduces NotebookLM for enterprises [TechCrunch]
The UnitedHealthcare shooter's online trail doesn't actually say much about him [New York]
Netflix's one-year parental leave policy really means six months at most [WSJ]
The Love Is Blind cast should be considered employees, not contractors [New York Times]
Q&A with Scale AI founder Alexandr Wang from this week’s podcast [Big Technology]
Number of The Week
$1 million
Meta's donation to Donald Trump's inauguration fund, coming just a few months after Zuckerberg told associates he wanted to distance himself from politics. Jeff Bezos and Sam Altman soon followed with similarly sized donations.
This Week on Big Technology Podcast: AI Predictions for 2025: Geopolitics, Agents, and Data Scaling — With Alexandr Wang
Alexandr Wang is the CEO and co-founder of Scale AI. He joins Big Technology Podcast to share his predictions for AI in 2025, including insights about emerging geopolitical drama in the AI field, AI agents for consumers, why data may matter more than computing power, and how militaries worldwide are preparing to deploy AI in warfare. We also cover quantum computing and why Wang believes we're approaching the current limits of what massive GPU clusters can achieve. Hit play for a mind-expanding conversation about where artificial intelligence is headed and how it will transform our world in the coming year.
Thanks again for reading. Please share Big Technology if you like it!
And hit that Like Button; it's a very useful piece of technology, after all.
My book Always Day One digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.
Questions? News tips? Email me by responding to this email or by writing alex@bigtechnology.com. Or find me on Signal at 516-695-8680.