Why The AI Ethics War Will Make The Content Moderation Fight Seem Tame
“Everyone’s got their knives sharpened,” says one insider.
Now that AI programs converse with us in natural language, turn our prompts into illustrations, and mimic our voices, a major conflict over their ethics is on the way.
And if you thought the content moderation fight was intense, just wait for this one.
At stake is how chatbots address political issues, how AI illustrators portray the world, and whether some applications like voice emulators should even exist. Given the scale and power of this blossoming technology, the activists won’t be subtle. They’ve had their practice fighting over human speech online, and they’ll bring that experience to this war. It could get messy quickly.
“Everyone’s got their knives sharpened,” said Sam Lessin, a venture capitalist and former Facebook executive. “At least with speech, everyone was a little bit off-kilter and didn't really get it. This one, they're like, ‘Oh shit, I've seen this game before.’ Every single lobby in the world is ready to write their letters and start their influence campaigns.”
AI’s intelligence may be artificial, but humans encode its values. OpenAI, for instance, effectively decides whether ChatGPT takes stances on the death penalty (no opinion), torture (it’s opposed), and whether a man can get pregnant (it says no). With its AI illustrator DALL-E, the organization influences what type of person the tech portrays when it draws a CEO. In each case, humans behind the scenes make decisions. And humans are influenceable.
Like content moderation, there will be some obvious, consensus ethical decisions for generative AI (you don’t want chatbots advocating for genocide, for instance) but advocates will stake their ground in the grey. “It’s a very powerful tool, and people are going to want to do a broad range of things with it to meet their own interests,” said Lessin. “If you look at how the free speech stuff played out, it will play out the same way again, just faster.”
The potential conflict areas include how AI addresses race, gender, warfare, and other thorny issues. ChatGPT, in one recent conversation, listed several benefits of Ukraine winning the war against Russia. But asked to list positive outcomes from Russia winning the war, it refused. ChatGPT also moralizes a lot. “War is a grave and devastating event, and should always be avoided if possible,” the bot said, in one typical interaction. “The best outcome is always peace and diplomatic resolution.”
Ethical decisions for generative AI are particularly high stakes because they scale. When you encode values into a chatbot, it can push those values repeatedly in conversation. When you make a content moderation decision, in most instances, it involves just one individual and one piece of content.
The best way to handle this new power is to have the bots play it as evenhandedly as possible, said Dr. Jeffrey Howard, a professor of political science at London’s UCL. “These value judgments are inescapable,” he said. “One of the value judgments could be to build in a certain kind of neutrality and impartiality.”
Ultimately, generative AI’s decentralization may let out some of the tension. While speech is relatively centralized online today, there are many developers working on generative AI. And as developers build apps with their own morals, an all-out war over the central powers’ policies may fade. But in the meantime, expect plenty of positioning, cajoling, and fighting over the ethics the big players build into their models.
60/40 Portfolio is 💀 (sponsored)
Stocks fell to end Wall Street's worst year since the 2008 financial crisis, leaving investors reeling and searching for alternatives to stocks and bonds. Supervest offers investors opportunities to build portfolios that go beyond the typical 60/40. By empowering people to invest in alternatives such as small business loans with target returns of 12%–25%, Supervest helps people build passive income and retire better.
What Else I’m Reading
I’m trying a new format with this section! There’s now a bit more commentary on each link, plus a destination so you’ll know if it’s paywalled. Hopefully, it will be more useful! Let me know what you think :)
What happens when Tesla Autopilot crashes. [NYT]
Fear, chaos, and ambition at Elon Musk’s Twitter. [The Verge]
Apple indefinitely postponed its lightweight AR glasses. [Bloomberg]
Robinhood is starting a blog. Sorry, I meant a financial news “subsidiary.” [Robinhood]
AI did a terrible job writing for CNET, which is now correcting its ‘reporting.’ [Futurism]
Semafor will buy out SBF, its biggest investor. [NYT]
What investor Fred Wilson anticipates in 2023. [AVC]
Getty Images is suing Stable Diffusion. [The Verge]
Number Of The Week
Decline in Twitter revenue, year over year. Its first major interest payment on its debt is due this month.
Quote Of The Week
“More than anything, Davos is a prophylactic against change, an elaborate reinforcement of the status quo served up as the pursuit of human progress.”
Davos Man author Peter Goodman on the annual gathering of the global elite, taking place this week.
Advertise with Big Technology?
Advertising with Big Technology gets your product, service, or cause in front of the tech world’s top decision-makers. To reach 110,000+ plugged-in tech insiders, please write alex dot kantrowitz at gmail dot com
Substacks You Might Like
We’re back with another installment of Substacks you might enjoy reading. This is a cross-promo between Big Technology and the below newsletters. Here are two you might like:
Check out Wonder Tools, a free weekly email that helps you discover & make the most of useful sites and apps. Join 17k readers in getting this independent newsletter by a former Time Mag tech reporter who is now a CUNY Journalism prof. A well-known reader recently tweeted, "One of the most useful weekly newsletters I get." Sign up for free.
Alex is a Sr Staff Engineer with more than 2 decades of software engineering experience. He asks too many questions and takes notes. Then shares the best parts here.
This Week On Big Technology Podcast: Crypto After FTX — With Kate Rooney
Kate Rooney is a star reporter at CNBC who covers crypto. She's reported deeply on crypto exchange FTX's collapse and spent time with its disgraced ex-CEO — Sam Bankman-Fried — in the Bahamas. Rooney joins Big Technology Podcast for a discussion about what happens to crypto now that its promise of 'trustless' finance has crumbled. Join us for a deep discussion of the industry's future, the responsibility of journalists covering it, and the lessons from the collapse.
You can listen on Apple, Spotify, or wherever you get your podcasts.
Thanks again for reading. Please share Big Technology if you like it! And hit that like button if you like first drafts.
Questions? Email me by responding to this email, or by writing firstname.lastname@example.org
News tips? Find me on Signal at 516-695-8680