Scarier than Skynet: AI and Persuasion
Most dystopian fantasies concern monsters we can see conquering us, but with new technologies will we even know if we’ve been conquered?
Before we get to today's main topic, some miscellaneous goodies…
Perry Bible Fellowship comics. If you’re fond of The Far Side or Whyatt, then check out the Perry Bible Fellowship’s funny, dirty, anarchic cartoons. They’re celebrating their 15th anniversary.
Conspiracy Rock! My son shared this 1998 SNL skit parodying Schoolhouse Rock. It’s great, but don’t believe the conspiracy theories about it being banned. Snopes has the scoop.
I cried: Last month’s Superman: Son of Kal-El #17 contains a beautiful coming out story. Jon Kent, the younger Superman, is reluctant to tell his father, Clark, that he is in a gay relationship. Clark already knows, and talks it over with his father, Jonathan, who encourages patience. When Jon and Clark finally talk, it’s an amazing conversation where two physically invulnerable beings are emotionally vulnerable with each other.
Politics: While some folks are upset that Arizona Senator Kyrsten Sinema has gone independent, to me the timing was revelatory: she waited until Senator Warnock won re-election and continued Democratic control was secure.
Please follow me on Post and/or on LinkedIn for between-issue insights and updates.
On to our top story...
Scarier than Skynet: AI and Persuasion
You can tell a lot about a culture by its dystopias: its fantasies of fear. When you have dueling fantasies, you can tell even more by what they agree on and what they miss.
Our most common dystopias are still analog: an enemy takes over, and we fight or we don’t, but everybody knows who the good guys and bad guys are. The classic example is George Orwell’s 1984 (published in 1949), which shows how a regime takes over both the world and the minds of its citizens through constant surveillance: “Big Brother is Watching You.” Margaret Atwood’s The Handmaid’s Tale (1985) is similar, but in Atwood’s Gilead a male regime enslaves women.
On the surface, it looks like the technological versions of these dystopias are digital, but they’re still analog: in The Terminator movies (starting in 1984), Skynet is a machine regime seeking to eradicate humans. In The Matrix movies (starting in 1999), the machine regime harvests humans as Duracell-style batteries, keeping them docile inside a VR world. In both, there is still a clear dichotomy: good guys versus bad guys.
The real digital dystopia happens when you don’t know you’ve been conquered and you don’t know that any alternatives to captivity exist.
In Amusing Ourselves to Death (1985), Neil Postman pointed out that Aldous Huxley’s Brave New World (1932), where citizens embrace the technologies of their oppression, more accurately predicted how technological dystopias would work:
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture. (xxi)
Side Note 1: this passage gained spooky oomph for me during the FTX debacle when we learned that founder, former CEO, and possible future jailbird Sam Bankman-Fried “would never read a book.” (BTW, “spooky oomph” would be a great band name.)
Side Note 2: I wrote about Postman/Orwell/Huxley at greater length back in June.
Both Huxley and Orwell wrote decades before the internet, so neither could imagine a world where what you see is different from what everybody around you sees.
That’s our world today.
What does this have to do with AI?
AI is neither inherently good nor inherently bad: it’s a tool. Like all tools, people can use AI to transform the world in good ways and in bad ways. In what follows, I’m not making any general claims about AI: I’m talking about one disturbing future use case.
If you follow technology news even distantly, then over the last few days you’ve seen articles about ChatGPT (the AI that generates more than passable prose almost instantly). If you’re on any social media platform where users have profile pictures, then you’ve also seen people post dramatic new pics created by Lensa, an insanely popular AI program that takes your photos and transmogrifies them into “magic avatar” versions of you as a superhero, Biblical patriarch, astronaut, explorer, etc.
Lensa has some indigestion-prompting problems around user privacy (you don’t have any) and also around misogyny and racism (do we really need to see Amelia Earhart nude in bed?). Even scarier is how many people clicked “OK” (because it’s just a toy, right?), paid a few bucks, and got back magic avatars to post without realizing what they were giving away.
The combo-platter of ChatGPT and Lensa shows us where AI-powered advertising is headed over the next few years, particularly in logged-in environments like streaming or digital video where the service knows who you are and what you’re watching.
As you watch a show, AIs will hold a real-time auction for the next ad slot, much as Google already auctions its sponsored search results today.
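To make the mechanics concrete, here is a minimal sketch of the kind of second-price auction historically used for sponsored search. The advertiser names and bid amounts are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # dollars bid for this single impression (hypothetical)

def run_auction(bids: list[Bid]) -> tuple[str, float]:
    """Highest bidder wins but pays the runner-up's price (second-price rule)."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    return ranked[0].advertiser, ranked[1].amount

# Hypothetical bidders competing for the next ad slot in your stream.
slot_bids = [Bid("Tide", 0.42), Bid("UnknownBrandCo", 0.65), Bid("Subaru", 0.51)]
print(run_auction(slot_bids))  # ('UnknownBrandCo', 0.51)
```

The second-price design rewards honest bidding: since the winner pays only what the runner-up offered, advertisers can simply bid what an impression is actually worth to them.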
Tomorrow, the winner of that auction (maybe a brand you recognize like Tide, maybe something you’ve never heard of before) won’t have a ready-made ad to insert. Instead, the algorithm will have a portfolio of assets that it will instantaneously combine to generate the most persuasive version of that ad for you.
If the algorithm knows, say, that you’re fond of George Clooney movies, then it might use a synthesized version of Clooney’s voice to narrate the ad. (Synthesized celebrity voices are already here, although not in real-time.)
If you’re watching YouTube, and if Google knows from your recent search history that you’re in-market for a four-door Asian sedan, then the ad generated for you might show a person who looks like you, happily driving a Subaru (or a Toyota, or…) through a neighborhood that looks like yours, going to places that are like the places you go (which Google knows from your use of Google Maps). If Google knows you ski, then it might show a Toyota Highlander driving through snow towards the slopes.
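Here is a hedged sketch of what “combining a portfolio of assets” might look like. The asset lists, viewer profile, and scoring function are all hypothetical stand-ins for the far richer models a real system would use.

```python
import itertools

# Invented example assets; a real portfolio would hold thousands per advertiser.
voices = ["neutral_narrator", "clooney_style_voice"]
settings = ["city_street", "snowy_mountain_road"]
vehicles = ["sedan", "suv"]

def score(combo: tuple, profile: dict) -> int:
    """Toy relevance score: one point per asset matching a viewer interest."""
    return sum(1 for asset in combo if asset in profile["interests"])

def assemble_ad(profile: dict) -> tuple:
    """Pick the asset combination that scores highest for this viewer."""
    return max(itertools.product(voices, settings, vehicles),
               key=lambda combo: score(combo, profile))

# A viewer the service believes skis and likes Clooney movies (hypothetical).
viewer = {"interests": {"clooney_style_voice", "snowy_mountain_road", "suv"}}
print(assemble_ad(viewer))
# -> ('clooney_style_voice', 'snowy_mountain_road', 'suv')
```

A real system would score combinations with a learned model rather than this toy count, but the shape of the problem is the same: search over assets, maximize predicted persuasion.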
These are the kinder, gentler versions of algorithm-driven advertising. The truly scary ones come when the algorithm knows how you’re feeling at that moment, because of the biometric data coming off your smartwatch or because a camera built into the TV can see you wince when your back hurts… and then the ads change accordingly.
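To spell out what “change accordingly” could mean, here is a deliberately crude sketch; the signal names, thresholds, and spot names are pure invention.

```python
# Hypothetical: the generated ad reacts to live biometric signals.
def pick_variant(heart_rate_bpm: float, wince_detected: bool) -> str:
    if wince_detected:
        return "back_pain_relief_spot"  # the camera saw you wince
    if heart_rate_bpm > 100:
        return "calming_spa_spot"       # smartwatch says you're stressed
    return "default_spot"

print(pick_variant(heart_rate_bpm=72.0, wince_detected=True))
# -> back_pain_relief_spot
```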
Near as I can tell, the biggest hurdle for algorithmic advertising comes when more than one person is watching a show in the same room. How does the algorithm optimize if La Profesora (my wife Kathi) and I are watching a show together?
Is there anything wrong with this?
When we think something is common knowledge, it becomes easier to believe. So hyper-personalized communications that look like mass-culture common knowledge are dangerous.
The ethical issue around hyper-personalized advertising isn’t the hyper-personalization: it’s the information imbalance between the advertiser and the audience. It’s the fact that we don’t know that the ad has been personalized. We think that everybody is seeing the same ad, that the message it conveys is common knowledge, and that therefore the idea the ad is selling is thinkable.
When it comes to buying toothpaste or detergent or a pair of blue jeans, this isn’t a big deal. However, as we saw in the Cambridge Analytica scandal around the 2016 election, the stakes get much higher when the product is a candidate or an ideology. See also anti-vaxxers, election deniers, and more.
Algorithmically generated ads look like a group is talking when it’s only a program.
Groups are persuasive.
For months, I’ve been looking for a documentary I saw back in college about how a young man gets sucked into the Unification Church cult run by Sun Myung Moon: the Moonies. I finally found it in the Internet Archive! Moonchild (1983) is by filmmaker Anne Makepeace.
This short movie (under an hour) shows how the cult surrounds prospective members with people who are already members, isolating them from people who aren’t part of the cult. Everybody in the group is working together, but the prospective member does not know that. There is a total information imbalance.
We humans have a strong urge to conform to our groups, no matter how arbitrary those groups might be (see the unethical Stanford Prison Experiment from the 1970s).
In Moonchild, it takes a dozen or so people to draw one prospective member into the cult because old-style cults are analog. Algorithms like the ones I’m imagining around future ads will be able to do this at scale in a brave new world of digital persuasion.
What can we do?
I’m still thinking this through, and I would love to hear suggestions from you.
One idea is regulatory. The federal government should demand two things: 1) that any company hyper-personalizing ads or content disclose that fact to the audience; 2) that audiences have a perpetual “opt out” from hyper-personalization.
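In code, those two rules could be as simple as a check at the ad-serving layer. This sketch is purely illustrative; the opt-out registry and field names are my inventions, not any real standard or law.

```python
# Hypothetical enforcement of the two proposed rules at ad-serving time.
OPT_OUT_REGISTRY = {"viewer_123"}  # rule 2: a perpetual opt-out, honored forever

def serve_ad(viewer_id: str, generic_ad: dict, personalized_ad: dict) -> dict:
    if viewer_id in OPT_OUT_REGISTRY:
        return generic_ad  # opted-out viewers get the one-size-fits-all ad
    ad = dict(personalized_ad)
    ad["disclosure"] = "This ad was personalized for you."  # rule 1: disclose
    return ad
```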
With the incoming Congress likely to be gridlocked, though, hope for this sort of action is dim.
It’s unfair to ask individuals to stay vigilant against the efforts of trillion-dollar corporations, but I’ll still ask. Think before you click. Double-check before you share. And please don’t presume the worst of other people even if they don’t share your views.
As the kids sing in High School Musical, we’re all in this together.
Thanks for reading. See you next Sunday.