Generative AI, Misinformation, and the Plausibility Loophole
By now, it's common knowledge that programs like ChatGPT say things that just aren't true, but why do we believe the lies so readily? The answer is F.A.B.S.
Before we get to today's main topic, some miscellaneous goodies…
This worthwhile Economist piece by Yuval Noah Harari is about how AI is effectively our first contact with an alien intelligence. (H/T Andy Maskin.)
I've mostly thought about synthetic voice in terms of audiobooks and podcast advertising, but this alarming article by Larry Magid warns how bad actors might use it in kidnapping scams. Yikes!
Author Gretchen Morgenson joined Terry Gross for a Fresh Air interview about her new book exposing the predatory behavior of private equity firms: These Are the Plunderers. The book is now on my list and sounds like a useful companion to Chokepoint Capitalism, which I wrote about last month.
Not all the goodies are end-of-days scary this week: check out Jony Ive's gorgeous new typeface "LoveFrom." My question: when can I use it in this newsletter?
This Instagram reel of a kid telling his dad a PG-13 joke (and the dad's immense pride) made me laugh.
Speaking of funny things on Instagram, "The Oakie Dokie" chronicles the very sweet friendship of a tiny baby boy and a humongous Great Dane.
Thanks to my dear friend Professor Benjamin Karney of UCLA for a timely conversation and for sharing the famous psychology article I discuss below.
Lou Paskalis, I was thinking of you when I wrote one sentence in what follows—can you guess which one?
Please follow me on Post and LinkedIn for between-issue insights and updates.
On to our top story...
Generative AI, Misinformation, and the Plausibility Loophole
Most people writing about generative AI (ChatGPT, DALL-E, Bard) focus on what the AIs can do, which is understandable since these algorithms are still new. With ChatGPT, our conversational aperture widened to include worries about how kids might cheat on their schoolwork.
Then the aperture widened again because ChatGPT suffers from "hallucinations" (the euphemism AI boosters use for "lying"): it makes stuff up and then presents its fictions so confidently that people accept them as fact.
It's that last bit that interests me. While I care that ChatGPT is, to put it mildly, an unreliable narrator, what I really want to know is why we humans accept algorithmic misinformation so easily.
I want to know how human behavior adapts because of AI, what we lose and gain, and how quickly we forget that we've changed at all because we suffer from adapt-amnesia. For example, a few issues ago, I explored how Generative AI takes away a lot of the heavy lifting in creating new things, which is a problem since humans only get better at creating by doing all that heavy lifting. You don't get to a masterpiece without making a lot of artisanal crap first.
Humans using and adapting to technology is nothing new. We've used tools to extend the scope of our endeavors since the first time a primate grasped a stick to dig up something yummy. With the speed and de-physicalization of digital technologies, though, our phenomenal adaptability can wind up harming us. Carpal Tunnel Syndrome is a handy example: computers let us type fast, we slouch into a comfy position to free up our hands, and we wind up hurting our wrists so badly that we can't type at all.
Generative AI creates a Carpal Tunnel Syndrome of the mind.
What do I mean? Generative AI makes things that are plausible rather than likely, and this is bad because, as psychologist Daniel Kahneman taught us in Thinking, Fast and Slow, we humans are terrible at weighing how likely things are. Instead, we choose the best story, the most plausible narrative, rather than the most probable one. (Kahneman's famous "Linda problem" is the classic demonstration: people judge "bank teller who is active in the feminist movement" to be more probable than plain "bank teller," even though a combination can never be more probable than one of its parts, because the fuller description makes a better story.)
Generative AI, in other words, exploits a Plausibility Loophole that works like a backdoor in an otherwise secure computer system. (For those of you of a certain vintage, think of the movie WarGames.)
The Plausibility Loophole is made up of three parts: Familiarity, Automatic Belief, and Satisficing, or F.A.B.S.
Familiarity: ChatGPT and other generative AI technologies create things (narratives, pictures, music) by chomping, Pac-Man style, through everything available online and then recombining the bits into newish things based on algorithmic probability.
This is high-speed collage rather than original thinking (although it's getting harder to tell the difference), and since collage is made up of images that we recognize, those recombined images slip through our "wait, is this real?" filters. (In Kahneman's lingo, they bypass skeptical System 2 via more credulous System 1.)
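For the curious, here's a toy sketch of that recombination machinery: a model picks each next word because it's probable given the words before it, not because the resulting sentence is true. The probability table below is invented for illustration; a real model learns its probabilities from billions of examples and conditions on far more context, but the principle is the same.

```python
import random

# Toy next-word probabilities. The words and numbers are invented for
# illustration; a real model learns these from its training data and
# conditions on the whole preceding text, not just the last word.
NEXT_WORD = {
    "a":      [("recent", 0.6), ("famous", 0.4)],
    "recent": [("study", 1.0)],
    "famous": [("study", 1.0)],
    "study":  [("showed", 0.7), ("proved", 0.3)],
    "showed": [("that", 1.0)],
    "proved": [("that", 1.0)],
}

def generate(start: str, max_words: int = 6) -> str:
    """Build a phrase by repeatedly sampling a plausible next word."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        candidates, weights = zip(*NEXT_WORD[words[-1]])
        # Choose by probability: plausibility is the only criterion.
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("a"))  # e.g. "a recent study showed that"
```

Notice what's missing: nothing in that loop ever checks whether any such study exists. Fluency in, fluency out.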
Automatic Belief: There's a famous psychology paper by Daniel Gilbert and colleagues with a hard-to-parse double-negative title: "You Can't Not Believe Everything You Read." Unpacked, the paper argues that we don't first understand a new idea and then choose whether or not to believe it (René Descartes' position). Instead, understanding and belief go hand in hand: we must believe a statement to understand it, after which we can unbelieve it (Benedict Spinoza's position), but that's hard.
The authors set up intricate experiments that showed Spinoza was right. The experiments also showed that unbelieving (a.k.a. changing our minds) becomes a lot harder when a) we get interrupted right after we learn and believe something that may not be true, b) we're under time pressure while we're learning and believing that new thing, or c) both.
Chronic interruption and time pressure perfectly describe moment-to-moment human experience in the digital age with multitasking, notifications bleeping, deadlines looming, and most of us suffering from incurable cases of "too many tabs" disease. Years ago, Linda Stone coined the phrase "continuous partial attention" to capture this.
Not only is what AI creates fine-tuned to slip through our skepticism, but it also comes to us when our defenses are down and we're too scattered to say, "hey, wait a minute..."
It gets worse: we don't really care about most of our decisions.
Satisficing: This is the great 20th-century polymath Herbert A. Simon's portmanteau of "satisfy" and "suffice," and it's the opposite of optimizing.
When we optimize, we lean in, do the cost-benefit analysis, and then make a painful choice because the decision is important: an expensive purchase like a car or house, a high-stakes choice like which college to attend or whom to marry. We don't optimize many decisions in life, which is a good thing because optimizing is exhausting. Also, a lot of the time we don't know that we should have optimized a decision until it's way too late. This is called life.
In contrast, when we satisfice, we're looking for a good-enough option and we don't want to do a lot of work to find it. We satisfice most of our decisions. Where should we grab dinner? Which shoes should I wear? "What should we watch?" "I don't know—let's see what's on..."
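If you like seeing ideas as algorithms, here's my own sketch of the difference (mine, not anything from Simon's papers): optimizing scores every option and keeps the best; satisficing walks the list and grabs the first option that clears a good-enough bar.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def optimize(options: Iterable[T], score: Callable[[T], float]) -> T:
    """Examine every option and return the best. Thorough but exhausting."""
    return max(options, key=score)

def satisfice(options: Iterable[T], score: Callable[[T], float],
              good_enough: float) -> Optional[T]:
    """Return the first option that clears the bar, then stop looking."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing cleared the bar

# Dinner, decided both ways (made-up ratings):
restaurants = {"taqueria": 3.9, "diner": 4.1, "bistro": 4.8}
print(satisfice(restaurants, restaurants.get, good_enough=4.0))  # diner
print(optimize(restaurants, restaurants.get))                    # bistro
```

Satisficing stopped at the diner and never even considered the bistro. Most of life works exactly like that, and mostly that's fine.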
People don't use ChatGPT because they're looking for the perfect option for a decision that they care a lot about. ChatGPT is always a shortcut. "My stupid boss needs this slide deck tomorrow morning." "I need to sound smart about an article I don't have time to read." "I have to pull an op-ed out of this white paper, and I didn't write the white paper."
F.A.B.S. combines low-stakes decisions, chronic distraction, and easy-like-Sunday-morning familiar ideas to create the Plausibility Loophole through which misinformation speeds like a squirrel that just saw an off-leash Pit Bull.
Generative AI's ability to make more of everything exacerbates the problem because we're so busy triaging massive amounts of information that we don't have time to think about the quality of that information.
We all know what to do to close the Plausibility Loophole: stop, focus, think about whether what you're seeing makes sense, ask yourself, "is this any good?" Don't just like, share, or forward things...
The problem is that asking people to do these things is like asking them to drive 35 mph in the fast lane: it's uncomfortable, annoys other people, and might lead to a big pile-up.
Perhaps just knowing about the Plausibility Loophole is a start.
Thanks for reading. See you next Sunday.