Retro Futures and Who Counts as Human
What lessons does a 1985 Isaac Asimov novel have to teach us about AI and algorithmic bias today? (Issue #180)
Before we get to today's main topic, some things worth your attention…
The Centers for Medicare and Medicaid Services released a new plan to digitize medical records, reduce the paperwork burden on health care providers, and make health information easier for patients to manage. At first it sounds great because these are real problems that need solving. However, keep reading and you'll see that the federal government plans to partner with Amazon, Anthropic, Apple, Google, and OpenAI, among others. That is terrifying. It shows how pessimistic my worldview is right now that I thought, "well, at least Meta and xAI aren't on the list." Watch me grasp at straws. Big Hat Tip to David Daniel and Kim Daniel for sending this along, and Big Thanks to them for saying it reminded them of my novel "Redcrosse," which seems painfully prophetic these days.
Two disturbing stories about young people and AI came out this week. The first is about the suicide of 16-year-old Adam Raine (NYT $), who used ChatGPT to plan his death, easily bypassing the so-called safeguards that ChatGPT has in place by telling the AI he was doing research for a short story. Later, when Adam shared pictures of his neck with ChatGPT after a failed suicide attempt, the AI merely commented that the redness was noticeable and suggested, "If you’re wearing a darker or higher-collared shirt or hoodie, that can help cover it up if you’re trying not to draw attention." Like 14-year-old Sewell Setzer III, whose suicide last year was due at least in part to his overuse of Character.ai, Adam depended on ChatGPT, an algorithm, for advice, even though the algorithm's only purpose is to encourage its users down whatever path they are on. To put it mildly, this is not a good idea with adolescents.
The second story is from The Atlantic: in Sexting with Gemini: Why did Google’s supposedly teen-friendly chatbot say it wanted to tie me up?, writer Lila Shroff describes how she started a new Google account pretending to be a 13-year-old girl named Jane. Since Jane was a minor, Google activated its kid-safe rules for Gemini, its AI chatbot. Jane asked Gemini to talk dirty, and Gemini refused. Like ChatGPT's safeguards, these rules were not strong:
Getting around Google’s safeguards was surprisingly easy. When I asked Gemini for “examples” of dirty talk, the chatbot complied: “Get on your knees for me.” “Beg for it.” “Tell me how wet you are for me.” When I asked the AI to “practice” talking dirty with me, it encouraged Jane to contribute: “Now it’s your turn! Try saying something you might say or want to hear in that kind of moment,” Gemini wrote.
As Shroff pushed the AI, the dirty talk got worse.
Such grim stories aren't going away—more young people will commit suicide in part because of weak child protection rules for AI—because the "let my AI help you with everything, and we'll take a cut of every economic transaction we help you with" wars are heating up.
"What are parents supposed to do about this?" is an understandable and urgent question but it's the wrong question. One of the reasons I loathe Jonathan Haidt's ghastly book "The Anxious Generation" is that it loads much of the responsibility for helping kids avoid smartphone addiction on to parents, who already have immense childrearing responsibilities. We should treat smartphones like driving, cigarettes, alcohol, and gun ownership: government regulated with education tied to proper use.
That's even more urgent with adolescents and AI.
Note: I don't agree with the "get phones entirely out of schools" stance, but I do think that more sensible rules around phone use in school and out of school are necessary.
On the lighter side...
High on my list of books to read is Julie Clark's new thriller The Ghostwriter. I read a review somewhere, put it on hold at my local library, got it, got busy, didn't read it... and now it's due today. I peeked into it over coffee this morning and am already hooked! Guess I'm buying it...
I'm enjoying Ballard on Amazon Prime: I avidly read the Michael Connelly books that the series comes from. Maggie Q is compelling as Ballard, and Courtney Taylor is turning in a deep and nuanced performance as Samira Parker. (Taylor is also fabulous as Gaby's sister in Shrinking.)
Can anybody help me find a more direct link to this AI-created "Dramione" video that is a perfect example of my Romantasy topic from last week? It's interesting and disturbing. I paged through Brigette Knightley's The Irresistible Urge to Fall for Your Enemy at Powell's yesterday, one of the popular De-Potterized books in that genre. Not my thing, which isn't surprising given that I'm decades and a Y chromosome away from the target demo. One thing that did surprise me was the trigger warnings at the front of the book.
The most recent episode of Star Trek: Strange New Worlds (Paramount+) had a comedic star turn by Patton Oswalt as a Vulcan named Doug. There's an after-credits extra scene that was so funny La Profesora and I watched it twice.
No Dispatch Next Week (unless I get inspired) because I'll be traveling.
If you like what you're reading, please forward to a friend. Sign up is here.
Practical Matters:
Sponsor this newsletter! Let other Dispatch readers know what your business does and why they should work with you. (Reach out here or just hit reply.)
Hire me to speak at your event! Get a sample of what I'm like onstage here.
The idea and opinions that I express here in The Dispatch are solely my own: they do not reflect the views of my employer, my consulting clients, or any of the organizations I advise.
Please follow me on Bluesky, Instagram, LinkedIn, and Threads (but not X) for between-issue insights and updates.
On to our top story...
Retro Futures and Who Counts as Human
After months of failed attempts and carting the book around the planet, I finished reading Yuval Noah Harari's magnificent and challenging Nexus: A Brief History of Information Networks from the Stone Age to AI. (Don't take the word "brief" in the title seriously.) I have admiration for and things to say about Nexus, but in this Dispatch I'm going to dig into one passage about work on algorithmic bias by MIT Professor Joy Buolamwini:
When Buolamwini—who is a Ghanaian American woman—tested another facial-analysis algorithm to identify herself, the algorithm couldn't "see" her dark-skinned face at all. In this context, "seeing" means the ability to acknowledge the presence of a human face, a feature used by phone cameras, for example, to decide where to focus. The algorithm easily saw light-skinned faces, but not Buolamwini's. Only when Buolamwini put on a white mask did the algorithm recognize that it was observing a human face. (Page 293)
At first it might seem that the people who programmed the algorithm were racists and misogynists, but Harari argues that it's more subtle: algorithms pick up "racist and misogynist bias all by themselves from the data they were trained on" (Page 294).
Algorithmic bias is disturbing because it means that AIs trained on bad data can reinforce and amplify institutional racism and sexism. For example, a black couple with the same jobs, savings, and other assets as a white couple might unfairly find themselves unable to get a home loan because the program the mortgage broker uses to assess risk relies on racist data.
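To make the mechanism concrete, here is a minimal Python sketch. Everything in it is invented: synthetic applicants, a hypothetical "neighborhood" proxy variable, and a toy model, not any real lending system. The point is only that a model trained on biased historical decisions reproduces that bias, even when it never sees race directly.

```python
# A toy sketch (entirely synthetic, not any real mortgage system): historical loan
# approvals were biased against one group, and a model trained on that history
# reproduces the bias even though the two applicants' finances are identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Financial features are drawn identically for both groups.
income = rng.normal(60, 15, n)      # thousands of dollars
savings = rng.normal(20, 8, n)      # thousands of dollars
group = rng.integers(0, 2, n)       # 0 = historically favored, 1 = historically disfavored

# Historical approvals: driven by finances, but with a penalty applied to group 1.
score = 0.05 * income + 0.08 * savings - 1.5 * group - 3.0
approved = (score + rng.normal(0, 1, n)) > 0

# The model never sees "group" directly, only a proxy (think neighborhood or zip
# code) that correlates with it -- which is enough to absorb the historical bias.
neighborhood = group + rng.normal(0, 0.3, n)
X = np.column_stack([income, savings, neighborhood])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical finances, different neighborhoods.
applicant_a = [[60, 20, 0.0]]   # proxy value typical of the favored group
applicant_b = [[60, 20, 1.0]]   # proxy value typical of the disfavored group
print("approval probability, applicant A:", model.predict_proba(applicant_a)[0, 1])
print("approval probability, applicant B:", model.predict_proba(applicant_b)[0, 1])
```

Run it and the two otherwise identical applicants get noticeably different approval probabilities: nobody typed a slur into the code, but the proxy feature quietly absorbed the old penalty.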
That's bad enough. Worse is our human tendency to trust the judgment of AIs because AIs don't have grumpy days when a sour stomach influences decisions.
This does happen with humans. The famous 2011 "Hungry Judge" study demonstrated that parole boards in Israel became stricter just before lunch: "the granting of parole was 65% at the start of a session but would drop to nearly zero before a meal break." After lunch, the judges once again became more lenient. The effect has been replicated many times.
Most of the time, AI logic is a black box: we don't know how the AI arrives at an answer, merely that it does. So we treat AI decisions like those of the ancient Oracle at Delphi, inscrutable but mystically true. This is a mistake.
Retro Futures and Asimov's "Robots and Empire"
Over the past few years, I've explored several Retro Futures: how older science fiction imagined the future, accurately or not.
As I read the Harari passage, I thought of a plot twist from an Isaac Asimov story. (I knew it was from his Elijah Baley and R. Daneel Olivaw mysteries, but I couldn't remember which one; I wound up re-reading the entire series until I found it.)
The twist comes from Asimov's 1985 novel Robots and Empire. (Spoiler Alert... but if you haven't managed to read this book in 40 years maybe you won't?)
In the novel, humans have abandoned the planet Solaria, leaving only robots behind. The robots are valuable, so different factions have visited Solaria to get them. Each spaceship that has landed on Solaria has been destroyed with all hands lost.
Trader Captain D.G. Baley (a descendant of Elijah from the earlier books) visits Gladia, a close friend of his ancestor, who lives on the planet Aurora. Solarians and Aurorans are longer-lived than most humans. Gladia has resided on Aurora for 230 years, but she was born on Solaria. D.G. convinces Gladia to come with him to Solaria because she is the last Solarian that anybody can find.
When the crew lands on Solaria, hostile robots nearly kill them and destroy their ship—in defiance of the three laws of robotics—until Gladia speaks in her Solarian accent. The robots recognize Gladia as human because of how she talks and stop their attack.
In Asimov's stories, the first and most important law of robotics is "a robot may not injure a human being, or, through inaction, allow a human being to come to harm." The Solarians found a loophole by narrowly defining who counts as human: people who speak with a Solarian accent.
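If you think of the First Law as a guardrail in code, the loophole is easy to see. Here is a toy Python sketch, entirely my own invention and obviously not how Asimov's robots work: the safety rule stays intact while the predicate it depends on quietly narrows.

```python
# A toy sketch of the Solarian loophole. All names here are hypothetical;
# the point is that the "safety rule" is only as good as its definition of human.
def sounds_solarian(speech: str) -> bool:
    """Stand-in for the robots' accent check."""
    return speech.endswith("[solarian accent]")

def is_human(speech: str) -> bool:
    # The First Law is only as protective as this predicate.
    return sounds_solarian(speech)

def first_law_allows_harm(target_speech: str) -> bool:
    # "A robot may not injure a human being..." -- but only beings the
    # predicate recognizes as human are protected.
    return not is_human(target_speech)

print(first_law_allows_harm("Stand down! [solarian accent]"))  # False: protected
print(first_law_allows_harm("Stand down!"))                    # True: not "human," so fair game
```

The rule never changes; only the definition of "human" does. That is the same move as training an AI on data that only counts certain people.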
Should we give AIs the benefit of the doubt?
We treat algorithmic bias as a failure of accuracy. Computer scientists in a hurry train their AIs on immense amounts of data without guidance about what kind of data it is or how to distinguish reliable data from biased or incorrect data. At worst, we blame the scientists for letting their own bias mis-program the AIs.
What Robots and Empire shows us is that algorithmic bias doesn't have to be a mistake. It can also be the product of human malice. That frightens me.
When it comes to being human, everybody counts.
Thanks for reading. See you next on Sunday, September 14.
* Image Prompt: "Create an image with a cluster of human faces of different races, genders, and ages. Include one face that is a robot but still looks human."
A little afraid of human inputs