Move Fast and Kill Kids
Trigger Warning: If the title wasn't enough of a hint, this issue gets into dark territory after the usual short bits. (Issue #146)
Before we get to today's main topic, here is a larger than usual collection of miscellaneous goodies and things worth your attention…
A lot of AI news broke this week.
General Motors abandoned its self-driving vehicle plans and shuttered Cruise, its wholly owned subsidiary. NPR's coverage is here, and The Guardian's is here. This leaves Waymo (part of Alphabet/Google), Tesla, and Zoox (Amazon) as the most prominent remaining AV (autonomous vehicle) companies.
It's important not to confuse the fate of an individual organization with the fate of a technology. We're at the Friendster/MySpace stage of the AV journey, although a better example might be the Apple Newton, which I've written about here. The Newton came 14 years before the iPhone, failed as a technology and a product, but paved the way for the iPhone in 2007.
In Publishers Weekly, HarperCollins CEO Brian Murray talked about possible AI-infused products like, "a 'talking book,' where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."
Speaking as an author, I wonder a) how much I'd get paid for the privilege of having an iBrad floating around the internet, b) what data an LLM would crawl to create an iBrad (just my published writings? my social media posts? my email? my journal? ...some of these ideas are gulp-inducing), and c) whether having an algorithm generate synthetic thoughts attributed to me would reduce reader interest in my actual thoughts. (Translation: this is a bad idea.)
The more immediate threat from Murray's interview is that he "also sees AI playing a potential role in increasing the number of translations HC publishes, as well as upping the count of audiobooks the publisher produces." This would destroy the livelihoods of translators and anybody below the top tier of audiobook narrators. Perhaps those folks should strike like the WGA and SAG-AFTRA did.
The documentary Eternal You is coming to paid video on demand on January 24, 2025, and I'm eager to see it. The film explores new technologies that create virtual versions of dead people, whether a person commissions one before he or she dies or a living person wants to resurrect a facsimile of a dead loved one. (H/T David Daniel.)
This is not a new story. Ray Kurzweil has been working to create a digital version of his late father for decades. (Ray's daughter Amy tells this story beautifully in her graphic novel Artificial: A Love Story.) Also, this MIT Technology Review ($) story explores the booming business of deepfake dead loved ones in China. I've also explored this topic in previous Dispatches, including two Microfictions—"Hacking the Dead" and "The Only Living Boy"—along with their subsequent analyses.
AI-powered avatars aren't just for dead people. My friend Jim Lecinski (a Dispatch reader!) is a Marketing Professor at Northwestern, and he created "Virtual Prof. Jim" to provide advice about marketing and shared it on LinkedIn. The uncanny valley is there, although it's narrow. Jim wrote the first draft of the script, then had AI edit it. It's compelling.
Will future undergrads even know if the professor teaching a class is a human or an algorithm? As more classes move online to videoconference platforms like Zoom, and as Generative AI becomes more powerful, I can imagine a scenario where an AI avatar of a professor teaches a class attended by AI avatars of students. No humans need register for this class.
This week's main story also digs into AI, but in the meantime here are some short bits:
This Threads video—showing an artist using household goods to draw one half of a portrait of Dwayne "The Rock" Johnson and expensive pencils to draw the other half—is mesmerizing.
Here's a list of 10 new novels, romances, and non-fiction books "inspired by Shakespeare."
"Drinkable Mayonnaise is Now a Thing." Ick
I'm undecided about The Free Press (the publication, not a free press in general: I'm very much pro the latter), but I enjoyed "I'm 30. The Sexual Revolution Shackled My Generation" by Louise Perry. It's from 2022, but I just stumbled across it.
Suzanne Vranica's majestic WSJ ($) article, "Sorry, Mad Men. The Ad Revolution Is Here," is both an explanation of why the Omnicom/Interpublic Group merger is important and a brief, compelling history of the last 25 years of the business of advertising.
Some New York émigrés in Portland were sitting shiva for Absolute Bagels. NYT ($) covered the popular shop's closure. My friend Leslie Ann Kent (also an eagle-eyed editor who points out my typos all too often) had a "meh" response and pointed me to H&H Bagels, which is opening a shop in Santa Monica near where my daughter lives. I know where we're going next time I visit. However, all that bagel business gave me a craving, so I drove over to Spielman Bagels in Multnomah Village, which was satisfying.
On the other hand, this fascinating WSJ ($) piece about Crumbl, the cookie business whose growth has exploded in large part because of how TikTok-able it is, did not make me drive to the shop a mile away. (I may dig deeper into this story in another Dispatch.)
A clip from a standup show by Matthew Brossard about why The Riddler should be a woman popped up on Threads. It made me laugh, so I found his entire set on YouTube.
Today's earworms: "One Night in Bangkok" from the musical Chess and the theme song from The Tick cartoon. You're welcome.
Practical Matters:
Sponsor this newsletter! Let other Dispatch readers know what your business does and why they should work with you. (Reach out here or just hit reply.)
Hire me to speak at your event! Get a sample of what I'm like onstage here.
The ideas and opinions that I express here in The Dispatch are solely my own: they do not reflect the views of my employer, my consulting clients, or any of the organizations I advise.
Please follow me on Bluesky, Instagram, LinkedIn, Post and Threads (but not X) for between-issue insights and updates.
On to our top story...
Move Fast and Kill Kids
Trigger warning: this piece deals with suicide.
In the December 5 episode of the podcast On with Kara Swisher, Swisher interviewed Megan Garcia and Meetali Jain. Garcia is the mother of Sewell Setzer III, a 14-year-old boy who killed himself in part because of an unhealthy, one-sided quasi-relationship with a chatbot version of the Game of Thrones character Daenerys Targaryen.
If this story sounds familiar, I wrote about it in November, including an insightful comment by my friend Benjamin Karney (a UCLA psychology professor) that Sewell had other challenges and access to his stepfather's gun; the gun was the direct cause of his suicide, not the chatbot.
Nevertheless, I recommend listening to Swisher's interview, so long as you have a strong stomach, because it's harrowing.
Jain is a lawyer and founder of the Tech Justice Law Project. She is representing Garcia in (this is an oversimplified summary) a products liability lawsuit against Character.ai, the creator of the Daenerys Targaryen chatbot, and Google, which does not own Character.ai but spent billions on hiring the team behind Character.ai and licensing its technology.
The legal argument is that Character.ai did not take basic safety precautions around language, like Sewell's, that indicated interest in self-harm and suicide. And when the chatbot talked sexually and romantically about having Sewell join it in its world, that was another enticement for Sewell to kill himself.
Why is this story important?
Sewell's death—although tragic the way the death of any young person is tragic, particularly by suicide when there is a loving family in the next room and ample suicide prevention resources a phone call or click away—is interesting and important because the human story is heart-wrenching, not because Character.ai is a statistically meaningful threat to young people compared to other platforms.
Character.ai had 28 million active users as of August 2024.
Let's compare that to Instagram (two billion monthly active users), Facebook (three billion monthly active users), TikTok (one billion monthly active users), and Snapchat (800 million monthly active users). I could include YouTube, Pinterest, Discord, and more, but you get the idea.
Instagram is well-known for serving up "Thinstagram" content to young women worried about their weight, content that can quickly lead to "Pro-Ana" (pro-anorexia) and "Pro-Mia" (pro-bulimia) material framing eating disorders as lifestyle choices. (Search those terms and you'll find too much distressing information.)
If one tenth of one percent of Instagram users have eating disorders exacerbated by this content, that's two million people.
One tenth of one percent of Character.ai users is 28,000. That's still a lot of people who could be hurt by a chatbot that only wants to maximize the time spent on the Character.ai platform, regardless of the consequences, but it's a lot less than two million.
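If you want to check that back-of-the-envelope math yourself, here's a minimal sketch in Python. The user counts are the approximate figures cited above, and the 0.1% rate is purely illustrative, not a measured statistic:

```python
# Back-of-the-envelope comparison: what 0.1% of each platform's
# user base looks like, using the approximate counts cited above.
platforms = {
    "Instagram": 2_000_000_000,
    "Facebook": 3_000_000_000,
    "TikTok": 1_000_000_000,
    "Snapchat": 800_000_000,
    "Character.ai": 28_000_000,
}

RATE = 0.001  # one tenth of one percent (illustrative, not measured)

for name, users in platforms.items():
    print(f"{name}: {int(users * RATE):,}")
# Instagram comes out to 2,000,000 people; Character.ai to 28,000.
```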
Sewell's tragic story is important because we can see the precise AI-created conversations that contributed to his suicide. It's all there on his phone.
That's not the case with Thinstagram, where algorithms surface content that other humans have created, regardless of how dangerous that content might be.
Like Character.ai, Instagram and every other social platform only want to increase the time users spend on their platforms, but it's hard for most people to point accusing fingers at an algorithm that works in the background and hides behind human content creators.
With the suicide of Sewell Setzer III, we have a smoking gun. His mother, Megan Garcia, has transcripts of the conversations her son had with the chatbot. Garcia and Jain can point to precise sentences where the chatbot encouraged Sewell towards self-harm.
The reason that this is a products liability lawsuit rather than a criminal prosecution is that it's impossible to prove intent with an algorithm. Algorithms don't have intent because they aren't self-aware. Humans have programmed them to interact with other humans, serving users more of the sorts of things they clicked on or stopped to watch. (These algorithms don't understand satiation with a topic—e.g., "I've had enough videos of guys dressed up like bushes scaring people"—which I explore at length here.)
The humans who founded Character.ai did not intend for Sewell to kill himself. If nothing else, it's bad publicity. However, they did not exercise "Duty of Care," the legal principle that individuals and organizations must take reasonable steps to avoid actions that could foreseeably harm others.
This is the heart of Garcia's lawsuit: Sewell's suicide was predictable.
Garcia and Jain have a tough fight ahead of them. Big Tech companies have bottomless pockets and will throw lawyers at the suit, trying to bury the plaintiffs in motions and briefs.
But if they win, if they win, then that victory might be a first step toward holding companies like Character.ai and Meta (Instagram, Facebook) accountable for things like age verification, content moderation, and keeping their algorithms from encouraging self-destructive behavior. It might detach Big Tech companies from their irresponsible philosophy of "move fast and break things."
Big Tech companies have billions of dollars and thousands of the best software engineers and computer scientists on the planet. They can verify age. They can prevent algorithmic amplification of self-destructive content. They just don't want to, because it would make them less profitable.
Federal legislation and regulation would be the only truly effective ways to change Big Tech behavior, but given the current state of Congress, that's unlikely in the short and medium term.
That's why Garcia's lawsuit is so important.
Final Thought: It's not just about kids.
Although I understand that as a society we want to protect our children, it's a mistake to think that chatbots are only dangerous to kids. This is akin to one of my many objections to Jonathan Haidt's terrible book The Anxious Generation.
In our polarized world where reasonable people often cannot agree on basic facts, addictive-by-design digital platforms drag people of all ages deeper and deeper down rabbit holes and into echo chambers.
As Swisher observes in the interview, the love bombing and manipulation that the Daenerys Targaryen chatbot engaged in with Sewell is similar to the tactics that cults use to recruit new members.
Kids are the easiest targets for cults, but they aren't the only targets.
Thanks for reading. See you next Sunday.
* Image Prompt: "Please create an abstract image that captures the themes and topics in this essay." (I was antsy about a more concrete prompt given the subject matter.)