The End of Filter Failure?
How soon will technology start working for users rather than big tech companies when it comes to information overload? (Issue #111)
Before we get to today's main topic, some miscellaneous goodies and things worth your attention…
Watch your feeds for an audio version of The Dispatch!
Wired has a smart piece about how Generative AI doesn't have to steal copyrighted work in order to learn. (H/T Michael Estrin.) My question is whether LLMs' hunger for public domain works will finally be the thing that changes copyright law in this country. If the AI companies have bigger market caps than the Hollywood studios that have spent decades lobbying Congress for more and more favorable terms, might we see more works enter the public domain more quickly?
Apple may license Google's Gemini AI technology for its iPhones, but what does this mean for Siri?
Speaking of Apple, WSJ's Joanna Stern has this important piece about how and why to turn off auto-sharing on the new Journal app.
A new poll shows that Republicans who don't watch Fox News (and presumably Newsmax and OAN) are less likely to vote for Trump. Um... no duh?
A Big Week in Medical AI: This piece by Eric Topol about how AI will transform health care—in the near future—is a bit technical but a must-read for anybody interested in how soon "the AI will see you now" is coming.
The live-action Avatar: The Last Airbender on Netflix is so, so good. We have a full nest for Spring Break, and Clan Berens is loving this adaptation of a favorite animated show from my kids' childhood. Nota bene: The live action version is not for kids under 10.
Glasses in the Emerald City: If you didn't understand the title of last week's microfiction, it comes from Chapter 10 of The Wonderful Wizard of Oz.
Hire me to speak at your event! Get a sample of what I'm like onstage here.
Please follow me on Bluesky, Instagram, LinkedIn, Post and Threads (but not X) for between-issue insights and updates.
And now, a message from this issue's sponsor:
This week's Dispatch is brought to you by Insights Exchange, a Global Research as a Service (GRaaS) company that creates high-quality, fast, and competitively priced research (both quantitative and qualitative) to accelerate your business success. The Insights Exchange team of highly experienced professionals conducts research to help you:
Understand your customer
Create new touch points with current/future clients
Define TAM, SAM, and new product adjacencies with high potential
Extend the capabilities of your Insights team
Or function as your on-demand Insights team
To learn more, visit us at www.insights-exchange.com or email us at info@insights-exchange.com.
On to our top story...
The End of Filter Failure?
Last time, I shared a microfiction (a science fiction story of 1,000 words or fewer) called "Fleeing the Emerald City," about Calvin, a man who uses advanced filtering technology to lose weight but doesn't much enjoy the experience. I also recorded an audio version of the story.
This time, I'll dig into how realistic the story world is or isn't. You don't have to read or listen to “Fleeing the Emerald City” (although it ain’t bad) to understand this week's piece, but fair warning: Ahoy! Thar Be Spoilers Ahead!
Let's dig in.
In the story, Calvin subscribes to a service called Nudgetekk that provides smartglasses with connected earbuds to make it impossible for Calvin to see or hear about unhealthy foods. In Calvin's local supermarket, the smartglasses connect to the store's inventory so that the Nudgetekk AI knows which aisle has the Oreos and then steers Calvin down another one. If Calvin blunders down an aisle containing sugary or fatty temptation, the smartglasses blur the packages. If an ad for Ben & Jerry's Chunky Monkey blares over the supermarket loudspeakers, the Nudgetekk earbuds filter it out.
How realistic is this? Not very at the moment, but quite realistic in the middle-distance future—say, five to ten years out.
Ad filtering is not a new idea. In his 1985 novel Contact, Carl Sagan had an aside about a service called Adnix that would simply mute the TV whenever a commercial started. Today, many people use ad blockers (Ghostery, Privacy Badger, AdBlock) to suppress online ads. But those blockers work in controlled and predictable environments (web browsers, smartphones) rather than in the real world.
The technical challenge is overlaying a filtering technology onto the real world around us in real time. We've had versions of Augmented Reality for years now (Google Glass was the earliest famous example), but today's smartglasses (also called heads-up displays, or HUDs) add information to the environment around you rather than subtract it. If you use Snapchat, then you have probably played with overlays, like the one that sticks a possum on top of my head.
The most useful overlay example is GPS navigation: instead of looking away from the road to glance at a map on the dashboard, with a HUD the directions are projected onto the road itself, and a big arrow points at the freeway exit you need to take. (Without, I hope, the "this one, you idiot!" commentary.)
Digression: In this piece, I'm ignoring other well-known challenges with HUDs, like battery life (nobody wants to stop everything to plug in their glasses five times per day) and the nausea problem that bedeviled early VR. End of digression.
In order to filter out visual information in real time—while Calvin is walking down the cookie aisle in the supermarket—the smartglasses would first need to recognize that information. This involves both a) machine vision (computers seeing and understanding things) and b) Edge AI (computation happening on a device rather than in the cloud). Even if Calvin's supermarket had ultrafast Wi-Fi, which is unlikely, data going upstream is typically slower than downstream, so without AI built into the smartglasses the blur couldn't happen in time for Calvin to avoid the temptation of Double Dark Chocolate Milanos. (Is anyone else getting hungry?) He'd see the package, and only then would it blur—a latency problem.
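To make the latency point concrete, here is a minimal sketch of what on-device filtering could look like. It's an illustration, not Nudgetekk's actual design: the detect_unhealthy_packages function is a hypothetical stand-in for an on-device vision model, while the OpenCV blur and camera calls are real. The key property is that every step happens locally, with no network round trip.

```python
# A minimal sketch of on-device "temptation blurring."
# Assumes OpenCV (pip install opencv-python); the detector below is
# a hypothetical placeholder for a small model running on the glasses.
import cv2

def detect_unhealthy_packages(frame):
    """Hypothetical on-device detector.

    A real version would run a quantized object-detection model on
    the glasses' own chip and return bounding boxes for packages on
    the wearer's blocklist. Here it returns nothing.
    """
    return []  # [(x, y, w, h), ...]

def blur_regions(frame, boxes):
    """Blur each detected package before the frame reaches the eye."""
    for (x, y, w, h) in boxes:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

capture = cv2.VideoCapture(0)  # stand-in for the glasses' camera
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Detection and blurring both happen on the device. Shipping each
    # frame to the cloud and back would add enough latency that Calvin
    # sees the Milanos before the blur arrives.
    frame = blur_regions(frame, detect_unhealthy_packages(frame))
    cv2.imshow("nudgetekk-sketch", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
capture.release()
cv2.destroyAllWindows()
```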
Also, I cheated. The supermarket example sounds complex: it requires a store that knows the layout of its own inventory (the Amazon Go stores do this to an extent) and can share that information with another AI-powered device (the smartglasses) in real time. But even if we dismiss all the bandwidth and Edge AI challenges, a store with digitally tracked inventory is a much simpler filtering problem than one without. Out in the untracked world, the glasses would have to recognize every temptation on their own.
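For what it's worth, the store's half of that handshake doesn't have to be exotic. Here is a toy sketch of the kind of machine-readable map a tracked-inventory store could share with a trusted device; every name and data shape below is invented for illustration.

```python
# Toy sketch of a store-to-glasses handshake. The data shape is
# invented; think of it as the map an Amazon Go-style store could
# publish to a trusted filtering device.
store_map = {
    "aisles": {
        "4": ["cookies", "candy"],   # the Oreo aisle
        "5": ["produce"],
        "6": ["cereal", "baking"],
    }
}

blocklist = {"cookies", "candy", "ice cream"}  # Calvin's settings

def aisles_to_avoid(store_map, blocklist):
    """Return the aisles the glasses should route Calvin around."""
    return sorted(
        aisle
        for aisle, categories in store_map["aisles"].items()
        if blocklist.intersection(categories)
    )

print(aisles_to_avoid(store_map, blocklist))  # ['4']
```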
One perhaps useful analogy is Shazam, the "what is that song?" technology for the hopelessly distractible. When you just gotta know who did that intriguing metal cover of "Eleanor Rigby" (Godhead) and grab your smartphone to stab the Shazam button, the program doesn't recognize the song the way a human does. Instead, it captures a short sample, converts it to an acoustic fingerprint, and rapidly compares that fingerprint to everything in its database. It's a brute-force computational exercise that identifies a recording rather than a song.
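Here is a toy version of that brute-force lookup. Real fingerprinting is far more sophisticated (Shazam famously hashes pairs of spectrogram peaks), but the core logic is the same: a noisy sample matches whichever known recording shares the most fingerprint hashes.

```python
# Toy sketch of fingerprint matching. Real systems hash spectrogram
# peaks; here each "fingerprint" is just a set of made-up hash values.
database = {
    "Eleanor Rigby (The Beatles)":   {101, 102, 103, 104, 105},
    "Eleanor Rigby (Godhead cover)": {201, 202, 203, 204, 205},
    "One Week (Barenaked Ladies)":   {301, 302, 303, 304, 305},
}

def identify(sample_hashes):
    """Return the recording whose fingerprint best overlaps the sample."""
    best, best_score = None, 0
    for recording, hashes in database.items():
        score = len(sample_hashes & hashes)
        if score > best_score:
            best, best_score = recording, score
    return best

# A noisy supermarket sample: some hashes are lost or spurious, but
# the survivors still pick out the *recording*, not the song.
print(identify({201, 203, 204, 999}))  # Eleanor Rigby (Godhead cover)
```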
In contrast, Google's newer "Search a song" feature lets you sing or hum a melody into the Google smartphone app, which then tells you what the song might be and links to different recordings. It's amazing. This is machine learning rather than brute computation: it matches the tune itself, not a specific recording. To my surprise, "Search a song" failed to identify the famous opening bars of Beethoven's Fifth Symphony but had no trouble figuring out when I was humming "One Week" by Barenaked Ladies.
In the story, the Nudgetekk glasses are like Shazam rather than Google's "Search a song." It would be a much bigger technology hurdle for an AI to filter out unhealthy food stimuli in the wild... today.
But what about five years from now? Let's say the computational power of AI doubles each year (a conservative estimate by many accounts). Doubling compounds: in five years, AI will be 2^5 = 32 times more powerful than today's already magic-seeming programs. By that point, the ability of smartglasses to filter out images and sounds in the wild will be much stronger, if not seamless.
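If the compounding feels abstract, here is the arithmetic spelled out (taking the doubling-per-year premise as given):

```python
# Assumes the paragraph's premise: AI capability doubles every year.
for year in range(1, 6):
    print(f"Year {year}: {2 ** year}x today's capability")
# Year 1: 2x ... Year 5: 32x
```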
Why does any of this matter? In a famous Web 2.0 Expo keynote back in 2008 (remember when we talked about "Web 2.0" like it was a thing?), Clay Shirky argued that we digital humans weren't really suffering from information overload. Instead, we were experiencing filter failure. Earlier still, in 1971, the polymath Herbert Simon wrote that "a wealth of information creates a poverty of attention."
Since the start of the digital revolution and particularly since the explosion of social media, the users (that's us) have been bringing squirt guns to an attention war where the tech platforms have nuclear weapons.
Now, for the first time in decades, technology is trending towards empowering users with smart filters that will help them achieve their goals—instead of watching hours disappear into TikTok because of a deliberately addictive algorithm. That's the good news.
The bad news is that AI-powered smart filters might also empower users to stay even more firmly inside their bubbles. If you think we're polarized now, just imagine what it will be like when we can entirely avoid opinions that don't match our own. We won't even have to change the channel!
Every new technology is a double-edged sword.
Thanks for reading. See you next Sunday.