Jaron Lanier once said that what he didn’t like about Google was that it requires us to alter ourselves unnaturally in order to get out of it what we want. He said we “make ourselves stupid so the machine looks smart.” Generative AI is not unlike that, at least for now. That said, as a contributor to the interplay of variables in a multilectic, the product of generative AI can play a role. Its value in producing basically cogent narrative search results, for instance, is obvious. But it is missing that undefinable quality that long practice at crap produces. Another writer said it lacked “elan,” a good word that isn’t used often. Will generative AI manifest elan with time? Unlikely, because the machine doesn’t seek pride or satisfaction or glory or praise or any sense of accomplishment, all of which contribute to a creator’s understanding of the quality of their output and to the creator’s vision of their goal. The machine just is. Like those computers that win at chess: to win a game, it has to know it’s playing a game. But it doesn’t, so we make ourselves think it’s playing a game so that we can then say it won. We make ourselves less so that the machine looks like it’s more.

Hi Jim, thanks for reading!

That's both cogently and profoundly said, on your part as well as Jaron's. It reminds me both of 1) Orwell's idea that the white man puts on a mask and his face grows to fit it ("Shooting an Elephant") and 2) Steve Knapp's observation that in order for literary-theory types to have the big "aha!" surprise reading, where a classic text unravels, they must read in a very flat, unimaginative way.

I've never said that generative AI is without value... merely that we shouldn't focus so much on what we gain that we lose sight of what we lose... and the artisanal crap, and, we hope, the learning that results from creating that crap, is one of the things threatened by generative AI.

Also, I wonder if what we're groping towards is a distinction between what can be defined and what can be computed. We tend to elide any difference between the two, but we can know what something is without necessarily being able to compute that thing. If you know your William James, this is the "squirrel going round the tree" distinction from "What Pragmatism Means," Lecture II of Pragmatism. (I recommend it to you if you don't... and it's freely available at Gutenberg.)
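
If a toy illustration helps, here is a sketch in Python (everything in it is invented for illustration; "halts" is Turing's classic example of a property we can define perfectly well but provably cannot compute):

    # "Halts" is definable: halts(p) is True just when p() eventually stops.
    # Turing's diagonal argument shows no program can compute it. Suppose,
    # hypothetically, someone handed us an implementation anyway:

    def halts(program):
        """Assumed oracle: True iff program() eventually halts."""
        raise NotImplementedError("provably impossible in general")

    def diagonal():
        # Do the opposite of whatever the oracle predicts about this function.
        if halts(diagonal):
            while True:
                pass  # loop forever

    # If halts(diagonal) were True, diagonal() would loop forever; if it were
    # False, diagonal() would halt. Either way the oracle is wrong, so no such
    # program exists, even though the property it names is cleanly defined.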

Again, thanks for reading. And for commenting. They both mean a lot to me.

Sincerely,

BB

Hi Brad,

Gail Collins and Bret Stephens' Monday dialogue in the NYTimes, Nellie Bowles's TGIF in the Free Press, and your newsletter are the only things I reliably read every week. I never finish without something new to think about. Btw, loved Glass Onion. The cameos, both physical and by mention (Jeremy Renner's small-batch hot sauce has had me laughing out loud for days...), are fabulous.

I didn't mean to suggest that you were saying generative AI is without value; I was just calling out what value I think it currently has, in the absence of any value being put forth. I think I stand on your side with regard to at least the current mode of generative AI, and likely even future, more complex versions. I do think we are fumbling towards a distinction between what can be defined and what can be computed. And I think that's a meaningful difference even when the observable outcomes are indistinguishable; sifting among the lessons of James' squirrel, the difference lies in how one defines the arrival at the outcome. A kind of "it depends" scenario.

That which can be counted is not always meaningful, and that which is meaningful can't always be counted. The meaningful can be defined (if not definitively!) while not being computable. Like all poodles being dogs but not all dogs being poodles: if it can be computed, it can be defined; if it can be defined, it is not necessarily computable. What we can count, what we can compute, are immobile. We compute things; it's harder to compute action. It's easier to talk about thoughts, harder to talk about thinking.

The striving for definition (defining) is the crap of craft I think you are talking about, the crap that produces the object/act/moment of quality. "What will the Spider do/Suspend its operations..." Like James' does-the-man-go-round-the-squirrel answer, which depends on HOW one defines (and can be extended to the context in which one is defining), definition is the goal, but it is a horizon that constantly recedes even when it can be articulated in a way other minds understand. (That great craft can be a means to look into other minds, to access subjective experience, is a subject worth exploring when tussling with a definition of AI.)

THIS is what's missing from AI... understanding. As in Searle's Chinese Room, the pattern recognition and output are so fluid and seamless that they simulate understanding without possessing it. The machine can reliably regurgitate epistemically objective input ("Van Gogh was born in the Netherlands") but it cannot reliably render epistemically subjective output ("Van Gogh was the greatest Post-Impressionist"). The struggle is in how we define the movement toward better and better simulations of human activity. The contest looks like it's shaping up to be between computationalists and everyone else. Years of trying to render human activity into machine-readable form have led to a general preference for a mechanistic articulation of humans themselves. My fear is the computationalists will win.
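
To make the Room concrete, here's a toy sketch in Python (the rule book and prompts are invented for illustration; a real model is statistical rather than a literal lookup table, but the point about understanding is the same):

    # A crude Chinese Room: symbols in, symbols out, no understanding anywhere.
    RULE_BOOK = {
        "Where was Van Gogh born?": "Van Gogh was born in the Netherlands.",
    }

    def room(prompt):
        # The room matches symbols to symbols; it grasps neither the question
        # nor the answer it hands back.
        return RULE_BOOK.get(prompt, "No rule matches that symbol string.")

    # Epistemically objective facts survive the lookup...
    print(room("Where was Van Gogh born?"))
    # ...but an epistemically subjective judgment has no entry to retrieve:
    print(room("Was Van Gogh the greatest Post-Impressionist?"))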

Jim
