
Herzog, Žižek, and Knuth walk into a bar…

Published by marco on

The joke does not continue; my apologies. Unless the joke is that we will soon be even less able to comprehend, make sense of, or otherwise act on hypotheses about the world because we are accelerating our already advanced pollution of our information environment. What does that mean?

Actually salient information drowns in a sea of utterly meaningless noise. It’s been this way for a while, if you’ve been paying attention. Social media was the first booster rocket taking us further away from being able to influence our societies in any way that might even begin to threaten the profits of our elites or the stranglehold they have over any and all levers of power.

You can see it in the shocking lack of information many people have about how the world works, or about any current events. But let’s go back to something with a bit more levity instead of focusing on the doom and gloom.

A friend sent me the site The Infinite Conversation, which is an AI-produced,

“[…] never-ending conversation between Bavarian director Werner Herzog and Slovenian philosopher Slavoj Žižek. When you open this website, you are taken to a random point in the dialogue. Every day a new segment of the conversation is added. New segments can be generated at a faster speed than what it takes to listen to them. In theory, this conversation could continue until the end of time.”

Part of the joke is that this AI product is so niche that it is utterly harmless. More people will recognize Herzog, but very few will recognize Žižek. I’m a fan of both[1].

It’s kind of sobering how realistic it sounds at first. You have to really follow along to tell that it doesn’t make much sense (e.g., mine had Herzog saying that his dream was to make films in the jungle; he’s already made several). And Žižek’s sentence wasn’t even grammatically correct – but that doesn’t mean it wasn’t Žižek. 🙃

That it makes no sense to the trained or expert ear is actually quite normal for AI-produced output. What AIs produce is usually kind of generic—like an undergraduate essay—or just outright incorrect. You kind of already have to know the answer if you want to be able to use it. People generally ask these things the kind of questions that they can’t hope to answer, but which they fervently hope will provide them with some sort of insight into how to proceed in their own lives. I see these AIs more as oracles or tea-leaf readers. You will get out of them what you interpret from the vague answers they deliver.

We’ve always dealt with the possibility (and absolute reality) of fakeness in our information environment. It just used to be more difficult to produce it en masse. Now, we can spew out a literal TON of noise over the signal, drowning out any hope of understanding our underlying physical reality even further. Now, when we most need to understand what is happening, and now, when we are responsible for making decisions that will impact generations—if not the species—we are more befuddled than ever…and couldn’t care less.

The largest misinformation campaigns go largely unchallenged because they are official ones. There is the cult of Russiagate, which has poisoned nearly all thought not only in its country of origin but has also severely infected the mental hygiene of otherwise rational people in allied countries. That complete fabrications laid the groundwork for a renewed hot war with Russia is much, much more dangerous than these AI infractions, which are tiny in comparison (so far).

Eminence grise Donald Knuth documented his (indirect) interaction with ChatGPT (3.5, I believe) in a text file. His conclusion?

“I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy.”

Sounds good.


[1] Not an unquestioning one, of course. Žižek’s complete lack of nuance on the Ukrainian/Russian war has given me pause of late.