
Title

Software sucks. AI is software. Ergo...

Description

<img attachment="im_with_stupid.jpeg" align="right" caption="I'm with stupid ⬅️">The article <a href="https://freddiedeboer.substack.com/p/does-ai-just-suck" author="Freddie deBoer" source="Substack">Does AI Just Suck?</a> shows two examples of a heavily feted AI utterly failing to create images of John Candy and Goldie Hawn, defaulting instead to middle-of-the-road "fat man" and "blonde woman" representations that leave the viewer to fill in all of the gaps left by the mediocre effort. From the essay,

<bq>[...] you’d think that, <b>among the various tasks you might charge an AI image generator with, recreating faces that have been photographed many thousands of times would be one of the easiest.</b> What just drives me mental about this stuff is that tons of people insist on pretending that these technologies work as intended! In the thread where these images appear, <b>there’s plenty of people who point out that they look nothing like their human counterparts, but also people going “Wow! Amazing!”</b> That’s true of so much of AI-generated art; it feels like <b>people have been told so relentlessly by the media that what we are choosing to call artificial intelligence is currently, right now, already amazing that they feel compelled to go along with it.</b> But this isn’t amazing. It’s a demonstration of the profound limitations of these systems that people are choosing to see as a representation of their strengths.</bq>

I agree with this impression. There are some things that look pretty wonderful, but it's also hard to escape the conclusion that these LLM-based image-generators are best at creating generic artwork, the kind of stuff you'd have found on posters in a Spencer's Gifts in the late 80s/early 90s. The essays they produce feel like the output of a middle-schooler or shitty undergraduate who's just trying to fill a page with text that feels vaguely relevant. There is no spark of innovation---just a frisson that it didn't completely fail, that the LLM got <i>pretty close</i>. We're already amazed that it produced a blonde woman or a fat man, even if it doesn't come close to the representation that a reasonable artist could sketch in a few strokes from a handful of pictures---or even just one, as any halfway-competent caricaturist could do. Sure, it's not just random pixels, but it's also not really <i>useful</i>.

<h>But what can it do for us?</h>

As I've noted before, I also don't see how we get to <i>useful</i> from here---precisely because we don't know how it's even getting a generic blonde woman from the prompt "Goldie Hawn". I think it's reasonable to ask whether these LLMs are the thing we should be prioritizing right now. For me, the answer is, clearly, no. But the Lords of capitalism have determined that they can mine some short-term value from them, so we're stuck hearing about them until the bubble suddenly implodes, washing away all value except that which has accreted to a handful of the richest people on the planet.

<bq>As I will go on saying, all of this would be much lower stakes and less aggravating if people had the slightest impulse toward restraint and perspective. But <b>our media continues its white-knuckled dedication to speaking about AI in only the most absurdly effusive terms</b>, terms that threaten to exceed the power of language.</bq>
This is the role of the corporate-owned media. They are an advertising arm masquerading behind the vestiges of gravitas left over from a bygone age. Their role is not to inform; it is to ensure short-term gains for their lords and masters. <iq>[R]estraint and perspective</iq> don't enter into it, unless they would serve that goal, which they rarely do.

<h>The fallacy of the online/offline knowledge base</h>

<bq>I’ve been telling people for a couple decades that <b>the attitude of “kids these days don’t need to learn facts because they have Google” is fundamentally flawed</b>, as learning facts is an indispensable part of creating the mental connections in your brain that drive reasoning.</bq>

This is obvious to anyone who isn't relieved to be able to offload all of their thinking to the online advertising and propaganda machine. You can't draw conclusions if all of your knowledge is online. It's not your knowledge---you can search for anything, but you <i>have no idea what to search for</i>. You are not an interesting conversation partner, you have no original ideas, you can't innovate---because you don't have any offline knowledge. Your processor might be powerful, but your memory banks are empty. Relying on LLMs even more than we already rely on search engines will only exacerbate this problem; it will only lead to a world even more full of people who can't reason their way out of being bamboozled by state propaganda. This is not a coincidence.

<h>Building things well is difficult</h>

<bq>[...] what if this software just sucks? What if we’re all so desperate to move to the next era of human history that <b>we talked ourselves into the idea that not-very-impressive predictive text and image compilers are The Future</b>?</bq>

That is entirely likely. Most software sucks. I find it hard to believe that software that has just appeared---that has been <i>grown</i>, if you will---will be somehow better than software that actual developers have tried to design. People somehow think that it's <i>better</i> just because <i>no-one</i> understands how it does what it does. If you're the kind of person who doesn't understand how anything works, then you'll like the mystery of it, because literally everything else in your world moves in mysterious ways. These kinds of people don't understand even 1% of how their world works. They don't know where resources come from, where trash goes, how food can exist, how plumbing works, how <i>any</i> technology works---or why it doesn't work, or can't work, or why it might stop working. They don't understand biological limitations, or how chemicals and pharmaceuticals are researched and developed; they don't understand economics or politics or even basic social interactions. They find it reassuring that, with so-called AIs, <i>no-one</i> understands them. They can vaguely grasp that this means that, for once, they aren't relatively stupid about a topic, as they are with everything else. In the other cases named above, they have to assume that there are smarter people out there who <i>do</i> understand how things work---and that those people are <i>better</i> than they themselves are, that those people are <i>more useful</i>. People who understand the <i>scientific process</i>, on the other hand, are <i>not</i> reassured that we don't understand how these LLMs do what they do---because they understand <i>engineering</i>, whereby one has to understand what is going on in order to <i>improve it</i>. When you're a blithering dolt who's ignorant about everything, your approach to life is to just do stuff and hope for the best. There is no process. These LLMs are perfect for people like this.
They already think these LLMs are amazing, mostly because of their ineffability, because it matches their own inability to grasp how anything works. They'll never notice that there is no predictable path forward for improvement in something that we don't understand. But, in a country---heck, a <i>world</i>---addicted to gambling and ignorance, this fact won't bother anyone. They'll think that we can just blunder our way toward improvement, calling each change <i>progress</i>, regardless of viability or usefulness to anyone.

<h>The real reason is always the same</h>

The only benchmark will be, as always: are the richest people getting richer because of it? If yes, then carry on; if no, then change course, regardless of utility to anyone else. Are resources wasted? Is energy wasted? Is effort wasted? Could the energy, effort, and resources have been invested more effectively elsewhere? <i>None of that matters.</i> The only thing that matters is whether the handful of already-wealthy people and entities consolidate even more of the value generated by humanity unto themselves. Any other benefit is a side-effect. If that side-effect threatens the continued accumulation of capital? It will be reverted and avoided in the future. It's why we can't have nice things. Hell, you can tell people that things are getting better and <i>they will believe you</i>---especially if you tell them often enough. So, we'll probably just stay with the current, shitty crop of LLMs that our lords and masters have dubbed "AIs"---and watch them get richer, while our lives approach the minimum quality that continues to deliver value upward while avoiding revolution. And as people consequently get dumber and shittier and more egocentric, the LLMs will actually start to seem more lifelike! So, we have that to look forward to.