
Title

Yeah, sure, let's talk about AI

Description

I have not used any of these AIs, not even once. I've just been following how other people are using them and observing, at a meta level, what's going on so far. Some very clever and otherwise focused people who used to publish other content have been completely derailed by their obsession with AI (I'm looking at you, Simon Willison), so there must be something to it. But what? I think a good place to start is with the article <a href="https://tedgioia.substack.com/p/introducing-the-slickest-con-artist" source="The Honest Broker" author="Ted Gioia">Introducing the Slickest Con Artist of All Time</a>, which compares AIs to confidence artists, a comparison that seems more than somewhat justified. He writes, <bq>But that’s exactly what the confidence artist always does. Which is:<ul>You give people what they ask for. <b>You don’t worry whether it’s true or not—because ethical scruples aren’t part of your job description.</b> If you get caught in a lie, you serve up another lie. You always act sure of yourself—because your confidence is what seals the deal.</ul></bq> It's not ChatGPT's fault, though. All any AI that's fed on our Internet can do is hold up a mirror. And what is that mirror going to reveal? Well, that everything is a scam, that there is no downside to being wrong, and that, if you get caught in a lie, it's profitable to double down. ChatGPT has learned quite well, in that sense. <bq>Technology of this sort is <i>designed</i> to be a con—if the ancient Romans had invented ChatGPT, it would have told them that it’s cool to conquer barbarians and sacrifice slaughtered bulls to the god Jupiter. Tech like this—truly made in the image of its human creator—can only feed back what it learns from us. So <b>we shouldn’t be surprised if ChatGPT soaks up all the crap on the Internet, and compresses it into slick-talking crap of a few sentences.</b></bq> <h>ChatGPT can't math</h> The article above included a link to a <a href="https://twitter.com/LargeCardinal/status/1617100592110780416" author="Mark C." source="Twitter">tweet</a> that shows just how badly sequence-prediction works for problem-solving. <img src="{att_link}chatcpteggs1.jpeg" href="{att_link}chatcpteggs1.jpeg" align="left" caption="ChatGPT talkin' 'bout eggs 1" scale="30%"><img src="{att_link}chatcpteggs2.jpeg" href="{att_link}chatcpteggs2.jpeg" align="left" caption="ChatGPT talkin' 'bout eggs 2" scale="30%"> <clear>In fairness, <iq>2 eggs left</iq> is a good initial response! It makes sense that you would fry, then eat the eggs. The formulation of the question suggests strongly that the eggs that were fried and the eggs that were eaten are <i>different eggs</i>, but it's also possible to interpret it otherwise. However, when asked to explain its reasoning, it didn't remember its previous answer and instead explained a different one, devolving into pretty poor grammar at the end. Its third answer is even worse, though, because it shows that it doesn't understand anything of what it's writing, contradicting itself within the same sentence. It has no idea what numbers are. When the prompter lies to it about its arithmetic, ChatGPT picks up the incorrect answer and runs with it, not noticing the basic arithmetic error. It never loses confidence in its ability to take part in the conversation.
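The ambiguity is easy to make concrete. Here's a minimal sketch in Python (the numbers are assumed, since I haven't transcribed the screenshots) showing that the two readings of such a prompt yield different answers, which is exactly the kind of commitment a pure sequence-predictor never makes:

<code>
def eggs_left(start: int, fried: int, eaten: int, same_eggs: bool) -> int:
    """Count remaining eggs under the two readings of the prompt."""
    if same_eggs:
        # The eaten eggs *are* the fried eggs: only one batch leaves the carton.
        return start - fried
    # The fried eggs and the eaten eggs are different eggs: two batches leave.
    return start - fried - eaten

# Hypothetical prompt: "I have 4 eggs. I fried 2 and ate 2. How many are left?"
print(eggs_left(4, 2, 2, same_eggs=True))   # 2: the reading behind "2 eggs left"
print(eggs_left(4, 2, 2, same_eggs=False))  # 0: the reading the wording suggests
</code>

Either answer is defensible; what isn't defensible is sliding between the two readings mid-conversation, which is what the screenshots show.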
<h>Approach with caution</h> For the most part, you probably shouldn't use the text or code created by an AI without knowing what it's supposed to be saying. The people who've told me that they find ChatGPT's answers useful are also those who are capable enough to judge whether the generated content is <i>correct</i>. That is, they kind of automatically put the brakes on the AI, but then skip that part when telling everyone about how amazing it is. I see a similar dynamic with image-generators. If you actually look at the progression, it's not just writing "dog with bow tie" and BOOM, you have your picture; you often have to massage your prompt dozens, if not hundreds, of times before you get what you want. Everyone is instinctively using these things as <i>tools</i>, but then ascribing magical powers to them---like they're deliberately creating entries for <a href="https://old.reddit.com/r/restofthefuckingowl/" source="Reddit">r/restofthefuckingowl/</a>. With text, they're still very much better as "idea generators" whose output you take and clean up, rather than just copy/paste. But the utility is there, and we should confine our discussions to thinking of them as a new tool. Their results are more sophisticated, but they're just an evolutionary step away from gradient generators, etc. <h>What about coding assistance?</h> On the subject of AI assistance in coding: I think it might be useful, but useful in the way that finding an example on StackOverflow is useful. You shouldn't just copy/paste <i>anyone's or anything's</i> code into your own code without examination. Even non-AI code-completion should be examined carefully to see if it's what you actually wanted. <pullquote width="12em" align="right">"It looks like you're trying to call a REST API. Would you like some help?"</pullquote>If you find yourself writing so much boilerplate that large-scale copy/paste or insertions are helping, then, again, this <i>indicates a deeper problem with the code you're writing</i>. In coding, less is better. I don't see how having an idiot-savant machine that doesn't understand anything about the stream of tokens it's injecting into your code is useful, in the long run. If you're a shitty programmer, then, of course, a half-baked machine is going to help. If you're a good programmer, then use the generated code as high-end code-completion, taking what you find useful from it. But beware: you may end up spending more time examining the swath of generated code to figure out if it's OK than you would have had you just written it yourself.
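To make that review burden concrete, here's a hypothetical sketch (the endpoint and function names are invented for illustration): first the kind of plausible-looking code an assistant might generate, then what's left after a careful review.

<code>
import requests

# What an assistant might plausibly generate: it runs, but it has no timeout,
# ignores HTTP errors, and assumes the response is always valid JSON.
def get_user(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()

# What you end up with after actually reviewing it: the same call with the
# failure modes handled. The "time saved" gets spent here instead.
def get_user_reviewed(user_id: int) -> dict:
    response = requests.get(
        f"https://api.example.com/users/{user_id}",
        timeout=10,  # don't hang forever on a dead endpoint
    )
    response.raise_for_status()  # surface 4xx/5xx instead of parsing garbage
    return response.json()
</code>

Nothing in the first version is <i>wrong</i>, exactly; it's just not done, and only someone who already knows what "done" looks like will notice.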
<h>There is no such thing as "no bias"</h> And remember that every AI we <i>create</i> has preconceptions and biases, because we imbue everything with our biases, be it in the selection of the material for the training set or in how the weights are assigned in the neural network. Ask any of the AIs out there a racist question and it will decline to answer. Those guardrails are biases, deliberately built in. As with all of these examples, I'm not sure whether this one is real, but it feels realistic enough to illustrate the potential problem. The post <a href="https://old.reddit.com/r/conspiracy/comments/10sn682/imagine_thinking_this_controlled_ai_was_legit_lol/" author="Sero_Nys" source="Reddit">Imagine thinking this controlled "AI" was legit LOL</a> shows a user asking ChatGPT how <iq>white people</iq> can improve. He gets five suggestions. When he asks the same question about Jewish people or black people, he is told that the questions are <iq>inappropriate</iq> and <iq>not productive</iq>. Actually, the general answer in examples two and three is much better, but it's suspicious that it wasn't used for the first question as well. If the example is real, it shows an underlying bias---engendered by the developers, the trainers, or the training data. <img src="{att_link}five_examples_how_to_improve.jpg" href="{att_link}five_examples_how_to_improve.jpg" align="none" caption="five examples how to improve" scale="75%"> The article <a href="https://wiki.linuxquestions.org/wiki/AI_Koans#Sussman_attains_enlightenment" author="" source="Linux Questions">AI Koans</a> has a very nice koan for this. <bq quote-style="none">In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said. <b>Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher. “So that the room will be empty.”</b> At that moment, Sussman was enlightened.</bq>
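The koan's point can be made literal in a few lines. A minimal sketch (the tiny network and board encoding are invented for illustration): a "randomly wired" net is not free of preconceptions; the random weights <i>are</i> its preconceptions.

<code>
import numpy as np

def random_net_move(board: np.ndarray, seed: int) -> int:
    """Score Tic-Tac-Toe moves with an untrained, randomly wired single layer."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(9, 9))       # "random wiring": no training at all
    bias = rng.normal(size=9)               # every wiring choice is a preconception
    scores = weights @ board.flatten() + bias
    scores[board.flatten() != 0] = -np.inf  # mask occupied squares
    return int(np.argmax(scores))           # the net already "prefers" a move

empty_board = np.zeros((3, 3))
print(random_net_move(empty_board, seed=1))  # favors some opening square
print(random_net_move(empty_board, seed=2))  # usually a different one

# Before seeing a single game, each net already favors particular moves.
# The preconceptions weren't removed; they were just chosen at random.
</code>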
<h>Deep-faked audio is kinda hilarious</h> To illustrate the power of deep-faked audio, the following video shows a situation that no-one could even imagine having taken place. <media href="https://www.youtube.com/watch?v=iAq-yg72GWw" src="https://www.youtube.com/v/iAq-yg72GWw" source="YouTube" width="560px" author="Garlic Bread Ben" caption="Biden, Trump, and Obama make an Overwatch 2 women tier list (Voice AI)"> Trump's voice is good, but the ends of his sentences are somehow ... off. Biden isn't incoherent enough. Obama's pretty good. The script is pretty hilarious, especially toward the end. This should be terrifying, right? If you can fake something this well for truly ridiculous humor, could you also fake up Biden declaring war on China over Taiwan? Or on Russia over Ukraine? Oh, wait, never mind. The most awful things you could imagine someone deep-faking are actually true. Carry on. <h>AI Art</h> The following video is a pretty good <b>22:26</b> investigation of image-generation. <media href="https://www.youtube.com/watch?v=rswxcDyotXA" src="https://www.youtube.com/v/rswxcDyotXA" source="YouTube" width="560px" author="Kirby Ferguson" caption="AI and Image Generation (Everything is a Remix Part 4)"> These things are tools. They help people build images that they otherwise would never have been able to create. This is a good thing. If the image is good enough for your purposes---e.g., making a poster image for an article---then you're good to go. It would be an unabashedly good thing, except for how all of the information in the training set was kinda sorta stolen. Some of it was in the public domain, but much of it was not. It's arguable that the richest veins of source images were created by artists, from whom at least permission should have been obtained, if not compensation paid. The cat's out of the bag now, but that's how capitalism works: it just does what it wants and, if the financial upside is bigger than the financial downside, then ethics has nothing to say about it. The video says that AI art can never be more than just aesthetically pleasing, so no biggie. The title of the video is "everything is a remix", which alludes to the point that any art created by humans is also derivative of everything that they've experienced, so, technically, everyone is stealing from everyone all the time anyway. What the AI does, though, is accelerate this process nearly infinitely beyond what humans can do. My biggest problem with the video is that they, as usual, tend to interview the most hyperbolic and least logical of the detractors, which very much straw-mans the argument against the ethicality of these initial forays into computer-generated artwork. It's super-easy to hand-wave away the objection that the product would not have been possible without all of the other works it ate up for free, and that it just gets away with profiting from them. That's the problem, though, isn't it? If what the AIs were producing were not products of multi-billion-dollar corporations, there would be no problem---or at least less of one. If people who produce art didn't have to worry that they were losing their livelihoods, they'd be less burned up about a giant company with billions taking the few specks of income that they have. The video also does not in any way address the fact that artists will have far fewer employment opportunities when "aesthetically pleasing" is all that most commercial buyers are looking for. Which brings us right back to the problem being that capitalism doesn't have an answer for why the things that we actually value the most pay the least. We love music and art and series and shows, yet we have the expression "starving artist", but not "starving banker". We want our children to be taught and our old people to be cared for, but we don't see hospice-care workers and teachers showing off their homes on <i>MTV Cribs</i>. It's not the best teachers in the world buying mega-yachts---it's the most sociopathic assholes you can imagine. We are incentivizing the wrong things. The article <a href="https://kottke.org/21/04/ted-chiang-fears-of-technology-are-fears-of-capitalism" author="Ted Chiang" source="Kottke.org">Fears of Technology Are Fears of Capitalism</a> lays out this argument quite well, <bq>I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. <b>Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us.</b> And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two. Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there. Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. <b>Most of the things that we worry about under the mode of capitalism that the U.S. practices, that is going to put people out of work, that is going to make people’s lives harder, because corporations will see it as a way to increase their profits and reduce their costs.</b> It’s not intrinsic to that technology. It’s not that technology fundamentally is about putting people out of work. It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world.
But it’s like, <b>in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us.</b> How are giant corporations going to use this to increase their profits at our expense?</bq> In a world where an artist could just spend their day creating art, without worrying about how that art is supposed to pay their rent and take care of them in their old age, that artist would probably <i>rejoice</i> to see their influence everywhere in society rather than be bitter about how their contribution hasn't been remunerated. Instead of being able to enjoy their influence on culture, they have to rue it as a lost opportunity for securing their own well-being, both now and in the future. If their well-being were guaranteed anyway, then all of this friction would disappear. Everyone could relax and create wonderful things. Remixing would not only be legal, but strongly encouraged. Why waste time reinventing the wheel? And, if there were no financial incentive to produce art, then we would (maybe) no longer be drowning in mediocre crap that generates just enough revenue to justify itself. Technology is not fundamentally about putting people out of work. It is <i>right now</i>, but it <i>doesn't have to be</i>. Increasing productivity should be welcomed as a good thing. We produce more of what we want with less effort, less energy, and fewer resources. Win-win-win-win. But we have a zero-sum system in which an increase in productivity means a loss for someone else---almost always someone from much further down the food chain, incapable of defending themselves against the predations of that system. We really have to start thinking about how we're going to live in a world where endless-growth capitalism has to stop, because it is literally strangling us. We have to start separating people's self-worth and value in society from how much they earn in that society. Either that, or we have to start assigning fair value to the functions that people actually fill in society. We allow these value-assignments to be determined by those who are on top, so they naturally assign the most value to what they feel like doing and no value to the work that they don't even know is being done. That has to stop. Why should a music-company executive make more money than an artist? Why should a banker make more money than a health-care worker? Our ethics are non-existent. Our values are out of whack. Our income structures are nearly perfectly inverted. The problem isn't with AI. It's just another tool that could be used for good. But it's being perverted by our economic system---just like it perverts everything else.