This page shows the source for this entry, with WebCore formatting language tags and attributes highlighted.

Title

Mo Gawdat talks to himself again

Description

I've watched Mo Gawdat before (see <a href="{app}view_article.php?id=4773">Mo Gawdat discusses AI</a>). He is an acquired taste, at least for me. There is good in what he says, but it is interspersed with a lot of wild and unsubstantiated statements that he hopes you'll believe because he's <i>so smart</i>. The listener is left wondering whether they don't see the through-line on what he's saying because he's skipped a bunch of steps that his unparalleled genius didn't see as necessary or whether he's just pulling a fast one.<fn> This is the video: <media href="https://www.youtube.com/watch?v=fDHvUviV8nk" src="https://www.youtube.com/v/fDHvUviV8nk" author="James Laughlin" caption="URGENT: Ex-Google CBO says AI is now IMPOSSIBLE to stop with Mo Gawdat" source="YouTube" width="560px"> And these are my cleaned-up, more-or-less, stream-of-consciousness notes I took as I listened to this video. The interview starts off with a warning by the clearly overexcited host that the topics that will be discussed are so transgressive that you might be triggered by them. OK, sure. Whatever. Then, there is the by-now familiar Mo Gawdat introduction where he talks about writing an entire book in nine days because his mind is so organized and his <i>CHI</i> is <i>SO FLOW</i> and he uses silence as a <i>fucking weapon</i> and he doesn't waste time being like those other high-powered billionaire executives who are always chasing the cheese in the maze...but then he says things like, <bq>One of my best, best friends is Gelong Thubten, <b>who's one of the top monks of the UK.</b></bq> What in the hell does that even mean? Is there a FIFA-style ranking for monks? I wouldn't mention it but, for me, it reveals that his mindset isn't <i>quite</i> where he'd like to have it yet, I think. But hey, no problem, life is aspirational. Talk about the thing you want to be until you are that thing. Because what he is advocating is, in general, pretty good.
But it also seems like it best applies to those who no longer have to worry about any worldly needs, those who have achieved financial orbit. Because <i>not</i> following that advice is what made him a hyper-millionaire in the first place. For those who aren't in that enlightened post-capitalist place---i.e., in the way that he's used capitalism to escape capitalism---the advice may ring a bit hollow. Also, the dude is wicked smart, and it's often the case that smart people can't quite see why other people don't just try harder to be as smart as them. The host James is really embarrassing himself. He's all like, "aw man, I would love to be silent for days," to which Gawdat says, "even 26 days is not enough." 🤦‍♂️ Cool, bro...so the podcast host wants to be silent more, and the orbital capitalist millionaire tells him that he should do more than 26 days of silence. Neat. Did Gawdat forget that the system is organized in such a fashion that most people can't take that much time off without getting hungry or cold? Or that the guy he's talking to is literally full of shit because his whole jam is to talk on videos for likes to make money? Gawdat continues, <bq>By day 32, clarity sets in.</bq> Sure, ok. 32 days without <iq>reading, inputting information, or interacting with people</iq> is ... a lot. I feel like it's the kind of thing that people do who can't find balance otherwise, who can't figure out how to get silent moments integrated into their normal lives. He talks about sitting in front of a paper notebook without any digital input, etc. But it would kill me to sit that long. Instead, I would go for a walk or a hike. He does mention that he sometimes does "mini retreats" where he starts his day at 16:00. Sure, again, good for you. I don't think the 7-Eleven employee gets that many mental-health days. He talks about AIs "being smarter" than us and that AIs will be "a billion times smarter" than us "by 2037". What the hell does that even mean?
I like that he doesn't even consider that he might be wrong about these levels of smartness. Like, where does context and wisdom enter into it? Like, what about useful intelligence? If you're capable of grasping incredible complexity, but you don't know a language that anyone else knows, then it's of limited use. I find these discussions interesting, but I don't know what that has to do with LLMs. It can get a PhD, it "outsmarts us", but it still doesn't know how many arms a person has. It can be convinced that 2 + 2 = 5. Don't we have to understand what this kind of "smart" actually means? In a way, there are already such beings in the world. They walk among us. They are the <i>smart people</i>. Most people don't grasp a goddamned thing about their world. Those who <i>do</i> grasp a lot---who are currently at the top of the heap---are they now terrified of being left behind? Of being like everyone else? Are we simply witnessing the panic of a self-selected intellectual elite being terrified that they've made themselves obsolete? Are they scared of things existing that they don't understand and can't understand? That's OK, no? There's a ton of stuff happening in countries where I don't know the language or the culture or anything. That's all out of my control already. There's no way I'll ever understand it. I wonder how much of what Mo's talking about is the terror of a control-freak? The attitude he has toward AI feels, to me, conceptually similar to the attitude that the U.S. has to anything it doesn't understand. Subjugate or eliminate. Maybe that's the right attitude to have for AI as well. It might be the right one because <i>this time it's different</i>---but, man, have I heard that story many times before. I suppose if you accept that premise of smartness---he still hasn't defined it more than vaguely---then you'd want to keep it from replacing us? Are we really talking about that? 
I think his comments in the other video were pithier---that it's not the <abbr title="Artificial Super Intelligence">ASI</abbr>s we should be afraid of, it's what people will do to us with them. I fall back on my comparison to the development of atomic power plants...and then atomic weapons. At <b>26:30</b>, he says, <bq>one of the best code developers on Earth today is AI. As a matter of fact, with[in] weeks or months or years---it doesn't matter the time; it's inevitable, it doesn't matter when---they will be, by far, the best software developers on the planet.</bq> It kind of does matter when, no? Seriously, this guy elides so much stuff from his arguments. I wonder if he's thought it through and he just skips large portions or whether he's just ... full of shit. He just hand-waves away the temporal component. It doesn't matter when? Like, if they became better developers millennia from now, that would be the same so-called threat as if they were already the best software developers? C'mon, dude. He then cites another friend of his, the CEO of <i>Stability.AI</i>, who says that, <bq>40% of all code on GitHub today is written by a machine.</bq> First of all ... proof? Second of all ... are we just going to take the CEO of an AI company at their word that AI is taking over? Third of all, is Gawdat being sneaky when he says "machine"? There's already a ton of generated code, but it wasn't generated by an LLM. It was generated by tools that create boilerplate. And if it's 40%, is that good code? Or is volume the most important thing? We've spent decades trying to escape the <a href="https://en.wikipedia.org/wiki/Charybdis">Charybdis</a> of <abbr title="Lines of Code">LOC</abbr> and here we're pulled right back to measuring by size, not quality. I just want to note that James is insufferable. He offers no pushback at all. Nothing. Gawdat again: <bq>10 out of 10 of the most beautiful women in the world are not human. They're generated.</bq> C'mon, dude.
You start off with this woo-ey meditation shit, but you think that a statement like that isn't philosophically fraught? Isn't beauty in the eye of the beholder? That people think an AI-generated person is beautiful ... doesn't that say more about the superficiality of our society than about a takeover of AI? There are so many better things to discuss than this angle. <bq>you have GPT being that you know geek boy nerd if you want or---and I say boy, sadly, not girl okay? Because, again, it's developed around IQ and there is a lot of emphasis on the masculine side of analytical thinking and so on and so forth, which is an unbalanced form of intelligence.</bq> There's a lot to unpack there. Analytical thinking is masculine? Well, well, well. This kind of attitude is, I suppose, the kind of thing that leads to the inherent bias of the machine that he's talking about, but I'm increasingly less likely to give him the benefit of the doubt that that's what he was trying to imply. I find it interesting that people like Gawdat discuss humans and people and what they would do, all without really speaking about how they actually tick. He says, <bq>I think when AI reaches that level of intelligence, [we] will become irrelevant to it. [...] No human wakes up in the morning and goes 'you know what? I'm so annoyed by ants I'm gonna kill every ant on the planet.' Nobody does that, okay? It's just [that] ants become irrelevant. They become relevant if they come into your space, so you may spray your balcony or whatever but no human comes up with that enormous plan of 'you know what? The world is bad until we get rid of all ants.' Nobody does that.</bq> OK, like, you're ignoring a lot of history. People very definitely do that. It's called genocide. They don't always get every last one, but it's shocking to hear someone so admiring of their own intelligence not even think about Hitler or Suharto or Armenia or Native Americans.
I wonder why he's so laser-focused on potential problems while ignoring all of the very real ones that we have now. Like, he's worried about how we're going to interact with an AI that will be all-powerful and indifferent to us, right? But there are billions of people on the planet who already live exactly like that. Their lives are entirely influenced and completely controlled by the whims of an unseen and unknowable elite. It's hard not to see Gawdat's panic as being the reaction of someone who is in that elite and realizes that he may soon not be at the top of the heap anymore---as another alpha predator comes to town. Instead of recognizing the situation and trying to remedy his own role in it, he imagines a new layer and sounds the klaxon. AIs are going to destroy us all. Um, yeah, I guess, those of us who weren't already destroyed by capitalism? Like, capitalism's utter inability to do anything positive about climate change. Austerity. Intensifying animosity and dis-empathy between peoples. And I'm supposed to worry about SkyNet? I honestly feel like I'm listening to a blockchain huckster. The style is the same. At <b>31:30</b>, he starts talking about how <iq>the most valuable asset on the planet ... intelligence.</iq> I was just talking about this conceit with a guy I met in a bar yesterday (Matuš). The problem is that our society values the wrong things. The most intelligent people also consider themselves to be the most valuable. Yes, intelligence can be leveraged, but <i>everyone</i> is important. That intelligent person doesn't help anyone if they die of sepsis. They're not helping anyone if they don't have working plumbing. The discussion veers into relatively standard discussions of AI doomsaying. At <b>39:00</b>, <bq><b>Gawdat:</b> The only way we could reset is by resetting the entire Internet. <b>James:</b> Now, is that something that could ever happen? <b>Gawdat: </b> Never.
I was sitting in silence the other day, and I wrote down three quadrants...</bq> <abbr title="Jesus Fucking Christ">JFC</abbr>. This is definitely the wrong interlocutor for Gawdat. Somebody needs to call him on his sweeping, bullshit statements. "Reset the Internet." "1 Billion Times Smarter." C'mon. This is kind of fun, but it's not a serious discussion, because only Gawdat is contributing to it. He's now spending a ton of time explaining how people are selfish and incapable of working together above a clan level. Duh. Or that no-one can really say where the Internet actually is. Interesting question, but he skips away quickly to talk about how awesome intelligence is. He just can't stop. <bq><b>Gawdat:</b> I tend to believe that abundance of intelligence normally uh you know is correlated to abundance of ethics. <b>James:</b> [nods vigorously]</bq> What? You've got to be kidding me. The relationship is nearly inversely proportional, with a few outliers. If you thought he wasn't going to double down on that statement, don't worry. <bq>So, [...] the dumbest of all of us would be destroying the planet [...] and causing global climate change without even being aware of it you know. The less dumb would be destroying the planet despite being aware of it. Then, the slightly smarter will attempt to stop destroying the planet because they're aware of it. The smarter still would attempt to fix the planet because they're aware of the damage right, and you continue that trajectory. The smartest of all will always be pro-life. I always say that human arrogance makes us think that we are the smartest human---smartest being---on the planet. That's not true at all. The smartest being on the planet is life itself.</bq> I mean, ok, I guess? This is just one of those statements where you can make out of it what you want, but the thrust is that intelligent people are the only ones possessed of sufficient ethics to save us.
<a href="https://en.wikipedia.org/wiki/Philosopher_king">Plato sends his regards.</a> James just says <iq>I love that</iq> to everything, but Mo doesn't even notice that he's basically just talking to himself for 90 minutes. This didn't need to be an interview-format video, with two people. It's like 50% of the video screen is just a reaction video of James's goofy head. At <b>50:40</b>, James tries to ask a question, <bq><b>James:</b> What kind of control and ownership do we have as individuals, over the power of ... <b>Gawdat:</b> That's the most beautiful question of all.</bq> He didn't even let him finish asking the question! He instead shoots right back into talking about a book he wrote (<i>Scary Smart</i>, as he's done several times already). At about <b>53:00</b> or so, he launches into a discussion of ethics, absolutely confusing social mores with ethics by giving an example of a Brazilian girl in a G-string versus a more conservative girl in a Muslim society. They are both respected for doing the right thing in their society, I guess? Those are just cultural habits. I would have focused more on the underpinnings that led to those behaviors, like whether women have the same autonomy as men. But, yes, ethics is how societies resolve moral questions, like what is good, virtuous, evil, so I guess it fits. And he gets to say "G-string" and summon up the image of an ample, bronzed, Brazilian booty. This whole section is about bias, but he thinks we can control <iq>the ethical code of that machine.</iq> If he's right, then it's already too late, no? The machines have been built with the "wrong" ethics. Then he hand-waves some stuff about how governments will have to build their own AIs to prevent AIs from being used for evil, then shoots right past that to give examples of how enough swipes on Instagram can help fix the ethics of an AI. Whooooooo. This guy doesn't know many people. Has he heard of Internet trolls?
But then, but then, but then, he complains---for what feels like the fourth or fifth time---about people on his social-media accounts who are mean to him, when all he wants is to make billions of people happy. My cult-leader spidey-sense is going off to beat the band. And James is just nodding away like a dashboard bobblehead on a bumpy road, while the top comment on the video is <iq>[h]e is down to earth.</iq> What is happening!?! I think Gawdat could be a much better person if he didn't spend so much time interacting with idiots online. Then, maybe, he wouldn't have to make 40-day retreats to get right again. I see it in many other people I follow: otherwise intelligent people who end up making the broadest comparisons and the most shallow and incorrect arguments, just because that's how they've been taught to think by the kindergarten schoolyard that is online discourse. I was just listening to the Useful Idiots Podcast, with Aaron Maté and Katie Halper. I really like them. I think they're intelligent, witty, and have their ethics in the right place. But they drew several conclusions that were absolutely correct, yet justified them with completely specious reasoning. It's the kind of thing that makes you so assailable. You fail to lock down your point because you've made it in a way that someone who's looking to disagree with you, no matter what, can use to continue the discussion long after it should have been shut down. I think that's my problem with Gawdat as well---his interactions have encouraged him to be lazy in his justifications for what I agree are the correct sentiments, which means I can't really use anything he says as ammunition. It's a pity. At <b>01:05:00</b>, he argues for the essential goodness of humanity, <bq>Are there more serial killers in the world or people who condemn killing?</bq> Sure, there are more people with a pulse who are essentially good. Fine. Correct.
But it's the assholes who seem to have the overwhelming share of power and influence. The essentially good don't have any influence. Jesus was wrong. The meek aren't really lined up to inherit shit. He touches on this as well, saying that the worst people are in politics, who get all the money, who are contributing the most information to the AIs. He says <iq>the best of us</iq> have <iq>a duty</iq> to take part. Sigh. Who's the best of us? Which ethics? Implicit in his line of reasoning is that there is such a thing as "good ethics", else with what would you align an AI? How would you select the "right" people for politics and training AIs? Plato's philosopher kings all over again. <bq>You can't succeed by being good. And it's the most important time in human history to be good.</bq> He dances around the topic of how the system is utterly broken---perhaps because it's how he even got to a position where he has more money than any human needs and everyone wants to know what he has to say. When James asks him whether anyone can just ignore AI, Gawdat cuts him off again, saying <iq>you will die in two or three years.</iq> Wait, what? Then he clarifies, <bq>As a business. It's as if you were trying to hang onto the fax machine in the age of the Internet.</bq> I'm sure everyone's getting tired of me picking Mo's nits, but he really, really elides so much in his analysis of "the world." He uses "the world" as shorthand for "all of the 1%-ers he knows in Silicon Valley will have to adopt AI or their businesses will die." Most of the world doesn't have use cases for AI, but he doesn't think of them---or he's deluded into thinking that they do have use cases somehow---or that they can be convinced to have them. 
He whipsaws back and forth between talking about his extraordinary empathy for his fellow man---and his utter inability to understand that the things that make humanity worth preserving have nothing to do with electronic mediation---or with the coming AI mediation of interaction. He speaks very quickly, but I get the distinct feeling that he's very wide, but not very deep. He is what passes for deep in his circles. But he doesn't really know any of the hoi polloi. Nor does he see that as necessary. They're the "dumb" ones "without ethics." He's slicker, but this is the basic line of reasoning of Hillary Clinton and her ilk. He values intelligence above all else. Nothing else even comes close. That's not how the world works. Everything is important. Intelligence can be leveraged. But intelligence doesn't fix the indoor plumbing. He sounds kind of naive, but I think his spiel is also perfect for telling billionaires exactly what they want to hear. Hell, they could be getting worse advice, don't get me wrong, but his advice is so suffused with that hustler mentality---<iq>whatever job you're going to choose, choose the job where you're going to be in the top two of people [who] can do that job</iq>---all while he won't shut up about silence and retreats and meditation and spiritualism. Really? The TOP TWO? Like, does that mean you shouldn't work at McDonald's? Who are you talking to, man? Like just your circle of self-selected ... philosopher kings. And every idiot in his cult will think "he's talking right to me!" Then he corrects himself to say <iq>2 out of 10</iq>. <iq>Whatever you do, choose a job that you're very good at.</iq> James: <iq>That's powerful</iq> Christ almighty, James, you're terrible. Meanwhile, 90% of the world is just looking at Mo, going, "choose" a job? Luxury! At <b>1:20:30</b>, he says,
<bq>Steve Jobs was successful because he had an empathy for the user's needs, an appreciation of beauty, and enormous creativity---that actually are all feminine qualities.</bq> There he goes again, with his masculine and feminine qualities. Am I missing something? Is this not junk science? How does he get away with this kind of talk in his circles? At <b>1:23:00</b>, James says <iq>I want to ask one last question.</iq> Dude, did you even get in a <i>first</i> question? I've just been watching your nodding head in the left-hand-side panel like you'd been generated by Nvidia's AI. In fairness, I liked part of Mo's answer, describing what he thinks "purpose" is. <bq>I think the definition of purpose as per the Western society is very much commoditized---it's almost like a target. It's like, I set a target in the future. I spend the next eight years pursuing it, feeling frustrated and upset that I haven't achieved it and then, when I achieve it, I have one of two choices. Either to set another target and feel upset for the next eight to nine years or to feel empty and feel that I'm purposeless. That's a very misleading view of purpose honestly. It's a very misleading view of the game of life in general. Because the only point in life that you have access to is right now. The Eastern philosophies will tell you: no, how can you set your life around a future-centric moment when life is here and now? How can you do that? The only way you can actually live life is to live here and now and so the definition of purpose becomes very different.</bq> Why would he think people would "hate him" for that? Ah, because he knows his audience is full of high-optimizing tech bros who are interested in appearing deep, but are really interested in money, and funding, and retiring. <bq>The purpose of life is to become the best you can be at something that you want to be and that makes life better for others. If you define life's purpose this way, it becomes so easy.
Because you know what the one thing that a writer can do to achieve that purpose? It's to write. Even if what you write is discarded, the purpose is not the book that I'm writing. The purpose is to write. That way of looking at life is very different than the Western way and I think that way of looking at life---'I want to become the best at whatever it is that I can do'---that is the right way to live with purpose.</bq> He keeps talking from the viewpoint of someone who's achieved a lot and who is very intelligent, constantly making the assumption that everyone else can achieve like him. Or, if not, he doesn't address the reality that most people who achieve the best that they can be at something are not going to be able to support themselves in the world we have. The world we have doesn't support this type of purpose for more than 5% of the people. We should have such a world, but we don't. Yet. If I were in James's place, I would have pumped him much more for ideas about how he thinks we can get there from here. How can we make the person who cleans toilets feel like they're valued, like they're living their best life? I'm not kidding. This is the problem you would need to solve. It's a shame that James just yes-manned his way through the interview because I feel that there's much more there---or maybe we would find out that there isn't. The other interview I saw with Mo Gawdat was very much in the same style. At the end, Mo says <iq>this was a wonderful conversation. At least for me, I felt it was really connected and deep.</iq> AHAHAHA. He spoke for 99% of the time. He was talking to himself, pretty much. He's not lying. He had a great time. James gave himself whiplash nodding for 90 minutes. 😉 <hr> <ft>I am aware of the irony of my writing this, as writing about Gawdat---this video was recommended to me by a friend who said "he reminds me a bit of you"---makes me wonder how much I'm describing how other people view me.
I read a lot and, granted, sometimes I skip a lot of steps. I have had the benefit of having much better interlocutors than James. They call me on my bullshit more than Mo gets called on his.</ft>