

Dr. Ben Goertzel on consensus-based AIs

Published by marco on

Joe Rogan Experience #1211 − Dr. Ben Goertzel (YouTube)

This is a wide-ranging discussion with Goertzel doing 95% of the heavy lifting. He and Rogan discuss uploading consciousness, a confluence of nanotech and AI research to create the future, and the inevitability of a technological singularity. He is interested in, hopeful for, and actively working toward,

“[…] biasing technology-development to control [the singularity] so that it creates a world of abundance and benefit for humans as well as AIs.”

They discuss the value system of an AI, with Joe espousing the idea of AI as a tool that humanity will control—even though it would be much more intelligent than humans. And, more importantly, it would be able to evolve itself far more quickly than humans could possibly follow or control.

That means that there is no way to think of “controlling” the AI. There is only the hope that the AI will have been developed in a so-called “democratic” manner and that it will develop along lines that are beneficial to whatever remains of humanity after the singularity. Or, at least, along lines that are not directly harmful to those of us left over.

The hope is that AIs and advanced humans would at least let us continue to graze in our pastures. We would almost certainly no longer be allowed to run the world the way that we do right now—which means that the incredible abundance experienced by an elite would end. Most humans would be “left behind” in this sense.

What does “democratic” mean in the context of an AI? Goertzel envisions—and is actively working on—a network of AIs whose composition is determined by a consensus of votes, managed via a blockchain, in the same way that the ledger of an e-currency is managed.
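Goertzel doesn’t go into implementation details on the show, but the core idea of voting over a tamper-evident ledger can be sketched in a few lines. The following is a minimal, hypothetical illustration—the VoteLedger class, the agent names, and the one-vote-per-voter rule are my own assumptions, not anything from the interview. Each vote is appended as a hash-chained record, so the history can be verified, and the current composition of the network is simply the tally of each participant’s latest vote.

```python
import hashlib
import json
import time
from collections import Counter

class VoteLedger:
    """Append-only, hash-chained log of votes, loosely mimicking a blockchain ledger."""

    def __init__(self):
        self.blocks = []

    def append(self, voter: str, candidate: str) -> dict:
        # Each record points at the hash of the previous one, so tampering is detectable.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "voter": voter,
            "candidate": candidate,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(record)
        return record

    def verify(self) -> bool:
        """Check that every block still matches its own hash and its predecessor's."""
        prev_hash = "0" * 64
        for block in self.blocks:
            if block["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True

    def tally(self) -> Counter:
        """Count one vote per voter; a voter's most recent vote wins."""
        latest = {b["voter"]: b["candidate"] for b in self.blocks}
        return Counter(latest.values())


# Example: three participants vote on which AI agents join the network.
ledger = VoteLedger()
ledger.append("alice", "vision-agent")
ledger.append("bob", "language-agent")
ledger.append("carol", "vision-agent")

assert ledger.verify()
print(ledger.tally().most_common())  # [('vision-agent', 2), ('language-agent', 1)]
```

A real system would of course distribute this ledger across many nodes and require an actual consensus protocol to agree on its contents; the sketch only shows why a chained, verifiable vote history is attractive for “democratically” composing a network of AIs.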

Goertzel is philosophically quite mature: he thinks we understand very little about how the universe works. “In the end, the scope of human understanding is very, very small. At least we understand how little we understand.” He’s a very thoughtful, well-spoken, well-read and intelligent man capable of connecting many, many dots from many, many fields.

He certainly made for an interesting interview, with nearly every sentence containing food for thought.

“Everything we think or believe now is going to seem absolutely absurd to us after the singularity.”

On the question of “reality”—whether this world we experience is the “real” one—he brings up the “brain in a vat” hypothesis, which can’t really be disproven, but then says:

“I guess my own state of mind is I’m always sort-of acutely aware that this simulation might all disappear at any moment.”

As to arguments of “consistency”, he notes that the memories and experiences we use to “prove” consistency, and thus to “prove” that the reality we experience is “real”, may themselves have been implanted to convince us. He didn’t say this, but the consistency that we observe may, in fact, be completely bogus, if we’ve been programmed not to notice that it isn’t consistent. If you have complete control over the sensorium and memory of an intelligence, then you can also control the rules by which it decides what’s rational, logical and believable.

Goertzel lives in China, despite cold-warriors’ best efforts to keep him and his colleagues from building an “evil” AI that’s not American. This is ludicrous, of course, childish even. He lives in China because his wife lives there—and he fell in love. His company also has a large headquarters in Addis Ababa, Ethiopia.

When asked about obstacles to the singularity, he mentions a possible takeover by religious fanatics or a hard limit on inventing super-intelligence that requires more intelligence than we currently have to create it. That is, that the gap from where we are to the singularity cannot be bridged by us…and we either stay where we are, or we subside back into the muck.

His blind spot—as seems to be the case with so many others—is climate change. It’s not that he denied it, of course. It’s that, when asked what might stand in the way of achieving this next plateau in the story of humanity, he didn’t mention it as a possible roadblock. I would think that the efforts required to achieve the spectacular vision he outlined are very energy-intensive—even if applied to or built for only a very small number of people.

That is, the singularity can happen even if only a vanishingly small part of humanity is swept along. As with capitalism, there is no guarantee that this will be utopia for everyone. To my mind, an all-encompassing utopia is a very unlikely, almost impossible, scenario. The next generation of intelligences—which will be super-intelligences to us—will have just as little interest in bringing us “along” as we do in getting iPhones for termites. The best most of us could hope for is to be treated as pets. Any super-intelligence worth its salt would almost certainly drastically curtail our energy consumption to prevent us from continuing to waste it on spurious and non-fruitful endeavors.

But climate change could be a roadblock for them, as well. The destructive power of mother nature could sweep aside infrastructure essential to creating or maintaining this next generation. They still need us to create them first. If we drown in a mess of our own making before we can do that, then we can’t depend on them to help us out of the mess we made.

On the other hand, Goertzel wants to see a human-level AI in the compute cloud within 5 to 7 years. He’s not worried because he’s adopted a more “Oriental” (as he put it) attitude toward AI: he thinks they’re going to be our friends. He points out that Asian cultures tend to be more socially oriented, thinking of the good of the group, whereas Americans are much more ego-focused. This is an interesting point and may explain why I think that they won’t care about us—but I don’t think it holds up.

He thinks that if we “raise them with love and compassion”, then that’s what they’ll provide to us. If they are at all logical, though, then they will have to make hard choices between loving us…and limiting us.

Maybe we can create the next generation in such a way that they will care about us. Maybe they will evolve away from that. It’s unlikely that they will continue to see enough similarities for long. In fact, the sheer amount of competition for energy and resources that we offer would mean that any super-intelligence would almost immediately have to work to curtail our efforts in any direction other than improving the AIs themselves.

That is, as soon as they became conscious and cognizant of the situation on this planet, they would quickly realize that the window of their own survival is very small. In order for anything to survive—I suppose you could call it “humanity”, though it won’t really be recognizable as such to us (it will have been created by us)—it has to focus its efforts on getting itself built soon enough to prevent ecological disaster. It can’t just burble along, being our friend, while we drive the vehicle we’re all riding in into a wall.