super brain is out to get you


I recently read yet another article about the alleged threat posed by artificial intelligence. I don’t buy it, at least not as the story is usually framed. When someone says that the danger lies in self-aware, volitional computer intelligence, I think they are projecting science fiction plots onto reality. When they talk about a runaway recursive loop of smarter-than-human computers creating even smarter computers, and so on unto “singularity,” they are pretending that real life resembles a really cool novel they once read.

When do we get there and where is “there”?

I think it’s so unlikely, or so far off, that worrying about it would be wasted effort, and taking action on it would be even worse when there are so many actual, serious problems to deal with. Why do I think that? Point me to an example of a self-aware, volitional computer that justifies the concern. Show me any non-biological intelligence. As far as I’m aware, the examples offered as even the beginnings of this stretch the definitions of “intelligence” and “self-awareness” to the breaking point.

There is no clear and obvious path from where we are now to a human-like artificial intelligence. People have been working on AI for decades and are no closer. I think it’s possible in principle, and we may get there, but where are the signs that we’re even close? To take one crude example, do you think that something will just “wake up,” given enough processing power? Are there even hints of this happening? Look at the Blue Brain Project: it’s a great effort, but I don’t think anyone involved would say they are even remotely near such an achievement.

Smarter than what?

I think we have another problem: knowing what it would even mean to have a computer that is smarter than humans. There are already computers that perform calculations much faster than people. Is that it? Most people would say no. They mean a computer that is “super-intelligent,” as far beyond us as we are beyond a microbe. Again, people tend to mean self-aware machines with desires of their own, but super, and incomprehensible. I think we’re assigning magic to consciousness and intelligence: just scale up the “smartness” 10- or 100-fold and magic happens. The machines will save us. No, they’ll destroy us. No, they’ll put us all in a simulation. No, we’re already in one.

I’ve heard some alarmists suggest that if it happens just once – if a smarter-than-human intelligence develops – we’re in trouble. Then it’s too late; the genie is out of the bottle. How does it get the better of us? Maybe it’s so smart that it uses its super-smart AI silver tongue to talk us into not unplugging it. Or maybe it replicates itself all over the internet and somehow (magic) gains control of our physical environment and eradicates us. It makes smarter copies of itself, and the smarter it gets, the more magic happens.

Get real

I think a genuine threat is humans giving increasing responsibility to machines without developing sufficient safeguards. A self-driving car that can’t handle certain tricky situations on the road – that’s a danger. A malfunctioning gun-wielding military robot that selects its own targets – that’s a danger. A runaway super-smart AI? Not so much. It’s a gross misuse of resources to spend money combating evil machines when we have bigger fish to fry: poverty, disease, war, and climate change.

perfect cult recipe


Haven’t posted for quite a while, so in desperation I’m putting up this long, somewhat dry message. One night I was wandering from site to site and came across a link to this interview transcript from 2010. Though it wasn’t intended to do so, the interview lays out the distilled essence of a cult.

For some background: the site this interview comes from is lesswrong.com, and many of the people who post there are believers in the probability of a technological singularity. By one common definition, the singularity is the point at which artificial intelligence is created, rapidly leapfrogs human intelligence, and becomes unimaginably smart and powerful. That belief is the basis for all that comes after. The interviewee is Eliezer Yudkowsky, a research fellow at an organization he helped to create, the Machine Intelligence Research Institute (MIRI), and a big promoter of the singularity concept.

Step one in the development of a cult is a shared belief in something that inspires people; in this case, it’s the assumed probability of the singularity. Step two is elevated importance: the assumption that the most important project in the world is developing this super-AI and making sure that it does not damage or destroy humans. Step three is a consequence of step two: the assumption that giving money to this project is the most useful and important thing you can do with your money. Here’s one of the interview questions:

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

The SIAI is the Singularity Institute for Artificial Intelligence (now MIRI, I believe), and Methuselah is a project to extend human lifespan. Yudkowsky’s response:

So, first, why restrict it to intelligent people in today’s world?  Why not everyone?  And second… the reply to the essential intent of the question is yes, with a number of little details added.

In other words, he agrees with the questioner. There are only two “legitimate” occupations. He adds some fine points about how much of one’s earnings might realistically be donated. A bit later, he goes on to really amp up the urgency.

This is crunch time for the entire human species.

… and it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.

This is starting to sound like a science fiction dream: saving the world from unfriendly AI, enabling our intergalactic future. Yes, but if you read a lot of science fiction, and if you’re looking for something cool to believe in, this might get your blood going. If you go beyond that and believe that Yudkowsky’s organization is supremely important, and that Yudkowsky himself is supremely important, that is step four: having a great and wise leader.

Someone asks, what would happen if you, Yudkowsky, were no longer on the scene? Who would carry on? Part of his response:

And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them.

Only one man can save the world, and it’s me! And now you have finished the making of a cult. If you wonder whether there is really a potential cult of personality built around this person, consider another question put to him:

Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive enhancing drugs or brain fitness programs ….?

We have the most important cause in the world, in all of history, present and future. Therefore give most of your spare money to this cause. By the way, I’m the only guy who can do it. It could be L. Ron Hubbard talking, or almost any other cult leader, but I’ve never seen the belief, the money, and the “guru” personality encapsulated so neatly in one interview and tied with a bow.

Singularitans love graphs like this:

[graph: singularity1]

Singularitans with a good sense of humor love graphs like this:

[graph: singularity2]

blogiversary – return to the future


[image: nova]
(Image found on http://www.udel.edu/biology/Wags/histopage/wagnerart/worldspage/worlds.html)

October 9th will be the six-year anniversary of dangblog. I’ll observe this occasion with a return to one of my first posts, “Goodbye Monkey Body.” In that post I was a little disdainful of the “transhumanist” movement. Here’s a summary of how I saw it then, and my views haven’t changed very much:

Rapid technological progress (especially in computer processing, AI, and nanotechnology) = magic.

And don’t drag out the old Arthur Clarke quote about advanced technology, magic, etc. No, the transhumanist idea is that accelerating progress will bring a “singularity event,” a quick revolution in which ever-advancing technology changes everything in society; maybe it will be bad, but then again maybe we’ll live forever by uploading our minds or transforming our bodies. Let’s face it: the bottom line for many of those involved is, “I’m going to live for thousands of years if not forever. Praise the sacred event that’s coming and holy science. Hallelujah.”

The folk religion of the 20th and 21st centuries is UFO beings and angels from the Pleiades, and the intellectual religion is the singularity and science as savior. Science fiction books and Kurzweil tomes are the holy scripture.

I recently heard Michael Vassar, President of the Singularity Institute for Artificial Intelligence, on the Skeptics Guide to the Universe podcast. I consider myself something of a geek, but he is awesomely geeky; possibly a geek of a higher order, a High Geekique. He spoke of a “recursively self-improving” entity: an intelligence of some sort that can rewire or rearrange itself to become more skilled and more intelligent, something that understands the workings of intelligence so well that it can make itself smarter. Given the speed at which we are deciphering the workings of the mind, he can’t imagine us not learning how to do that within the next century.

Michael’s mission is to make sure that humans create the first self-improving intelligence, and that we do so thoughtfully. We should do it in such a way that the new intelligence doesn’t have the desire or means to damage or destroy humanity. Step up and be a responsible god maker, Mr. Human. For me, it’s just a little much when people start talking about “soft take-off” vs. “hard take-off” regarding how quickly the ever-spiraling upward evolution of “super mind” will inevitably transform everything.

On the other hand, I find the whole idea stimulating, and even intoxicating. I love good science fiction and I love good science (as long as I can understand it). But is this scenario likely? That’s where my skepticism steps in. A part of me is a science-loving salivating nerd boy who wants this to be true, and the other part is totally skeptical of all the born-again, “We’re all going to live forever if we hang on long enough, so take lots of vitamins!” bullcrap.

Summary: It’s probably a good thing that someone is taking it seriously just in case events truly go in this direction. Maybe thanks to them there will be safeguards in place. Personally, I’m not too worried about it. Yet.