I recently read yet another article about the alleged threat posed by artificial intelligence. I don’t buy it, at least not as the story is usually framed. When someone says that the danger lies in self-aware, volitional computer intelligence, I think they are projecting science fiction plots onto reality. When they talk about a runaway recursive loop of smarter-than-human computers creating even smarter computers, and so on unto “singularity,” they are pretending that real life resembles a really cool novel they once read.
When do we get there and where is “there”?
I think it’s so unlikely, or so far off, that it would be a wasted effort to worry about it, and even worse to act on it when there are so many genuinely serious problems to deal with. Why do I think that? Point me to an example of a self-aware, volitional computer to justify the concern. Show me any non-biological intelligence. As far as I’m aware, the examples offered as even the beginnings of this stretch the definitions of “intelligence” and “self-awareness” to the breaking point.
There is no clear and obvious path from where we are now to a human-like artificial intelligence. People have been working on AI for decades and are no closer. I think it’s possible in principle, and we may get there eventually, but where are the signs that we’re even close? To take one crude example, do you think that something will just “wake up,” given enough processing power? Are there even hints of this happening? Look at the Blue Brain Project: it’s a great undertaking, but I don’t think anyone involved would say they are even remotely near such an achievement.
Smarter than what?
I think we have another problem, which is knowing what it would even mean to have a computer that is smarter than humans. There are already computers that perform calculations far faster than any person. Is that it? Most people would say no. They mean a computer that is “super-intelligent,” as far beyond us as we are beyond a microbe. Again, people tend to mean self-aware machines with desires of their own, but super, and incomprehensible. I think we’re assigning magic to consciousness and intelligence: just scale up the “smartness” 10- or 100-fold and magic happens. The machines will save us. No, they’ll destroy us. No, they’ll put us all in a simulation. No, we’re already in one.
I’ve heard some alarmists suggest that if it happens just once, if a smarter-than-human intelligence develops even a single time, we’re in trouble. Then it’s too late. The genie is out of the bottle. But how does it get the better of us? Maybe it’s so smart that it uses its super-smart AI silver tongue to talk us into not unplugging it. Or maybe it replicates itself all over the internet, and somehow (magic) gains control of our physical environment and eradicates us. It makes smarter copies of itself, and the smarter it gets, the more magic happens.
I think a genuine threat is humans giving increasing responsibilities to machines without developing sufficient safeguards. A self-driving car that can’t handle certain tricky situations on the road: that’s a danger. A malfunctioning gun-wielding military robot that selects its own targets: that’s a danger. A runaway super-smart AI? Not so much. It’s a gross misuse of resources to spend money combating evil machines. We have bigger fish to fry, like poverty, disease, war, and climate change.