super brain is out to get you


I recently read yet another article about the alleged threat posed by artificial intelligence. I don’t buy it, at least not as this story is usually framed. When someone says that the danger lies in self-aware, volitional computer intelligence, I think they are projecting science fiction plots onto reality. When they talk about a runaway recursive loop of smarter-than-human computers creating even smarter computers, and so on unto “singularity,” they are pretending that real life resembles a really cool novel they once read.

When do we get there and where is “there”?

I think it’s so unlikely, or so far off, that it would be a wasted effort to worry about it, and even worse to take action on it when there are so many actual serious problems to deal with. Why do I think that? Point me to an example of a self-aware, volitional computer to justify the concern. Show me any non-biological intelligence. As far as I’m aware, examples offered as even the beginnings of this are stretching the definitions of “intelligence” and “self-awareness” to the breaking point.

There is no clear and obvious path from where we are now to a human-like artificial intelligence. People have been working on AI for decades and are no closer. I think it’s possible in principle, and we may get there, but where are the signs that we’re even close? To take one crude example, do you think that something will just “wake up,” given enough processing power? Are there even hints of this happening? Look at the Blue Brain Project: it’s a great effort, but I don’t think anyone involved would say they are even remotely near such an achievement.

Smarter than what?

I think we have another problem, which is even knowing what it means to have a computer that is smarter than humans. Already there are computers that perform calculations much faster than people. Is that it? Most people would say no. They mean a computer that is “super-intelligent,” as far beyond us as we are beyond a microbe. Again, people tend to mean self-aware machines with desires of their own, but super, and incomprehensible. I think we’re assigning magic to consciousness and intelligence. Just scale up the “smartness” 10 or 100-fold and magic happens. The machines will save us. No, they’ll destroy us. No, they’ll put us all in a simulation. No, we’re already in one.

I’ve heard some alarmists suggest that if it happens just once – if a smarter-than-human intelligence develops – we’re in trouble. Then it’s too late. The genie is out of the bottle. How does it get the better of us? Maybe it’s so smart that it uses its super-smart AI silver tongue to talk us into not unplugging it. Or maybe it replicates itself all over the internet, and somehow (magic) gains control of our physical environment and eradicates us. It makes smarter copies of itself, and the smarter it gets, the more magic happens.

Get real

I think a genuine threat is humans giving increasing responsibilities to machines without developing sufficient safeguards. An example would be a self-driving car that can’t handle certain tricky situations on the road. That’s a danger. A malfunctioning gun-wielding military robot that selects its own targets – that’s a danger. A runaway super-smart AI? Not so much. It’s a gross misuse of resources to spend money combating evil machines. We have bigger fish to fry, like poverty, disease, war, and climate change.

ask the universe


The universe – an incomprehensibly large expanse strewn with billions of galaxies, each full of billions of stars. And it’s expanding. Then toss in dark energy and dark matter, even though no one currently knows what those are. Add gravity and stir.

Maybe, like me, you have heard someone ask the universe for help. They’ll say something like, “I want to live in a tropical paradise, so I’ll put the intention out there and see if the universe supports me.” However, I’m pretty sure that the actual universe doesn’t care if you live or die, let alone where you live. Go ahead, just ask it if it cares.

Yet some people believe that an invisible force underlying all reality will “support” them if their wish is in cosmic alignment with … universe energy! Or something vague like that. Years ago, I held a similar belief – everything is made of consciousness, so you can do anything. It’s kind of like “The Secret,” which says that if you really want something, you’ll get it. After all, you create reality, so the world will rearrange itself according to your desires.

In other words, it’s magic. Some people will make ridiculous claims that quantum physics somehow supports this belief in magic. It doesn’t. Go ahead, ask a quantum physicist at the nearest university physics department. I’ll wait. 

These beliefs are no different from the old “praying for what you want” gambit. If your prayer (intention) comes true, the lord (universe) has granted your desire, and if it doesn’t come true, it just wasn’t part of the deity’s plan (energies not in alignment).

Ask the universe to send you a bag of dog chow. Tell the universe you want to meet the love of your life. Ask it to resurrect a dead tulip. Best of all, ask it for something really vague, such as to make everything work out according to a plan you can’t know about. That’s the universe’s specialty.

And if the actions of the universe are indistinguishable from random chance, well, maybe that’s how it prefers to operate.


the oracle cards come nearer


As I mentioned in a previous post, I’m working on a deck of super-magical, beyond-belief cardboard slabs with words and pictures on them. Please note that I didn’t say “magickal” because I’m not an Aleister Crowley devotee, a yahoo, or spell-check deprived.

Anyway, I’ll repeat myself and say that these cards will put the “woo!” in woo-woo and the “sigh” in science. Today I had a graphic designer poke her nose into them and give me a little advice.

The major decision left is whether to go beyond divination and also design a game that can be played with these cards. That’s a challenge. How do you create a card game? There are faint threads of ideas, but I don’t have the mechanics. I asked a game developer for advice and was given the brush-off. He’ll regret it when I sell as many as 10 of these soon-to-be-highly-sought-after miracle items.

Did I mention that handling them will cure diaper rash, borborygmi, and the willies?