blogiversary: return to the future


nova (image found on http://www.udel.edu/biology/Wags/histopage/wagnerart/worldspage/worlds.html)

October 9th will be the six-year anniversary of dangblog. I’ll observe the occasion by returning to one of my first posts, “Goodbye Monkey Body.” In that post I was a little disdainful of the “transhumanist” movement. Here’s a summary of how I saw it then, and my views haven’t changed very much:

Rapid technological progress (especially in computer processing, AI, and nanotechnology) = magic.

And don’t drag out the old Arthur C. Clarke quote about advanced technology being indistinguishable from magic. No, the transhumanist idea is that accelerating progress will bring a “singularity event”: a quick revolution in which everything in society changes, and maybe it will be bad, but then again maybe we’ll live forever by uploading our minds or transforming our bodies. Let’s face it: the bottom line for many of those involved is, “I’m going to live for thousands of years, if not forever. Praise the sacred event that’s coming and holy science. Hallelujah.”

The folk religion of the 20th and 21st centuries is UFO beings and angels from the Pleiades; the intellectual religion is the singularity and science as savior. Science fiction books and Kurzweil tomes are the holy scripture.

I recently heard Michael Vassar, President of the Singularity Institute for Artificial Intelligence, on The Skeptics’ Guide to the Universe podcast. I consider myself something of a geek, but he is awesomely geeky; possibly a geek of a higher order, a High Geekique. He spoke of a “recursively self-improving” entity: an intelligence of some sort that can rewire or rearrange itself to become more skilled and more intelligent. Something that understands the workings of intelligence so well that it can make itself smarter. Given the speed at which we are deciphering the workings of the mind, he can’t imagine us not learning how to do that within the next century.

Michael’s mission is to make sure that humans create the first self-improving intelligence, and that we do so thoughtfully. We should do it in such a way that the new intelligence doesn’t have the desire or the means to damage or destroy humanity. Step up and be a responsible god maker, Mr. Human. For me, it’s just a little much when people start talking about “soft take-off” versus “hard take-off,” debating how quickly the ever-spiraling upward evolution of “super mind” will inevitably transform everything.

On the other hand, I find the whole idea stimulating, even intoxicating. I love good science fiction and I love good science (as long as I can understand it). But is this scenario likely? That’s where my skepticism steps in. Part of me is a science-loving, salivating nerd boy who wants this to be true; the other part is totally skeptical of all the born-again, “We’re all going to live forever if we hang on long enough, so take lots of vitamins!” bullcrap.

Summary: it’s probably a good thing that someone is taking this seriously, just in case events truly go in this direction. Maybe, thanks to them, there will be safeguards in place. Personally, I’m not too worried about it. Yet.
