

Those of you who attend conferences in artificial intelligence and artificial neural networks may recall the perennial futurist talks by such notables as Karl Pribram (who has presented technical papers at IJCNN many times, including in 1996) and David Stork; well-known "optimistic" futurists such as Ray Kurzweil and Stephen Thaler; and also the famous kooks, who are too numerous to name.

So, how are we doing? What is the state of the technological singularity?

I have to ask because noted critics such as Sir Roger Penrose and Hubert Dreyfus tend not to speak at AI conferences, leaving only the "sunny siders" with a voice. You'll often see interviews in print with Hans Moravec, Eric Drexler, or visionaries such as Vint Cerf, but they'll often read like brochures from the Jetsons.

Ray Kurzweil, for example, spoke at AAAI 2002. He propounded his characteristic notion that when Moore's Law gets us up to a number of switching elements that is comparable to the number of synapses in the human brain (about 10^14 synapses for between 10^10 and 10^11 neurons), magic will happen. More on this specific claim later.
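For the curious, here is a back-of-the-envelope sketch of the arithmetic behind that claim. The neuron and synapse counts are the rough estimates cited above; the 2002 baseline transistor count and the two-year doubling period are illustrative assumptions, not figures from Kurzweil's talk.

```python
import math

# Rough brain-scale estimates (as cited in the post):
neurons = 10**10             # low-end estimate of neurons in the human brain
synapses_per_neuron = 10**4  # rough average connectivity
synapses = neurons * synapses_per_neuron
print(f"~{synapses:.0e} synapses")  # ~1e+14

# Naive Moore's Law extrapolation (assumed parameters): transistor counts
# doubling every 2 years, from a hypothetical 10^8-transistor chip in 2002.
base_year, base_transistors = 2002, 10**8
doublings = math.log2(synapses / base_transistors)
crossover = base_year + 2 * doublings
print(f"crossover around {crossover:.0f}")  # around 2042
```

Even granting the premise, this says nothing about whether 10^14 switching elements organized as a CPU behave anything like 10^14 synapses, which is precisely the gap the "magic will happen" hand-wave papers over.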

ETA, 10:50 CST Sun 23 Nov 2008: Here's another example, an excerpt from Computerworld's interview with Ray Kurzweil on 11 Nov 2007.

Q: How will hardware technologies evolve over the next 10 years?

A: If you go out 10 years, computers are not going to be these rectangular objects we carry around. They're going to be extremely tiny. They're going to be everywhere. There's going to be pervasive computing. It's going to be embedded in the environment, in our clothing. It's going to be self-organizing.

We're going to solve this dilemma we have now with displays. On the one hand, people like 50-inch screens, and they'll spend thousands of dollars on them. On the other hand, they like watching movies on a 1- or 2-inch screen, but that's really not a satisfactory experience. We are going to solve that by putting the displays in our glasses, which will beam images to our retinas. This will create very high-resolution virtual displays that can hover in the air. And it can also completely overtake your visual field of view in three dimensions, creating full-immersion visual/auditory virtual reality.

We'll also have augmented real reality. The computers will be watching what you watch, listening to what you're saying, and they'll be helping. So if you look at someone, little pop-ups will appear in your field of view, reminding you of who that is, giving you information about them, reminding you that it's their birthday next Tuesday. If you look at buildings, it will give you information, it will help you walk around. If it hears you stumbling over some information that you can't quite think of, it will just pop up without you having to ask.

Well, it's true we're working on all of this, but such rosy forecasts do the field a disservice, much as Ramanujan Syndrome makes ambitious researchers-to-be clap their hands over their ears and say, "Don't tell me! I want to figure this out for myself!"

I think it's great to be optimistic, but you also have to lay bare before the public what the challenges are. I actually don't think it helps to elide mention of our hangups and mental blocks. You're not going to help Mozart emerge by not having Salieri and his ilk to teach him. That's just Ramanujan Syndrome in my book - the romance of the self-taught. Not everyone needs formal education, but good education never stifled anybody. So tell AI neophytes about our foibles, our little obsessions with games and killer apps, our open problems. Just don't suggest that the open problems are insurmountable. We're waiting for our grand unification theory (GUT) - heck, we're waiting for our theory of relativity - but it will come for us as it has for other fields.

ETA, 12:50 CST Sun 23 Nov 2008: It occurs to me that I've never asked my uncle about his thoughts on the technological singularity since around 1986 or 1987, when he was still a Ph.D. student at CMU. All I remember from our conversations in those days was that he wanted to know if I was a dualist or a pure physicalist. (I was, and am, "mostly physicalist"; some aspects of property dualism appeal to me, but to me some of the points are undecidably fine distinctions.) If you like, I'll get an exclusive the next time I see him. :-)

A note on the subtitle of this post, immanentizing the eschaton: it means "trying to make that which belongs to the afterlife happen here and now (on Earth)" or "trying to create heaven here on Earth". See the wibblings of yahvah and other folks interested in the Revelation of John for the biblical context. The phrase itself was coined by the late Eric Voegelin in The New Science of Politics in 1952 and popularized as a political catchphrase by William F. Buckley in the 1950s and 1960s. I first saw it in 1990 in the .project file of a graduate classmate, Jack Eifrig, and thought he was just trying to end the world.



( 2 comments — Leave a comment )
Nov. 23rd, 2008 09:06 pm (UTC)
Of course it has a basis in traditional religion, but I think the concept of "immanentizing the eschaton" was more recently popularized by the Illuminatus! trilogy. Today I mostly see it in such a context. (Especially when it's a computer-related discussion...)

Just so that you know that people might be reading Illuminatus! references into your writing ;] I don't know if that's the sort of thing you were aiming for. (You could be -- I don't know!)
Nov. 24th, 2008 07:05 pm (UTC)
Illuminatus! and godhead
That works fine, too, since the technological singularity is, by its very forward-thinking and hopeful nature, ostensibly "sufficiently advanced technology as to be indistinguishable from magic" (or godhead, if you like).

Sometimes I wonder why I have this icon, bearing the Chinese characters for "faith", as my "faith" icon... and sometimes I don't.

