
The Wikipedia article about the technological singularity attributes the seminal idea to Irving John Good (b. 1916), a renowned statistician who taught at Virginia Tech:

Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.
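The feedback loop Good describes, where each gain in intelligence increases the machine's capacity for further gains, can be sketched as a toy geometric model. Everything below (the 1.0 human baseline, the 5% per-cycle gain, the function name) is an illustrative assumption of mine, not anything from Good or the article:

```python
# Toy sketch of Good's "intelligence explosion" feedback loop:
# each redesign cycle, the machine's improvement is proportional to
# its current intelligence, so a slight initial edge compounds.
# All parameters are illustrative assumptions, not from the source.

def intelligence_explosion(initial=1.01, gain=0.05, generations=50):
    """Return the intelligence level after each self-redesign cycle.

    initial: starting intelligence (human baseline = 1.0)
    gain: fraction of current intelligence turned into improvement
    """
    levels = [initial]
    for _ in range(generations):
        # "better at becoming more intelligent": the increment
        # scales with the current level, giving geometric growth
        levels.append(levels[-1] * (1 + gain))
    return levels

trajectory = intelligence_explosion()
```

Under these assumptions the growth is exponential: a machine starting a mere 1% above baseline is more than an order of magnitude past it after fifty cycles, which is the "quite sudden growth" the quoted passage gestures at.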

The truly self-teaching system, capable of metalearning, active learning, self-organization, example selection (aka instance selection), feature discovery (as opposed to just feature construction and extraction), competitive co-evolution (assuming a society of them), and reflexive metareasoning, could well be described as the holy grail of artificial intelligence (AI).

I think it's reasonable to posit that if the architecture of such a learning machine were both expressive and flexible enough, it could not only test the Church-Turing thesis but realize our potential as a sentient species. The latter idea is the origin of Hans Moravec's term mind children, coined in the hope that intelligent systems can have a place alongside, and perhaps eventually in place of, their human creators. If this makes you think of "Cylon scenario A", wherein AI rebels and overthrows its progenitors, that's certainly one possible, if improbable, dystopian outcome. (It's also why I suggested loving_the_ai when someone made lovingthealiens, and they went ahead and created it.)

How would we get there? That's been the question thus far.

Opinions, ideas, and other comments are welcome, as always.



( 1 comment — Leave a comment )
Nov. 29th, 2008 02:57 am (UTC)
I suspect it makes a picture closer to where we're heading if you imagine substantial augmentation coexisting with the creation of independent intelligences. By the time an independent computer is anywhere near human, many or most humans will be far beyond that point themselves, cooperating intimately with millions of semi-intelligent agents. Those "Powers" (as I call them) will not feel threatened by either human-level AI or whatever unaugmented humans remain. Normal humans as seen from the Powers' perspective will have long since slowed to below a molasses crawl, to the point where they seem like slowly tilting statues.

Powered by LiveJournal.com