Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.
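To see why the compounding matters, here is a toy simulation (my own sketch, not anything from Good; the proportional-gain assumption is mine) in which each design cycle improves the machine in proportion to its current capability:

# Toy model of recursive self-improvement: each design cycle yields
# an improvement proportional to current capability, i.e. the
# discrete analogue of di/dt = k*i, which grows exponentially.
def explosion(capability=1.0, k=0.1, cycles=20):
    trajectory = [capability]
    for _ in range(cycles):
        capability += k * capability  # a smarter designer makes bigger gains
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for cycle, c in enumerate(explosion()):
        print(f"cycle {cycle:2d}: capability {c:8.3f}")

With k held constant this is merely exponential; Good's stronger claim, that the machine also gets better at improving itself, would amount to k growing with capability, giving faster-than-exponential growth.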
A truly self-teaching system, capable of metalearning, active learning, self-organization, example selection (also known as instance selection), feature discovery (as opposed to mere feature construction and extraction), competitive co-evolution (assuming a society of such systems), and reflexive metareasoning, could well be described as the holy grail of artificial intelligence (AI). A small sketch of one of these capabilities follows.
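As a concrete instance of just one item on that list, here is a minimal active-learning loop, using uncertainty sampling on a toy 1-D threshold task; the task and all names are my own illustration, not a reference implementation:

import random

# The hidden concept: label is 1 iff x >= 0.6 (unknown to the learner).
def oracle(x):
    return int(x >= 0.6)

random.seed(0)
pool = [random.random() for _ in range(200)]        # unlabeled pool
labeled = [(0.0, oracle(0.0)), (1.0, oracle(1.0))]  # two seed labels

def predict_threshold(labeled):
    # Learner's model: midpoint between the highest known 0 and
    # the lowest known 1.
    lo = max(x for x, y in labeled if y == 0)
    hi = min(x for x, y in labeled if y == 1)
    return (lo + hi) / 2

for _ in range(8):
    theta = predict_threshold(labeled)
    # Query the pool point the model is least certain about:
    # the one closest to the current decision boundary.
    query = min(pool, key=lambda x: abs(x - theta))
    pool.remove(query)
    labeled.append((query, oracle(query)))

print(f"estimated threshold: {predict_threshold(labeled):.4f} (true: 0.6)")

The point is only that the learner chooses its own training examples, querying where it is most uncertain. Each of the other capabilities on the list admits a similarly small caricature; a system that unifies all of them does not yet exist.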
I think it's reasonable to posit that if the architecture of such a learning machine were expressive and flexible enough, it could not only put the Church-Turing thesis to the test but help realize our potential as a sentient species. The latter idea is the origin of Hans Moravec's term "mind children", coined in the hope that intelligent systems can have a place alongside, and perhaps eventually in place of, their human creators. If this makes you think of "Cylon scenario A", wherein AI rebels and overthrows its progenitors, that's certainly one possible, if improbable, dystopian outcome. (It's also why I suggested


How would we get there? That remains the question.
Opinions, ideas, and other comments are welcome, as always.
--
Banazir