Kurzweil v Hawking - good AI v bad AI
When I read Ray Kurzweil's The Singularity Is Near, which planted the seed of my wanting to live forever, the only thing I disagreed with Ray about was his assumption that the inevitable super-intelligent AI (arising from the exponential growth of technology) would be benevolent and care for humans as pets or a scientific curiosity.
It looks like Prof Stephen Hawking's view is nearer to mine (BBC News - Hawking: AI could end human race): that it would treat us like vermin, or with total disregard for our welfare - much as most animals on this planet probably experience the human race. Worst case, it's Matrix-style and somehow we become food for our own creation.
And how would we stop it? Suppose we have created a new super-powerful AI using neural network chips or quantum computers - would we even know that it had gained self-awareness? While we're still studying it and running Turing tests, if it decided to turn bad it could easily circumvent any barriers we had put in place to stop it spreading. Just think how many flaws hackers have discovered in apparently secure websites, then imagine what a super-mega-uber-geek could find. How quickly could it take control of every computer and network router? Give it a few more hours of processing to work out how to take over production plants and create its own real-world compatriots - robot armies are already in production, and one day it's likely we won't be worrying about who has the biggest army, but what.
“Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us.”
― H.G. Wells, The War of the Worlds
Mentioned in this blog post:
Too much of a good thing? More than a glass of milk increases chance of death
Eat or recharge? Soylent doesn’t go far enough for me