Artificial Intelligence (AI)

Software that performs human-like functions is generally regarded as artificial intelligence - until it becomes so commonplace that no one is impressed any more, and something more complicated becomes the next target for AI. There have been several AI winters, where early success in one area failed to develop into wider progress. But recently, narrow AIs have become superhuman at many games, including chess, backgammon and Scrabble, and more recently Jeopardy! and Go.

I'll leave the philosophical debate - can a machine really be conscious, or is its intelligence only the output of human-written software? - to the academics, but what is definitely coming is machines far more powerful than a human brain. Think how many times more powerful a modern-day PC is compared to one 20 years ago - eventually that will be the difference between human-level intelligence and a super-AI intelligence. It won't be the difference between someone of average intelligence and Einstein; it will be like a mouse trying to understand quantum physics. We will be the new mouse, unable to understand the thoughts and capabilities of this superintelligence.
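
As a very rough, back-of-the-envelope illustration (my own sketch, assuming the classic Moore's-law doubling of computing power roughly every two years - an assumption, not a measurement), a few lines of Python show how quickly that gap compounds:

# Rough Moore's-law sketch: assume computing power doubles every ~2 years.
# The doubling period is an illustrative assumption, not a measurement.

def growth_factor(years, doubling_period_years=2):
    """Return how many times more powerful hardware becomes after `years`."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(20))   # ~1024x  - a PC today vs one 20 years ago
print(growth_factor(40))   # ~1,048,576x - the same gap compounded twice over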

And we will have no choice but to give those machines control over parts of our lives: it won't be humanly possible to manage the increasing air, rail and road traffic, for example, so those split-second decisions will have to be made by some sort of artificial intelligence. It becomes a question of survival because we don't know if, Terminator style, the machine will suddenly decide that its own existence is more important than its day job.

Or, as Nick Bostrom explains in his excellent book Superintelligence, it may not even be a malevolent AI, but one that tries too hard to please us (its creators) and uses up all the resources on the planet in doing so. Alternatively, if we didn't clearly define the goal for the AI, it might solve world hunger by killing most of the humans on the planet, so that there was plenty of food to go around.
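
To make the badly-specified-goal point concrete, here is a deliberately silly toy optimizer in Python - my own sketch, not anything from Bostrom's book. It is only told to maximize food per person, and nothing in that objective says people have to be kept around:

# Toy illustration of a badly specified objective.
# The "AI" can choose two things: how much food to produce and how many
# people there are. The only goal we gave it: maximize food per person.

def food_per_person(food, people):
    return food / people

best = None
for food in range(100, 1001, 100):        # tonnes of food it could produce
    for people in range(1, 1001, 100):    # population sizes it could "choose"
        score = food_per_person(food, people)
        if best is None or score > best[0]:
            best = (score, food, people)

score, food, people = best
print(f"'Optimal' plan: {food} tonnes of food for {people} person(s) "
      f"= {score:.0f} tonnes each")
# The optimizer picks the smallest population it can - the goal never said
# that keeping people alive mattered.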

This is one area where I did disagree with Ray Kurzweil in his book The Singularity Is Near - he assumes that any human-created higher intelligence would treat us as a curiosity and care for us. As that isn't how we treat the animals lower than us (which we cull, abuse and eat), I don't think we can be sure that we would be treated any differently.

Resources

Benefits and Risks of Artificial Intelligence - Future of Life Institute (FLI)
Including the top myths about advanced AI

Videos

Slaughterbots - Future of Life Institute
Dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition to assassinate political opponents based on preprogrammed criteria.

Artificial Intelligence (AI) Blog Posts

Superintelligence by Nick Bostrom - Book Review
12-Nov-2019 - Review of Nick Bostrom's book Superintelligence, on the risks of artificial intelligence

Homo Deus - review and quotes
29-Jul-2018

Kurzweil v Hawking - good AI v bad AI
04-Dec-2014

Recent News

James Lovelock predicts the Anthropocene may quickly be replaced by the Novacene

NBC News - 25-Aug-2019

He doesn't think humans will merge with technology (Kurzweil style)

Read more...

UN reports massive growth in AI patents

United Nations - 31-Jan-2019

No mention of artificial general intelligence (AGI), which will be the real revolution

Read more...

Is effective regulation of AI possible?

Institute for Ethics and Emerging Technologies (IEET) - 27-Sep-2018

The author summarizes the possible challenges and complications that arise when trying to regulate artificial intelligences

Read more...
More Artificial Intelligence (AI) News