
Superintelligence by Nick Bostrom - Book Review

Review of Nick Bostrom's book Superintelligence on the risks of artificial intelligence
Published 12-Nov-2019

The book’s subtitle, “Paths, Dangers, Strategies”, neatly sums up its content:
1. how might a superintelligent artificial general intelligence be created?
2. what are the dangers in building a superintelligence?
3. how might we preclude, or at least reduce, these risks?

Buy Superintelligence on Amazon (forewarned is forearmed!)*

No technical knowledge of how today’s (non-superintelligent) AIs are being developed is required of the reader. Bostrom isn’t predicting when superintelligence might arrive, and so admits it’s impossible to know what technology will be involved; but the risks of producing something thousands of times more powerful than the brightest of human minds will be the same whatever technology is used.

I confess that some of the deeper philosophical discussions did go over my head, but I don’t think that spoilt the book for me. I was happy to skip a few paragraphs here and there and still come out massively more informed than when I started. There are also plenty of useful tables summarising his thinking, such as the types of AI that could be developed:

  • Oracle – question answering system
  • Genie – command-executing system (one job at a time)
  • Sovereign – open-ended autonomous operation that behaves in our best interests without explicit instructions
  • Tool – not designed to exhibit goal-directed behaviour

However a superintelligence is built, the book highlights that it will be very difficult (I’d almost say he is hinting that it’s impossible) to design one that behaves in the way we want it to, even if we could agree, as a society, on what it should do.

Bostrom frames this as a three-part problem. First, the first superintelligence is likely to gain a decisive strategic advantage: it could become so intelligent, so quickly, that nothing can control it or compete with it. Second, with our standard intelligence, it may be impossible to design an AI that has “good” human values; it may turn out “bad”, or simply indifferent to the welfare of its creators. Last, whatever its intentions, it will likely attempt to increase its power, resulting in open-ended resource acquisition, which may include the atoms in our bodies, or at least other things we need to live and survive.

He returns many times to the scenario of a superintelligence tasked simply with making paperclips. Even a benevolent AI might accidentally consume all the resources on the planet (and beyond) to make them in the most perfect way possible to please its creator. Worse still would be a narcissistic, malevolent AI that would happily destroy the entire human race just to use our atoms in machinery that enhances its own power. How can we now, with our human-level intelligence, design controls that a superintelligence couldn’t easily recognise and work around? If our goals are not clear enough, what will prevent the instruction to “keep the project sponsor happy” from being interpreted as implanting electrodes into their brain’s pleasure centres?!

It is impossible to predict every possible scenario an AI might face, so we can’t program in a list of rules; instead, we need to express our values more abstractly. Of course, even if we could agree on and program in some values, once a superintelligence becomes aware of them it may not like the idea of being controlled and could easily overcome them.

In his book, Bostrom categorises four control methods for trying to tame a developing AI:

  • Boxing – restricted access to the outside world
  • Incentivising – using social or technical rewards
  • Stunting – constraints on cognitive abilities
  • Tripwires – automatic shutdown if activity beyond expected behaviour is detected

So, how much effort should we put into developing superintelligent AI? The slower the approach, the more chance we have of instilling our desired values in the AI and developing control strategies to ensure it follows them. However, there are other existential risks, such as nuclear war, supervolcanoes and asteroid impacts, which could all be neutralised with the help of a benevolent global AI.

Conscientiously, Nick Bostrom also looks at the risks from the other side: the potential suffering and death of digital minds. When we’re able to create human-level intelligences as easily as today’s cloud infrastructure can spin up temporary websites and databases, what are the ethics of how they are used? Should they be designed to be happy to work for us, or for optimal productivity, which might be achieved by instilling fear in these digital slaves? And is it right simply to turn them off, or kill them, once they have done their work?

Fortunately, most news reports of amazing feats by AI (for example, cancer diagnosis) involve narrow AI, i.e. systems very focussed on one particular application and, in reality, often “just” very powerful pattern recognition. So we’re a long way off facing potential destruction from a superintelligence; but let’s hope that, in the meantime, Nick Bostrom and similar researchers around the world get the time (and budget!) to figure out how to do it safely.

Buy Superintelligence on Amazon (forewarned is forearmed!)*

Mentioned in this blog post:


Nick Bostrom

Founding Director of the Future of Humanity Institute, Professor and Author

Superintelligence: Paths, Dangers, Strategies

Amazon

Asks what happens when machines surpass humans in general intelligence; written by Nick Bostrom.

Topics mentioned on this page:
Artificial Intelligence (AI)


Related Blog Posts

Homo Deus - review and quotes (29-Jul-2018)

The history of mankind and society - then looking to the future where dataism ousts the need for humans

Kurzweil v Hawking - good AI v bad AI (04-Dec-2014)

Better to err on the side of caution when it comes to artificial intelligence development