We Need to Stop Killer Robots from Taking Over the World
Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: Asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing.
In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare.
As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily named academic institutions tackling these “existential risks”: the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Life Institute at MIT; and the Machine Intelligence Research Institute in Berkeley. Their tools are philosophy, physics and lots and lots of hard math.
Five years ago he started writing a book aimed at the layman on a selection of existential risks, but quickly realized that the chapter dealing with the dangers of artificial intelligence was getting fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling—if scary—reading.
The basic thesis is that developments in artificial intelligence will gather pace, so that within this century it’s conceivable that we will be able to create human-level machine intelligence (HLMI).
Once HLMI is reached, things move pretty quickly: Intelligent machines will be able to design even more intelligent machines, leading to what mathematician I.J. Good called back in 1965 an “intelligence explosion” that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by supercomputers we have brought into being.