Superintelligence: Paths, Dangers, Strategies

Superintelligence might be the most important book you’ll read this year. Indeed, perhaps even the most significant book of your lifetime. If we are greeted by a superintelligent artificial intelligence (SAI) within this century, as Oxford University philosopher Nick Bostrom suggests is highly likely, we may look upon this book either as an alarm that helped avert disaster or as a prescient insight into the makings of our doom. This is because superintelligence is serious business. No matter what form an SAI might take – and Bostrom describes a few likely alternatives – and no matter what its values might be (even if its penchant happens to be to calculate pi out to a googolplex decimal points), it is likely to have interests that rub up against ours – such as a desire for resources or energy. Furthermore, it is likely to be better than we are at strategic thinking, and thus at getting what it wants. This could lead to what Bostrom calls the “treacherous turn” by an SAI, which is likely to take us by surprise, and happen far too late for us to do anything about it. Bostrom also thinks the various countermeasures we might use, such as stunting the SAI, or “boxing” it in by confining it to a walled garden remote from the rest of the world’s computers, are all unlikely to work. After all, if we can think of a countermeasure, so can the SAI.