Artificial Intelligence/The Singularity

The history of AI has included some embarrassingly optimistic predictions, particularly in the early years. In short, AI researchers severely underestimated the difficulty of some of the problems. Though there was early success in designing programs that could play chess, it turned out that recognizing the chess pieces in video was much more difficult.

Futurist Ray Kurzweil continues to publish optimistic predictions. He has popularized the term "singularity" as it applies to AI (though the term was coined by Vernor Vinge for this purpose). The singularity is the point in time at which artificial intelligence can improve on itself faster than humans were previously able to. It is called the singularity because it is very difficult to know what will happen afterward, since the future will then depend on the decisions of beings more intelligent than we are.

Kurzweil's predictions are based on observations of exponential growth in certain fields, such as nanotechnology, computational power, genetic analysis, and the accuracy of brain scanning. Very roughly, his argument runs as follows: brain-scanning technology is improving at an exponential rate, so we will soon be able to scan entire brains at the level of detail needed to capture everything we must know, physically, to create a software simulation of a brain. The exponential growth of computational power will allow future computers to process all of this data. Having a brain in software will let us rapidly test and understand how intelligence works in human beings (as well as other animals). It will then be only a short time before we can improve on it.
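The exponential-growth step of this argument can be made concrete with a small calculation. The Python sketch below is purely illustrative and assumes hypothetical numbers (a doubling period for computational capacity and a compute requirement for brain simulation); neither figure comes from Kurzweil. It only shows how quickly a fixed target is reached once growth is exponential.

    import math

    def year_target_is_reached(start_year, current_capacity,
                               target_capacity, doubling_period_years):
        """Year at which capacity first meets the target, assuming it
        doubles every doubling_period_years (hypothetical growth model)."""
        if current_capacity >= target_capacity:
            return start_year
        # Number of doublings needed to close the gap, then convert to years.
        doublings_needed = math.log2(target_capacity / current_capacity)
        return start_year + doublings_needed * doubling_period_years

    # Hypothetical example: 10^16 operations/second available today,
    # 10^18 operations/second assumed sufficient for a detailed brain
    # simulation, and capacity doubling every 1.5 years.
    print(year_target_is_reached(2024, 1e16, 1e18, 1.5))  # about 2034

Under these assumed numbers, a hundredfold shortfall in computing power is closed in roughly a decade, which is the kind of arithmetic that drives such forecasts.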

Kurzweil predicts that the singularity will occur around 2045.