Future/Artificial Intelligence

Artificial intelligence is a field that attempts to provide machines with human-like thinking.

History

In the 1950s, the first artificial intelligence laboratories were established at Carnegie Mellon University and MIT. Early successes created a sense of optimism and false hopes that some kind of grand unified theory of mind would soon emerge and make general AI possible.

The promise of artificial intelligence was summed up in the classic 1968 movie 2001: A Space Odyssey, featuring the artificially intelligent computer HAL 9000.

In 1982, following the recommendations of technology foresight exercises, Japan's Ministry of International Trade and Industry initiated the Fifth Generation Computer Systems project to develop massively parallel computers that would take computing and AI to a new level.

The United States responded with a DARPA-led project that involved large corporations such as Kodak and Motorola.

But despite some significant results, the grand promises failed to materialise, and the public began to see AI as failing to live up to its potential. This culminated in the "AI winter" of the 1990s, when the term AI itself fell out of favour, funding decreased, and interest in the field temporarily dropped.

Researchers concentrated on more focused goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

Approaches

Historically, there have been two main approaches to AI:

  • the classical approach (designing the AI), based on symbolic reasoning: a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic.
  • the connectionist approach (letting the AI develop), based on artificial neural networks, which imitate the way neurons work, and on genetic algorithms, which imitate inheritance and fitness to evolve better solutions to a problem with every generation. Minimal sketches of both approaches follow this list.
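
As a rough illustration, here is a minimal sketch of each approach in Python; the facts, rules, and training data are all invented for the example. First, the symbolic side, as a toy forward-chaining inference engine:

    # Toy forward-chaining inference engine (symbolic approach).
    # Facts and rules are invented for illustration.
    facts = {"socrates is a man"}
    rules = [
        ("socrates is a man", "socrates is mortal"),
        ("socrates is mortal", "socrates will die"),
    ]

    # Keep applying rules whose premise is already known
    # until no new conclusions appear.
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))
    # ['socrates is a man', 'socrates is mortal', 'socrates will die']

And the connectionist counterpart: a single artificial neuron (perceptron) that learns the logical AND function from examples instead of being handed explicit rules:

    # A single perceptron learning logical AND (connectionist approach).
    # Training data and learning rate are invented for illustration.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

    # Nudge the weights toward each target output; the behaviour is
    # learned from data rather than written down as rules.
    for _ in range(20):
        for x, target in examples:
            error = target - predict(x)
            weights[0] += rate * error * x[0]
            weights[1] += rate * error * x[1]
            bias += rate * error

    print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]

The contrast is the point: the first program is told its knowledge explicitly, while the second extracts its behaviour from data.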

Symbolic reasoning has been used successfully in expert systems and other fields. Neural nets are used in many areas, from computer games to DNA sequencing. But both approaches have severe limitations: a human brain is neither a large inference system nor a huge homogeneous neural net, but rather a collection of specialised modules. The best way to mimic the way humans think appears to be to program a computer to perform individual functions (speech recognition, reconstruction of 3D environments, many domain-specific functions) and then combine them, as sketched below.
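
As a very rough sketch of that modular view, the following toy pipeline chains independent specialists into one system; every module name here is hypothetical, each one-line function standing in for an entire subsystem:

    # Hypothetical sketch: combining specialised modules into one system.
    # Each function stands in for a full subsystem; none is a real library.
    def recognise_speech(audio):
        return "what is two plus two"  # placeholder transcription

    def parse_question(text):
        return ("add", 2, 2)           # placeholder structured query

    def answer(query):
        op, a, b = query
        return a + b if op == "add" else None

    def assistant(audio):
        # overall competence comes from chaining narrow specialists
        return answer(parse_question(recognise_speech(audio)))

    print(assistant(b"...audio..."))   # 4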

Additional approaches:

  • genetics, evolution
  • Bayesian probabilistic inference (a small worked example follows this list)
  • combinations, e.g. "evolved (genetic) neural networks that influence probability distributions of formal expert systems"
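
To make the Bayesian item concrete, here is a minimal worked example of inference with Bayes' theorem; all the probabilities are invented. The question: how likely is a fault, given that an alarm went off?

    # Bayes' theorem: P(fault | alarm)
    #   = P(alarm | fault) * P(fault) / P(alarm)
    # All numbers are invented for illustration.
    p_fault = 0.01               # prior probability of a fault
    p_alarm_given_fault = 0.95   # alarm fires when there is a fault
    p_alarm_given_ok = 0.05      # false-alarm rate

    # total probability of seeing the alarm at all
    p_alarm = (p_alarm_given_fault * p_fault
               + p_alarm_given_ok * (1 - p_fault))

    p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
    print(round(p_fault_given_alarm, 3))  # 0.161

Even with a fairly reliable alarm, the low prior keeps the posterior modest; weighing evidence against prior belief in this way is the core of the Bayesian approach.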

Current state

By breaking up AI research into more specific problems, such as computer vision, speech recognition and automatic planning, which had more clearly definable goals, scientists managed to create a critical mass of work aimed at solving these individual problems.

Some of the fields where technology has matured enough to enable practical applications are:

  • speech recognition
  • computer vision
  • text analysis
  • robot control

Some examples of real-world systems based on artificial intelligence are:

  • Intelligent Distribution Agent (IDA), developed for the U.S. Navy, helps assign sailors new jobs at the end of their tours of duty by negotiating with them via email.
  • systems that trade stocks and commodities without human intervention
  • banking software for approving bank loans and detecting credit card fraud (developed by Fair Isaac Corp.)
  • search engines such as Brain Boost (or even Google)
  • ASIMO, Honda's humanoid robot

Ongoing projects

Cyc is a 22-year-old project based on symbolic reasoning, with the aim of amassing general knowledge and acquiring common sense. Online access to Cyc will be opened in mid-2005. The volume of knowledge it has accumulated makes it able to learn new things by itself; Cyc will converse with Internet users and acquire new knowledge from them.

Mind.Forth demonstrates thinking through spreading activation; a rough sketch of the idea follows.
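
As a rough, language-neutral sketch of spreading activation (Mind.Forth itself is written in Forth), activation injected at one concept flows along weighted links to neighbouring concepts and fades with distance; the concept graph, weights, decay factor and threshold below are all invented:

    # Rough sketch of spreading activation over a tiny concept graph.
    links = {
        "dog":    [("animal", 0.9), ("bark", 0.8)],
        "animal": [("alive", 0.7)],
        "bark":   [],
        "alive":  [],
    }

    def spread(start, decay=0.5, threshold=0.1):
        activation = {start: 1.0}
        frontier = [start]
        while frontier:
            node = frontier.pop()
            for neighbour, weight in links[node]:
                a = activation[node] * weight * decay
                if a > threshold and a > activation.get(neighbour, 0.0):
                    activation[neighbour] = a
                    frontier.append(neighbour)
        return activation

    print(spread("dog"))
    # {'dog': 1.0, 'animal': 0.45, 'bark': 0.4, 'alive': 0.1575}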

Open Mind and mindpixel are similar projects.

These projects are unlikely to lead directly to the creation of AI, but they can be helpful for teaching an artificial intelligence about the English language and the human-world domain.

Future prospects

Over the next ten years, technologies in narrow fields such as speech recognition will continue to improve and will reach human levels. In ten years, AI will be able to communicate with humans in unstructured English using text or voice, navigate (though not perfectly) in an unprepared environment, and have some rudimentary common sense (and domain-specific intelligence).

We will recreate some parts of the human (animal) brain in silicon. The feasibility of this is demonstrated by tentative hippocampus experiments in rats [1] [2].

There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, perception, rehearsal learning, or learning by repetitive practice.

Timeline:

  • Invention
  • first AI laboratory (1950s)
  • computer defeats the world chess champion (Deep Blue vs. Kasparov, 1997)
  • practical speech recognition
  • autonomous humanoid robots
  • Turing test passed

References