Foundations of Computer Science/Artificial Intelligence

From Wikibooks, open books for an open world

Artificial Intelligence

What is A.I.?

Artificial Intelligence (AI) is the idea of building a system that can mimic human intelligence. Intelligence is judged by the abilities people exhibit: planning, learning, natural language processing, motion and manipulation, perception, and creativity. These areas guide the engineering and development of Artificial Intelligence.

The Turing Test & A.I.

One important concept is that computers only perform the algorithms and programs they are given; they cannot inherently create algorithms on their own. Previous chapters have shown that algorithms can morph or change, but only when directed to do so by another algorithm.

Previous chapters have broached the topic of the Turing Test. The Turing Test is a theoretical standard in which a human judge converses with one machine and one human and must determine which is which. If the machine can trick the judge into thinking it is human, it passes the Turing Test.

Although there have been many innovative developments in the field of AI, some areas are still being tested and improved. One such example is CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which are used to distinguish between humans and computers. Even today, computers cannot identify images such as those generated by CAPTCHAs as well as humans can. Developers also use CAPTCHAs as a tool to teach computers to recognize words and images that humans can identify. A CAPTCHA is considered a reverse Turing test because a computer determines whether a human user is indeed human.

Intelligent Agent Approach

The intelligent agent approach was first introduced in the quest for "artificial flight". The Wright brothers and others stopped trying to imitate birds and instead embraced the principles of aerodynamics. The goal is not to duplicate nature, but to use what is known about flying and apply that knowledge.

A major part of this approach is the rational agent: an agent that acts to achieve the best outcome. Therein lies the connection to rationality as a test of intelligence: can this machine or computer mimic the rational behavior of a human?

History of A.I.

The concept of AI began around 1943 and became a field of study in 1956 at Dartmouth. AI is not limited to the Computer Science disciplines; it draws on countless fields such as Mathematics, Philosophy, Economics, Neuroscience, Psychology, and others. In Computer Science and Engineering, the areas of interest focus on how we can build more efficient computers, and great advancements have been made in both hardware and software.

A.I. Knowledge-based Expert System

An A.I. system will often use a rule-based system to capture knowledge in the form of if-then statements. Another way to think about these rule-based systems is as decision trees. Decision trees use preset rules to determine which decision path to follow based on the input provided. An example of a decision tree or rule-based system is a single-player game in which the player imagines an animal (real or imaginary) and answers a series of questions designed to let the computer guess the animal, assuming the player always answers truthfully.
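
The guessing game above can be sketched as a tiny hand-built decision tree. The questions and animal names below are illustrative placeholders, not from any particular implementation:

```python
# A minimal sketch of a rule-based "animal guessing" decision tree.
# Each internal node holds a yes/no question; each leaf holds a guess.

class Node:
    def __init__(self, question=None, yes=None, no=None, guess=None):
        self.question = question  # asked at internal nodes
        self.yes = yes            # subtree followed on "yes"
        self.no = no              # subtree followed on "no"
        self.guess = guess        # animal name at a leaf

def guess_animal(tree, answers):
    """Walk the tree using a list of True/False answers; return the guess."""
    node = tree
    for answer in answers:
        if node.guess is not None:
            break
        node = node.yes if answer else node.no
    return node.guess

# A tiny hand-built tree: each if-then rule is one branch.
tree = Node(
    question="Does it live in water?",
    yes=Node(guess="fish"),
    no=Node(
        question="Can it fly?",
        yes=Node(guess="bird"),
        no=Node(guess="dog"),
    ),
)

print(guess_animal(tree, [True]))          # fish
print(guess_animal(tree, [False, True]))   # bird
print(guess_animal(tree, [False, False]))  # dog
```

Each if-then rule corresponds to one branch of the tree, which is why rule-based systems and decision trees are two views of the same idea.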

Machine Learning

There are two definitions of machine learning: an informal one and a formal one. The informal definition is giving computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959). The formal definition is a computer program that learns from experience with respect to some task and improves its performance at that task with experience (Tom Mitchell, 1998).

Supervised Learning

Supervised learning is based on giving the computer the correct answers (labels) and having it learn a mapping from inputs to outputs. Examples of supervised machine learning are:

  • United States Postal Service computers reading handwritten zip codes on envelopes to automatically sort mail
  • Spam filters - software trained to distinguish between spam and non-spam messages (e.g. email filters)
  • Facial recognition - used by cameras to focus and by photo-editing software to tag people (e.g. Facebook)
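
A minimal sketch of the supervised idea (labelled examples used to map inputs to outputs) is a 1-nearest-neighbour classifier. The "spam filter" features and training data below are made up for illustration:

```python
# Supervised learning sketch: classify a new point by copying the label
# of the closest labelled training example (1-nearest-neighbour).

import math

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, point)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy "spam filter" features: (capitalized-word ratio, link count).
# These labelled examples are the "correct answers" given to the learner.
train = [
    ((0.9, 5), "spam"),
    ((0.8, 3), "spam"),
    ((0.1, 0), "ham"),
    ((0.2, 1), "ham"),
]

print(nearest_neighbor(train, (0.85, 4)))  # spam
print(nearest_neighbor(train, (0.15, 0)))  # ham
```

Real spam filters use far richer features and models, but the principle is the same: labelled examples teach the mapping from input to output.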

Unsupervised Learning

Unsupervised learning is the reverse of supervised learning: the correct answers are unknown. The goal of unsupervised learning is to discover structure in data, which is also known as data mining. The computer looks at the data for trends that can make, or assist in, decisions. Below are examples of unsupervised learning:

  • Clustering algorithms - used to find patterns in datasets and group the data into coherent clusters.
  • Market segmentation - targeting customers based on region, likes, dislikes, when they make purchases, etc. This is known as targeted marketing.
  • Recommendation systems - systems or software that recommend items a consumer may like based on their preferences (e.g. Netflix, Hulu).
  • Statistical natural language processing - used to guess the next word or auto-complete words, suggest news stories, or translate texts.
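
The clustering idea can be sketched with a tiny k-means on one-dimensional data. The data points and initial centres below are illustrative; no labels are given, and the algorithm discovers the two groups on its own:

```python
# Unsupervised learning sketch: k-means clustering on 1-D data.
# Alternate between assigning points to the nearest centre and
# moving each centre to the mean of its assigned points.

def kmeans_1d(data, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for x in data:
            # assign each point to its nearest centre
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[i].append(x)
        # move each centre to the mean of its cluster (keep empty ones put)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]   # two obvious groups, no labels
centers, clusters = kmeans_1d(data, [0.0, 10.0])
print(centers)  # roughly [1.0, 9.0]
```

The structure (two clusters around 1 and 9) was never told to the algorithm; it emerged from the data, which is the essence of unsupervised learning.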

Genetic Programming

Genetic Programming is an approach that uses an evolutionary process to improve algorithms.
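
Full genetic programming evolves whole programs; the sketch below shows only the simpler genetic-algorithm core it builds on: a population of candidate solutions improved by selection, crossover, and mutation. The target string and all parameters are illustrative choices, not from the text:

```python
# Evolutionary sketch: evolve random strings toward a target by
# keeping the fittest, recombining them, and occasionally mutating.

import random

TARGET = "intelligence"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # count positions that already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(pop_size=100, generations=500, seed=0):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(LETTERS) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        parents = pop[: pop_size // 2]        # selection: keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(len(TARGET))  # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.3:            # occasional random mutation
                child[rng.randrange(len(TARGET))] = rng.choice(LETTERS)
            children.append("".join(child))
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

No one writes the answer into the program; repeated selection and variation discover it, which is the evolutionary process the section refers to.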

Future of A.I.

There are many challenges in mimicking human intelligence. Humans acquire common sense that is intuitive but hard to reason about rationally, e.g. that the color of a blue car is blue. Deep learning is a branch of A.I. that aims to create algorithms that can acquire such intuition.