Artificial Intelligence/AI Agents and their Environments

Particular artificial intelligence programs, or AIs, can be thought of as intelligent "agents" that interact with particular environments. In general, intelligent agents of all types (including rats, people, and AI programs) interact with their environments in two main ways: perception and action.

For purposes of AI, perception is the process of transforming something from the environment into internal representations (memories, beliefs, etc.). Action occurs when the agent, by doing something, changes the environment.

For example, if a robot uses its camera to determine that there is a wall in front of it, then it is using perception. In this example, the camera is a "sensor." In general, sensors are what agents use to gather information from the environment for perception. Human sensors include eyes, ears, and the nose. AIs can have sensors of many types, including ones analogous to human perception, but also some that humans do not have, such as sonar, infrared, and GPS receivers.

If that robot uses its body to push the wall over, then it is doing action. The body, in this case, is an "actuator." If the robot uses wheels to back up, this is also an example of doing action: the environment has changed because the robot is now in a different place. Typical robot actuators include arms, wheels, lights, and speakers.
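To make the perceive-act cycle concrete, here is a minimal sketch in Python of an agent loop for a wall-avoiding robot. The sensor and actuator functions (read_camera, drive_wheels) are hypothetical placeholders, not the interface of any real robot.

    # A minimal agent loop: perceive the environment, deliberate,
    # then act on the environment through an actuator.
    # read_camera and drive_wheels are hypothetical stubs.

    def read_camera():
        """Sensor: return the perceived distance to the nearest wall, in meters."""
        return 0.5  # stub value for illustration

    def drive_wheels(speed):
        """Actuator: change the environment by moving the robot."""
        print(f"driving at {speed} m/s")

    def agent_loop(steps=3):
        for _ in range(steps):
            distance_to_wall = read_camera()   # perception
            if distance_to_wall < 1.0:         # deliberation
                drive_wheels(-0.2)             # action: back away from the wall
            else:
                drive_wheels(0.5)              # action: keep moving forward

    agent_loop()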


What about an AI that has no body? Let's take, for example, a music recommendation system. In this case, the environment for the agent might be the web page that the user clicks and types into, or perhaps a database of user preferences. The sensors, in this case, are abstract functions in the agent's programming, rather than pieces of hardware, as they are on a robot. The function that detects when a user clicks, or the function that fetches information from the database, might be considered sensors for this agent. What would the actuators be? When the agent displays recommended music to the user, it is using actuators to display things on the screen, or to serve the web page. Consider also the database that the music recommendation system uses: should it be considered part of the environment that the agent interacts with, or part of the agent's memory? As you can see from this example, what is considered an actuator and what is considered part of the environment depends on where you draw the boundaries of the agent. There is not always a right or wrong boundary; it depends on the context of the discussion.
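The same perceive-act structure applies to a bodiless agent; the sensors and actuators are simply functions. The sketch below is purely illustrative: the class and method names (MusicRecommender, sense_click, recommend) are assumptions made for this example, not a real recommendation API.

    # A software agent whose "sensors" and "actuators" are ordinary functions.
    # All names here are hypothetical; no real system is implied.

    class MusicRecommender:
        def __init__(self, preference_db):
            # Is the database part of the environment, or part of the agent's memory?
            self.preference_db = preference_db
            self.last_click = None

        def sense_click(self, clicked_track):
            """Sensor: turn a user click into an internal representation."""
            self.last_click = clicked_track

        def fetch_preferences(self, user_id):
            """Sensor: read stored preferences from the database."""
            return self.preference_db.get(user_id, [])

        def recommend(self, user_id):
            """Actuator: change the environment by serving a page of recommendations."""
            liked = self.fetch_preferences(user_id)
            return f"Because you liked {liked}, try something similar to {self.last_click}."

    agent = MusicRecommender({"alice": ["jazz", "bossa nova"]})
    agent.sense_click("So What")
    print(agent.recommend("alice"))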

Environment Classification System

In spite of the difficulty of knowing exactly where the environment ends and the agent begins in some cases, it is useful to be able to classify AI environments, because the classification predicts how difficult the AI's task will be. Russell and Norvig (2009) introduce seven ways to classify AI environments, which can be remembered with the mnemonic "D-SOAKED." They are:

  • Deterministicness (deterministic, stochastic, or non-deterministic): An environment is deterministic if the next state is perfectly predictable given knowledge of the previous state and the agent's action; otherwise it is stochastic (possible outcomes occur with known probabilities) or non-deterministic (only the set of possible outcomes is known).
  • Staticness (static or dynamic): Static environments do not change while the agent deliberates.
  • Observability (full or partial): A fully observable environment is one in which the agent has access to all information in the environment relevant to its task.
  • Agency (single or multiple): If there is at least one other agent in the environment, it is a multi-agent environment. Other agents might be apathetic, cooperative, or competitive.
  • Knowledge (known or unknown): An environment is considered to be "known" if the agent understands the laws that govern the environment's behavior. For example, in chess, the agent would know that when a piece is "taken" it is removed from the game. On a street, the agent might know that when it rains, the streets get slippery.
  • Episodicness (episodic or sequential): Sequential environments require memory of past actions to determine the next best action. Episodic environments are a series of one-shot actions, and only the current (or recent) percept is relevant. An AI that examines radiology images to determine whether disease is present works in an episodic environment: one image has nothing to do with the next.
  • Discreteness (discrete or continuous): A discrete environment has fixed locations or time intervals. A continuous environment could be measured quantitatively to any level of precision.

In each case, the job of the AI (and of the programmer making the AI) is easier if the first of the two options is the best descriptor for each category. That is, an AI has a much more difficult job if it works in an environment that is stochastic, dynamic, partially observable, multi-agent, unknown, sequential, and continuous.
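One lightweight way to record the seven D-SOAKED properties is as a small data structure, as in the Python sketch below. The two example classifications (chess without a clock, and a self-driving car) follow common textbook judgments and, as noted above, are open to debate.

    # Recording the D-SOAKED classification of an environment as data.
    # The example classifications are conventional judgments, not definitive answers.

    from dataclasses import dataclass

    @dataclass
    class EnvironmentProfile:
        deterministic: bool     # vs. stochastic / non-deterministic
        static: bool            # vs. dynamic
        fully_observable: bool  # vs. partially observable
        single_agent: bool      # vs. multi-agent
        known: bool             # vs. unknown
        episodic: bool          # vs. sequential
        discrete: bool          # vs. continuous

        def difficulty(self):
            """Rough heuristic: count how many of the seven 'easy' properties fail to hold."""
            easy = [self.deterministic, self.static, self.fully_observable,
                    self.single_agent, self.known, self.episodic, self.discrete]
            return len(easy) - sum(easy)

    chess = EnvironmentProfile(deterministic=True, static=True, fully_observable=True,
                               single_agent=False, known=True, episodic=False, discrete=True)
    car = EnvironmentProfile(deterministic=False, static=False, fully_observable=False,
                             single_agent=False, known=True, episodic=False, discrete=False)

    print(chess.difficulty())  # 2 "hard" properties: multi-agent, sequential
    print(car.difficulty())    # 6 "hard" properties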


High-Level Descriptions of AI Agents

AIs are made in a variety of ways. When communicating about a new AI, perhaps in a scholarly paper or at a business meeting, there are a few questions that can be asked of just about any agent. These can be remembered with the mnemonic PEAS; a sketch of a PEAS description follows the list below.

  • Performance Metrics: How does the AI know it's doing what it's supposed to be doing?
  • Environment: What environment does the agent interact with?
  • Actuators: How does the AI affect its environment?
  • Sensors: How does the AI get information from its environment?
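A PEAS description can be written down as plain data. The sketch below assumes the music recommendation agent from earlier in this chapter and shows one plausible set of answers; the performance metric in particular is just one reasonable choice.

    # A PEAS description as plain data, using the music recommendation agent
    # from earlier in the chapter. The performance metric is one plausible choice.

    peas_music_recommender = {
        "Performance metric": "fraction of recommended tracks the user actually plays",
        "Environment": "the web page the user interacts with, plus the preference database",
        "Actuators": "the function that serves the page of recommendations",
        "Sensors": "the click-detecting function and the database query function",
    }

    for item, description in peas_music_recommender.items():
        print(f"{item}: {description}")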

Important vocabulary:

  • perception
  • action
  • sensor
  • actuator
  • environment
  • performance metric
  • deterministic
  • stochastic
  • fully observable
  • partially observable
  • dynamic environment
  • static environment
  • multi-agent environment
  • known and unknown environments
  • episodic environment
  • sequential environment
  • discrete vs. continuous environments

Discussion Question:

  • Using the classification schemes described above (PEAS, D-SOAKED), describe the following AI agents: a self-driving car, a face-recognition system at an airport, a scheduling system for a trucking business, R2D2.

References:

Russell, S. & Norvig, P. (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.