Artificial Intelligence/Bayesian Decision Theory

Bayesian Decision Theory

Problem

Imagine you have been recruited by a supermarket to survey the types of customers entering the store and identify their preferences, such as what kinds of products they buy. Let us assume the supermarket is particularly interested in the differences in buying habits between men and women. The plan is to assign a "male" or "female" label to each customer at the billing counter using an automated system.

Features

Based on existing research data, you decide that the first run will test against these easily identifiable properties:

  1. Women are usually shorter than men
  2. Men usually have shorter hair than women

The first criterion is relatively easy to test against; the second is more difficult because hair is hard to identify reliably from an image. Therefore you begin your test using just the first criterion to determine how good the results are.
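Bayes' rule turns this first criterion into a decision rule: compute the posterior probability of each class given the measured height and pick the larger one. Below is a minimal sketch of such a classifier, assuming Gaussian height distributions for each class; the means, standard deviations, and priors are made-up illustration values, not figures from the survey.

  import math

  # Hypothetical class-conditional height models (all numbers are assumed
  # for illustration, not taken from the survey described above).
  PRIOR = {"male": 0.5, "female": 0.5}          # P(class)
  MEAN_CM = {"male": 175.0, "female": 162.0}    # mean height per class
  STD_CM = {"male": 7.0, "female": 6.5}         # height spread per class

  def height_likelihood(height, cls):
      """p(height | class) under a normal distribution."""
      mean, std = MEAN_CM[cls], STD_CM[cls]
      z = (height - mean) / std
      return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

  def classify(height):
      """Return the class with the largest posterior P(class | height)."""
      # Unnormalised posteriors: p(height | class) * P(class)
      scores = {c: height_likelihood(height, c) * PRIOR[c] for c in PRIOR}
      evidence = sum(scores.values())            # p(height)
      posteriors = {c: s / evidence for c, s in scores.items()}
      label = max(posteriors, key=posteriors.get)
      return label, posteriors

  for h in (155, 168, 180):
      label, post = classify(h)
      print(f"height {h} cm -> {label} (P = {post[label]:.2f})")

If hair length could be measured reliably, its likelihood could simply be multiplied into the same unnormalised posterior (assuming the two features are independent given the class), and the decision rule would stay unchanged.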

Loopy Belief Propagation

One way that a Bayesian network could potentially “hallucinate,” or, more accurately, hold a delusion, is through a looping of probability. For example, if something looks vaguely like a toilet, it might increase the probability of the scene being a bathroom. But the activation of the “bathroom” node might then increase the probability of “toilet.” Over time the probabilities can grow, producing belief confidence out of proportion to the levels justified by the sensory data and prior knowledge. This is the problem of “loopy belief propagation.” The way to solve it is to allow information from node J to propagate to node K, except for the information that K has already sent to J. This prevents such loops from forming, and the solution can be implemented using inhibition. Loopy belief propagation has actually been suggested as a model of what is going on in delusional human patients.[1]
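A small numerical sketch of this effect follows. The two binary scene variables, the evidence vectors, and the compatibility matrix are all invented for illustration. The first loop recirculates each node's full belief, so the same evidence is counted repeatedly and the belief in "toilet" ends up inflated; the second pass sends a message that excludes what the receiving node already contributed, which recovers the correct marginal.

  import numpy as np

  # Two binary scene variables; index 0 = absent, 1 = present.
  # All numbers are made up for illustration.
  phi_toilet = np.array([0.4, 0.6])    # local evidence: image weakly suggests a toilet
  phi_bathroom = np.array([0.4, 0.6])  # and weakly suggests a bathroom
  psi = np.array([[3.0, 1.0],          # pairwise compatibility: "toilet" and
                  [1.0, 3.0]])         # "bathroom" prefer to agree

  def normalize(v):
      return v / v.sum()

  # Naive feedback loop: each node reuses the other's full belief, so the same
  # evidence keeps circling and the beliefs drift to overconfident values.
  b_t, b_b = normalize(phi_toilet), normalize(phi_bathroom)
  for _ in range(50):
      b_t = normalize(phi_toilet * (psi @ b_b))
      b_b = normalize(phi_bathroom * (psi @ b_t))
  print("looped belief:      P(toilet present) =", round(b_t[1], 3))

  # The fix described above: the message sent from "bathroom" to "toilet" is
  # built only from bathroom's own evidence (and messages from OTHER neighbours,
  # of which there are none here), never from what "toilet" sent it.
  m_b_to_t = normalize(psi @ phi_bathroom)
  b_t = normalize(phi_toilet * m_b_to_t)
  print("belief propagation: P(toilet present) =", round(b_t[1], 3))

With these particular numbers the looped version settles near 0.69 while the correct marginal is about 0.65; the gap typically widens as the coupling between the nodes gets stronger or as more nodes join the cycle.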

  1. Jardri, R. & Denève, S. (2013). Computational models of hallucinations. In R. Jardri, A. Cachia, P. Thomas, & D. Pins (Eds.), The neuroscience of hallucinations (pp. 289–313). New York, NY: Springer.