Artificial Intelligence/Search/Iterative Improvement/Hill Climbing

From Wikibooks, open books for an open world

Hill-Climbing as an optimization technique

Hill climbing is an optimization technique for solving computationally hard problems. It is best used in problems with “the property that the state description itself contains all the information needed for a solution” (Russell & Norvig, 2003).[1] The algorithm is memory efficient since it does not maintain a search tree: it looks only at the current state and its immediate successor states.

Hill climbing attempts to iteratively improve the current state by means of an evaluation function. “Consider all the [possible] states laid out on the surface of a landscape. The height of any point on the landscape corresponds to the evaluation function of the state at that point” (Russell & Norvig, 2003).[1]

In contrast with other iterative improvement algorithms, hill-climbing always attempts to make changes that improve the current state. In other words, hill-climbing can only advance if there is a higher point in the adjacent landscape.

Iterative Improvement and Hill-Climbing

The main problem that hill climbing can encounter is that of local maxima: the algorithm stops making progress towards an optimal solution because no adjacent state offers an immediate improvement.

Local maxima can be avoided by a variety of methods: Simulated annealing tackles this issue by allowing some steps to be taken which decrease the immediate optimality of the current state. Algorithms such as simulated annealing “can sometimes make changes that make things worse, at least temporarily” (Russell & Norvig, 2003).[1] This allows for the avoidance of dead ends in the search path.
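The willingness to take temporarily worsening steps can be sketched as a simple annealing loop. This is an illustrative implementation, not one from the cited sources; the objective, neighbour function, and cooling schedule are all hypothetical choices supplied by the caller:

```python
import math
import random

def simulated_annealing(state, value, neighbor, t0=1.0, cooling=0.995, t_min=1e-3):
    """Always accept an improving move; accept a worsening move with
    probability exp(delta / T).  As the temperature T cools, worsening
    moves become rarer and the search behaves more like hill climbing."""
    t = t0
    best = state
    while t > t_min:
        candidate = neighbor(state)
        delta = value(candidate) - value(state)
        # A worsening move (delta < 0) is still taken with some probability,
        # which is how the search can escape a local maximum.
        if delta > 0 or random.random() < math.exp(delta / t):
            state = candidate
        if value(state) > value(best):
            best = state
        t *= cooling
    return best
```

For example, maximizing a hypothetical objective such as `lambda x: -(x - 4) ** 2` from a distant start with a `x ± 1` neighbour function will typically wander to the peak, occasionally stepping downhill early on while the temperature is high.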

Random-Restart Hill-Climbing

Another way of solving the local maxima problem involves repeated explorations of the problem space. “Random-restart hill-climbing conducts a series of hill-climbing searches from randomly generated initial states, running each until it halts or makes no discernible progress” (Russell & Norvig, 2003).[1] This enables comparison of many optimization trials; finding an optimal solution then becomes a question of running sufficient iterations.
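The restart wrapper itself is independent of how the underlying climb is performed. A minimal Python sketch, where the `new_start`, `hill_climb`, and `value` callables are assumed interfaces supplied by the caller:

```python
def random_restart_hill_climbing(new_start, hill_climb, value, restarts=10):
    """Run hill climbing from several independently generated starting
    states and keep the best local maximum found across all runs."""
    best = hill_climb(new_start())
    for _ in range(restarts - 1):
        candidate = hill_climb(new_start())
        if value(candidate) > value(best):
            best = candidate
    return best
```

With randomly drawn starts, the probability that every run falls into the same poor basin of attraction shrinks geometrically with the number of restarts.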

Algorithm in Pseudocode

function HILL-CLIMBING(problem) returns a solution state
         inputs: problem, a problem
         static: current, a node
                 next, a node
         current ← MAKE-NODE(INITIAL-STATE[problem])
         loop do
                 next ← a highest-valued successor of current
                 if VALUE[next] ≤ VALUE[current] then return current
                 current ← next
         end

(Russell & Norvig, 2003)[1]
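The pseudocode translates almost line for line into Python. The sketch below pairs it with a small toy problem for illustration; the `IntegerPeakProblem` class and its `successors`/`value` interface are assumptions for the example, not part of the original text:

```python
class IntegerPeakProblem:
    """Toy problem (hypothetical): maximize f(x) = -(x - 3)^2 over the
    integers, where each state's successors are its two neighbours."""
    initial_state = 0

    def successors(self, x):
        return [x - 1, x + 1]

    def value(self, x):
        return -(x - 3) ** 2


def hill_climbing(problem):
    """Steepest-ascent hill climbing: move to the highest-valued successor,
    stopping as soon as no successor improves on the current state."""
    current = problem.initial_state
    while True:
        nxt = max(problem.successors(current), key=problem.value)
        if problem.value(nxt) <= problem.value(current):
            return current
        current = nxt
```

Starting from `x = 0`, the search steps through 1 and 2 and halts at the peak `x = 3`, where neither neighbour improves the value.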

Computational Complexity

Since the evaluation function looks only at the current state, hill-climbing requires little memory and does not suffer from space complexity issues. Its computational cost arises instead from the time required to explore the problem space.

Random-restart hill-climbing can arrive at optimal solutions within polynomial time for most problem spaces. However, for some NP-complete problems, the number of local maxima can cause exponential computation time. To address these problems, some researchers have looked at using probability theory and local sampling to direct the restarting of hill-climbing algorithms (Cohen, Greiner, & Schuurmans, 1994).[2]

Applications

Hill climbing can be applied to any problem where the current state allows for an accurate evaluation function, for example the travelling salesman problem, the eight-queens problem, circuit design, and a variety of other real-world problems. Hill climbing has been used in inductive learning models. One such example is PALO, a probabilistic hill-climbing system which models inductive and speed-up learning. Some applications of this system have been fitted into “explanation-based learning systems” and “utility analysis” models (Cohen, Greiner, & Schuurmans, 1994).[2]
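As an illustration of such an evaluation function, a common convention for the eight-queens problem (this particular formulation is a standard textbook choice, not taken from the cited sources) scores a board by the number of attacking pairs, negated so that higher is better and a solution scores 0:

```python
def queens_value(cols):
    """Evaluate an n-queens board given as cols[row] = column of the queen
    in that row.  Counts pairs of queens sharing a column or a diagonal;
    negated so that larger is better, with 0 for a valid solution."""
    attacks = 0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            # Same column, or the column gap equals the row gap (diagonal).
            if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i:
                attacks += 1
    return -attacks
```

A hill climber would then generate successors by moving a single queen within her row and keep the move that raises this value most.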

Hill climbing has also been used in robotics to manage multi-robot teams. One such example is the Parish algorithm, which allows for scalable and efficient coordination in multi-robot systems. The researchers designed “a team of robots [that] must coordinate their actions so as to guarantee location of a skilled evader” (Gerkey, Thrun, & Gordon, 2005).[3]

Their algorithm allows robots to choose whether to work alone or in teams by using hill-climbing. Robots executing Parish are therefore “collectively hill-climbing according to local progress gradients, but stochastically make lateral or downward moves to help the system escape from local maxima.” (Gerkey, Thrun, & Gordon, 2005).[3]

References

  1. Russell, S. J., & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
  2. Cohen, W., Greiner, R., & Schuurmans, D. (1994). Probabilistic Hill-Climbing. In Hanson, S., Petsche, T., Rivest, R., & Kearns, M. (Eds.), Computational Learning Theory and Natural Learning Systems, Vol. II. Boston: MIT CogNet.
  3. Gerkey, B. P., Thrun, S., & Gordon, G. (2005). Parallel Stochastic Hillclimbing with Small Teams. Multi-Robot Systems: From Swarms to Intelligent Automata, Vol. III, 65–77.