1 AStar Search Algorithm
1.1 Heuristic Function
1.1.1 Comparing AStar to BFS
1.1.2 Sample Behavior
1.1.3 Modifying Open States
1.1.4 Demonstration
1.2 AStar videos
1.3 Version : 2015/12/15

CS 2223 Dec 14 2015

Lecture Path: 26

Expected reading: pp. 668-683
Daily Exercise:

If you do not change direction, you may end up where you are heading.
Lao Tzu

1 AStar Search Algorithm

We have seen a number of blind searches over graph structures. In their own way, each one stores the active state of the search to make decisions.

These algorithms all make the following assumptions:

By relaxing these three assumptions we can introduce a different approach entirely that takes advantage of these fundamental mechanics of searching while adding a new twist.

In single-player solitaire games, a player starts from an initial state and makes a number of moves with the intention of reaching a known goal state. We can model this game as a graph, where each vertex represents a state of the game, and an edge exists between two vertices if you can make a move from one state to the other. In many solitaire games, moves are reversible, which leads to modeling the game using undirected graphs. In some games (e.g., card solitaire), moves cannot be reversed, and so these games would be modeled with directed graphs.

Consider the ever-present 8-puzzle in which 8 tiles are placed in a 3x3 grid with one empty space. A tile can be moved into the empty space if it neighbors that space either horizontally or vertically.
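To make the move rule concrete, here is a minimal sketch (not from the course code; class and method names are hypothetical) that counts the legal slides for a given blank position in a 3x3 grid:

```java
public class SlideMoves {
    // Return the number of legal slides when the blank is at (row, col)
    // in a 3x3 grid: each horizontal or vertical neighbor can move in.
    public static int countMoves(int row, int col) {
        int count = 0;
        int[][] deltas = { {-1, 0}, {1, 0}, {0, -1}, {0, 1} };
        for (int[] d : deltas) {
            int r = row + d[0], c = col + d[1];
            if (r >= 0 && r < 3 && c >= 0 && c < 3) {
                count++;
            }
        }
        return count;
    }
}
```

A corner blank permits 2 moves, an edge blank 3, and a center blank 4, so the branching factor of the search never exceeds 4.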

The following image demonstrates a sample exploration of the 8-puzzle game from an initial state.

The goal is to achieve a clockwise ordering of tiles with 1 being placed in the upper left corner and the middle square of the grid being empty. Given this graph representation of a game state, how can we design a search algorithm to intelligently search from a starting point to the known goal destination state?

In the field of Artificial Intelligence (AI) this problem is known as Path Finding. The strategy is quite similar to the DFS and BFS searches that we have seen, but you can now take advantage of a special oracle to select the direction that the search will expand.

First start with some preliminary observations:

In the earliest investigations into AI game playing, dating to the 1950s, two types of strategies emerged:

While it is still difficult to develop accurate heuristic scoring functions, this is an easier task than trying to fully understand game mechanics.

So we will pursue the development of a heuristic function. To explain how it can be used, consider the following template for a search function:

    # states that are still worth exploring; states that are done.
    open = new collection
    closed = new collection
    add start to open
    while (open is not empty) {
        select a state n from open
        for each valid move at that state
            generate next state
            if closed contains next {
                // do something
            }
            if open contains next {
                // do something
            } else {
                add next state to open
            }
    }

In a Depth-First search of a graph, the open collection can be a stack, and the state removed from open is selected using pop.

The above image shows a depth-limited ("max-depth") depth-first search, which stops searching after a fixed distance. Without this check, a DFS can consume vast amounts of resources while darting here and there over a very large graph (on the order of millions of nodes).

In a Breadth-First search of a graph, the open collection is a queue, and the state removed from open is selected using dequeue.

The following represents a BFS on an eight-puzzle search:

As you can see, this methodically investigates all board states K moves away before investigating states that are K+1 moves away.
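The only real difference between these two blind searches is the open collection itself. A minimal sketch (hypothetical names, not the course code) makes this concrete: exploring the same tiny graph with a stack discipline visits depth-first, while a queue discipline visits breadth-first:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BlindSearch {
    // Adjacency lists for a small graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}.
    static int[][] adj = { {1, 2}, {3}, {3}, {} };

    // Visit order from vertex 0: true = stack (DFS), false = queue (BFS).
    public static List<Integer> visitOrder(boolean useStack) {
        Deque<Integer> open = new ArrayDeque<>();
        boolean[] closed = new boolean[adj.length];
        List<Integer> order = new ArrayList<>();
        open.add(0);
        while (!open.isEmpty()) {
            // The only choice: pop from the back (stack) or the front (queue).
            int n = useStack ? open.pollLast() : open.pollFirst();
            if (closed[n]) continue;
            closed[n] = true;
            order.add(n);
            for (int next : adj[n]) {
                if (!closed[next]) open.add(next);
            }
        }
        return order;
    }
}
```

AStar keeps this same skeleton but replaces the stack or queue with a structure that hands back the most promising state.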

Neither blind approach seems useful on real games.

Wouldn’t it be great to remove the state from open that is closest to the goal state? This can be done if we have a heuristic function that estimates the number of moves needed to reach the goal state.

1.1 Heuristic Function

The goal is to evaluate the state of a game and determine how many moves it is from the goal state. This is more of an art form than a science, and it represents the real intelligence behind AI game playing.

For example, in the 8-puzzle, how can you identify the number of moves from the goal state? You can review the Good Evaluator which makes its determination by counting the number of misplaced tiles while also taking into account the sequence of existing tiles in the board state.

The goal is to compute a number. The smaller the number is, the closer you are to the goal state. Ideally, this function should evaluate to zero when you are on the goal state.
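As a sketch of such a function (much simpler than the Good Evaluator mentioned above; this code is illustrative, not from the course), the classic misplaced-tiles count satisfies both properties: it evaluates to zero exactly at the goal, and it never overestimates, since each misplaced tile requires at least one move. The goal board below assumes the clockwise ordering described earlier, with 1 in the upper left and the center empty:

```java
public class MisplacedTiles {
    // Clockwise goal: 1 in the upper left, blank (0) in the center.
    static int[][] goal = { {1, 2, 3}, {8, 0, 4}, {7, 6, 5} };

    // Count tiles (ignoring the blank) that are out of place; 0 means solved.
    public static int h(int[][] board) {
        int count = 0;
        for (int r = 0; r < 3; r++) {
            for (int c = 0; c < 3; c++) {
                if (board[r][c] != 0 && board[r][c] != goal[r][c]) {
                    count++;
                }
            }
        }
        return count;
    }
}
```

For example, swapping tiles 1 and 2 in the goal board yields h = 2, a lower bound on the actual number of slides needed.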

With this in mind, we can now compute a proper evaluation function.

AStar search computes the following function for each board:

f(n) = g(n) + h(n)

Here g(n) is the depth of the exploration from the start game state, while h(n) is the heuristic estimate of the number of moves remaining until the goal state is reached.
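A minimal sketch (hypothetical names, not the course's OpenStates structure) of how the search selects the next state to expand, using a priority queue ordered by f(n) = g(n) + h(n):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class FScore {
    // A search node carries its depth g and heuristic estimate h.
    static class Node {
        final String name;
        final int g, h;
        Node(String name, int g, int h) { this.name = name; this.g = g; this.h = h; }
        int f() { return g + h; }
    }

    // Return the name of the node AStar would expand next: smallest f(n).
    public static String nextToExpand(Node... open) {
        PriorityQueue<Node> pq = new PriorityQueue<>(Comparator.comparingInt(Node::f));
        for (Node n : open) pq.add(n);
        return pq.poll().name;
    }
}
```

Given nodes with (g, h) of (2, 5), (1, 3), and (4, 1), the second is expanded first, since its f = 4 beats 7 and 5; a deep node with a small heuristic can still lose to a shallow node that is far from the goal.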

1.1.1 Comparing AStar to BFS

If the heuristic function always returns 0, then AStar search devolves into BFS, since it will always choose to explore states that are K moves away before investigating states that are K+1 moves away.

It is imperative that the heuristic function not overestimate the distance to the goal state. If it does, then the AStar search will mistakenly select other states to explore as being more productive.

This concept is captured by the term admissible heuristic function. Such a function never overestimates, though it may underestimate. Naturally, the more accurate the heuristic function, the more productive the search will be.

1.1.2 Sample Behavior

The behavior of AStar is distinctive, as shown below:

1.1.3 Modifying Open States

One thing common to both BFS and DFS is that once a vertex is marked, it is never considered again. We need a more flexible arrangement. Specifically, if we revisit a state that is currently within the open collection, though it hasn’t yet been selected, it may be the case that a different path (or sequence of moves) has reduced its overall score g(n) + h(n). For this reason, AStar Search is represented completely using the following algorithm:

    public Solution search(EightPuzzleNode initial, EightPuzzleNode goal) {
        OpenStates open = new OpenStates();
        EightPuzzleNode copy = initial.copy();
        scoringFunction.score(copy);
        open.insert(copy);

        // states we have already visited.
        SeparateChainingHashST<EightPuzzleNode, EightPuzzleNode> closed =
            new SeparateChainingHashST<EightPuzzleNode, EightPuzzleNode>();

        while (!open.isEmpty()) {
            // Remove node with smallest evaluated score.
            EightPuzzleNode best = open.getMinimum();

            // Return if goal state reached.
            if (best.equals(goal)) {
                return new Solution(initial, best, true);
            }
            closed.put(best, best);

            // Compute successor states and evaluate.
            for (SlideMove move : best.validMoves()) {
                EightPuzzleNode successor = best.copy();
                move.execute(successor);

                if (closed.contains(successor)) {
                    continue;
                }

                scoringFunction.score(successor);
                EightPuzzleNode exist = open.contains(successor);
                if (exist == null || successor.score() < exist.score()) {
                    // Remove old one, if it exists, and insert better one.
                    if (exist != null) {
                        open.remove(exist);
                    }
                    open.insert(successor);
                }
            }
        }

        // No solution.
        return new Solution(initial, goal, false);
    }

To understand why this code can be efficient, focus on the key operations:

Clearly we want to use a hash structure to be able to quickly determine if a collection contains an item. This works for the closed state, but not for the open state.

Can you see why?

So we use the following hybrid structure for OpenStates:

    public class OpenStates {
        /** Store all nodes for quick contains check. */
        SeparateChainingHashST<EightPuzzleNode, EightPuzzleNode> hash;

        /** Each node stores a collection of INodes that evaluate to same score. */
        AVL<Integer, LinkedList> tree;

        /** Construct hash to store INode objects. */
        public OpenStates() {
            hash = new SeparateChainingHashST<EightPuzzleNode, EightPuzzleNode>();
            tree = new AVL<Integer, LinkedList>();
        }
    }
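To see why the hybrid works, here is a runnable sketch that substitutes java.util collections for the book's SeparateChainingHashST and AVL classes (an assumption for illustration; states are plain strings here): the hash map answers contains in constant expected time, while the balanced tree keyed by score hands back a minimum-score state in logarithmic time.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.TreeMap;

public class OpenStatesSketch {
    // Hash for O(1) expected contains; tree keyed by score for O(log n) minimum.
    private final Map<String, Integer> hash = new HashMap<>();
    private final TreeMap<Integer, LinkedList<String>> tree = new TreeMap<>();

    public void insert(String state, int score) {
        hash.put(state, score);
        tree.computeIfAbsent(score, k -> new LinkedList<>()).add(state);
    }

    public boolean contains(String state) {
        return hash.containsKey(state);
    }

    // Remove and return a state with the smallest score.
    public String getMinimum() {
        Map.Entry<Integer, LinkedList<String>> min = tree.firstEntry();
        String state = min.getValue().removeFirst();
        if (min.getValue().isEmpty()) {
            tree.remove(min.getKey());
        }
        hash.remove(state);
        return state;
    }

    public boolean isEmpty() {
        return hash.isEmpty();
    }
}
```

Neither structure alone suffices: a hash cannot find the minimum without scanning everything, and a tree alone makes the frequent contains checks more expensive than they need to be.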

1.1.4 Demonstration

8 puzzle demonstrations: 8puzzle animations (DFS,BFS,AStarSearch)

maze demonstration: control-left click in Launcher

1.2 AStar videos

BreadthFirst Search Video

AStar Search Video

1.3 Version : 2015/12/15

(c) 2015, George Heineman