A* (A-star) is the typical textbook algorithm: the concept is simple and easily explained, and it is widely known as one of the first algorithms you learn in AI. Yet for some reason it is rarely implemented properly.

A* does a graph search that minimizes node expansions, and therefore search time, by using a heuristic function. The heuristic function gives an estimate of the distance from the current node to the target node. For instance, if you are searching for a path between two points in a city, you can estimate the actual street distance with the straight-line distance. It will rarely be accurate, but it will be a good enough estimate to considerably speed up the search using A*.

What A* does is expand the most promising nodes first, according to the heuristic function. In the city example it would choose streets which go in the general direction of the target first and, only if those are dead ends, backtrack and try other streets.

For a more in-depth explanation of A* try the Wikipedia entry. You can find plenty of pseudo-code implementations. But the thing about pseudo-code is that it leaves out the details, and in this case the details are what make or break the performance. For a fixed-size graph of size N and branching factor b, the worst-case complexity of A* is O(bN)... or is it? For most implementations out there the worst-case complexity is actually O(bN²), because at each node expansion they do a linear search of the open list.

To avoid the linear search you need to use hashed collections for the open and closed lists, but without sacrificing the performance of getting the smallest element from the open list. For this you will need a multi-index collection for the open list: a hashed priority queue.

It is simple enough to implement your own hashed priority queue: take a normal priority queue implementation and add a hash map. But a better solution is to use Boost multi-index containers, in particular a bimap.

Here is what the A* implementation looks like.

std::vector<node_type> path(const node_type& source,
                            const node_type& target) {
    _open.push(source, 0, _h(source, target));
    while (!_open.empty()) {
        typename OpenList::value_type value = _open.pop();
        node_type& node = value.node;
        _costs[node] = value.g;

        if (node == target) {
            std::vector<node_type> path =
                build_path(source, target);
            cleanup();
            return path;
        }

        if (!is_closed(node)) {
            expand_node(node, target);
            close(node);
        }
    }

    // No path found
    return std::vector<node_type>();
}

void expand_node(const node_type& n,
                 const node_type& target) {
    std::vector<node_type> nodes = _graph.adjacent_nodes(n);
    for (std::size_t i = 0; i < nodes.size(); i += 1) {
        node_type new_node = nodes[i];
        cost_type c = _graph.cost(n, new_node);
        cost_type g = cost(n) + c;
        if (g < cost(new_node)) {
            _open.push(new_node, g, _h(new_node, target));
            _parents[new_node] = n;
        }
    }
}


Nothing new there. The magic is in the open list implementation. The bimap we are going to use looks like this:

typedef bimap<
    multiset_of<cost_pair, cost_pair_compare>,
    unordered_set_of<Node, NodeHash> > bimap_type;


What this means is that we have two collections: one for the costs and one for the nodes. The costs are not unique, so we use a sorted multiset. The nodes are unique, so we use an unordered (hashed) set. The bimap takes care of mapping between nodes and costs. In other words, we can get the smallest cost efficiently, we can search for a node efficiently, and we can map between the two efficiently.

With this bimap, here is how the push and pop methods are implemented:

void push(const Node& node, CostType g, CostType h) {
    typedef typename bimap_type::value_type value_type;
    typename bimap_type::right_iterator it =
        _bimap.right.find(node);
    if (it == _bimap.right.end())
        _bimap.insert(value_type(cost_pair(g, h), node));
    else if (g < it->second.first)
        _bimap.right.modify_data(it, _data = cost_pair(g, h));
}

value_type pop() {
    if (_bimap.empty())
        return value_type();

    typename bimap_type::left_iterator it = _bimap.left.begin();
    value_type value(it->second,
                     it->first.first,
                     it->first.second);
    _bimap.left.erase(it);

    return value;
}


As you can see, the implementation using a bimap is no more complicated than the usual one, except perhaps for the template code needed to deal with the bimap.

Some preliminary tests with a 500x500 grid graph show a 9x speedup when using the bimap in place of the property map open list. Your mileage may vary.

The complete source can be found on GitHub. You need the Boost libraries to compile, and you may need to modify the LIBS flag in the Makefile to match the installation path for Boost. In the repository you can find both the property map open list implementation (the one commonly used for A*) and the bimap implementation. The A* class uses C++ templates so that you can swap the search graph or the open list at will. Try both open list implementations yourself and see what kind of performance improvements you get with the bimap implementation.
