
Brief Discussion of Bug Algorithms

After searching briefly on Google Scholar, I read the homework already completed by my classmates, planning to choose something unique. Ian included a link to a paper comparing several bug algorithms (found here). Bug algorithms are navigation schemes for moving from a starting location to a target specified by its coordinates. The robots have only local sensing, and range sensors (like sonars) appear only in the more recent algorithms.

The bug family of algorithms started with Bug1. This algorithm heads straight for the target, but if it encounters an obstacle, it traces the obstacle's entire perimeter to find the point on the boundary closest to the target. Once it returns to where it first hit the obstacle, it goes back to that closest point and again attempts to drive straight to the target. This works, but the robot may end up traversing up to one and a half times around each obstacle, perhaps passing right by the target on the first loop. Other algorithms in the family avoid this behavior in various ways, each performing better in some situations than others.
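The key step of Bug1 — picking the leave point after a full trace of the boundary — is simple enough to sketch. This is only an illustration with a made-up square obstacle, not an implementation of the full algorithm:

```python
import math

def closest_perimeter_point(perimeter, target):
    """During Bug1's full trace of an obstacle boundary, the robot records
    every boundary point it visits; the leave point is the one closest
    to the target."""
    return min(perimeter, key=lambda p: math.dist(p, target))

# Toy example: corners of a unit-square obstacle, target off to the right.
perimeter = [(0, 0), (0, 1), (1, 1), (1, 0)]
target = (3, 0)
print(closest_perimeter_point(perimeter, target))  # (1, 0)
```

After computing this point, the robot must circle back along the boundary to reach it before resuming its straight-line drive — which is where the extra half loop in the worst case comes from.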

Navigating with a Map

The Bug algorithms solve their problem without maps, but assume accurate dead-reckoning. The algorithm I chose to examine in more depth assumes a map is available, but that the robot cannot measure distances perfectly. It was presented by Simmons and Koenig in their 1995 paper "Probabilistic Robot Navigation in Partially Observable Environments". Since the robot cannot be certain of its location within the map, they represent the map as a (discrete) state graph and have the robot track the probability that it is in each state, using a partially observable Markov decision process. These probabilities change as the robot moves; for example, if the robot moves east, the probabilities of the states east of the old candidates should increase. They also change based on the robot's environmental sensor readings, such as detecting landmarks or heading into a wall. Every state has a probability of producing each reading, and the probability that the robot is in a state is updated accordingly.
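The two updates described above — one after a motion command, one after a sensor reading — are the standard Bayesian belief update over a discrete state set. Here is a minimal sketch with a hypothetical three-state corridor; the transition and sensor probabilities are invented for illustration:

```python
# T_EAST[s][s2]: probability of ending in state s2 after "move east" from s.
T_EAST = [
    [0.2, 0.8, 0.0],   # from state 0: usually reach state 1
    [0.0, 0.2, 0.8],   # from state 1: usually reach state 2
    [0.0, 0.0, 1.0],   # state 2 ends the hall: bump the wall, stay put
]
# O[s][z]: probability of observing z (0 = "open", 1 = "wall ahead") in state s.
O = [
    [0.9, 0.1],
    [0.9, 0.1],
    [0.1, 0.9],
]

def predict(belief, T):
    """Motion update: push probability mass along the transition model."""
    n = len(belief)
    return [sum(belief[s] * T[s][s2] for s in range(n)) for s2 in range(n)]

def correct(belief, O, z):
    """Sensor update: reweight by the likelihood of reading z, then normalize."""
    b = [belief[s] * O[s][z] for s in range(len(belief))]
    total = sum(b)
    return [p / total for p in b]

belief = [1/3, 1/3, 1/3]          # start completely uncertain
belief = predict(belief, T_EAST)  # moving east shifts mass eastward
belief = correct(belief, O, 1)    # seeing a wall concentrates mass at state 2
print(belief)
```

After one move and one wall reading, the belief is sharply peaked at the end of the hall, even though no single step identified the state outright.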

When deciding how to move, all potential states are considered. Since the entire map is known, every state can report the single command (turn left 90°, turn right 90°, go forward) that the robot should take if it is in fact in that state. The probability that the robot is in that state determines the weight given to the state's vote, and the command with the highest total score is executed. This scheme is useful in situations such as when the robot may be in either of two parallel hallways and would want to proceed forward to the end of the hall regardless of which hallway it is actually in.

These commands are really short-term goals rather than explicit instructions, since the map cannot model temporary obstructions that the robot must still navigate around. This problem is much like the one the Bug algorithms solve. Simmons and Koenig, however, use a potential fields algorithm, which they attribute to Arkin's 1987 paper, "Motor Schema Based Navigation for a Mobile Robot: an Approach to Programming by Behavior". This approach assumes the positions of obstacles are known, so that every point can be assigned a direction vector: the sum of an attraction toward the target and repulsions from the obstacles. I assume that this requires the robot to build a local map of obstacle positions from its sensor readings.
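The vector-sum idea can be sketched as follows. This is a generic potential-field step, not necessarily Arkin's exact motor-schema formulation, and the gains and cutoff distance are invented:

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """Direction vector at pos: unit attraction toward the goal plus a
    repulsion from each obstacle within range d0, growing as the
    obstacle gets closer."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy)
    vx, vy = k_att * gx / d, k_att * gy / d
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        r = math.hypot(dx, dy)
        if r < d0:
            w = k_rep * (1.0 / r - 1.0 / d0)  # zero at the cutoff, grows near r=0
            vx += w * dx / r
            vy += w * dy / r
    return vx, vy

# Robot at the origin, goal to the east, one obstacle up and to the right:
# the resulting vector still points toward the goal but veers south, away
# from the obstacle.
print(potential_field_step((0, 0), (5, 0), [(1, 1)]))
```

A known weakness of this scheme is that the attraction and repulsions can cancel, leaving the robot stuck in a local minimum — one reason the short-term goals from the POMDP layer, rather than the final target, drive it.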

This model works well only for fairly regular buildings, since regular structure keeps the state graph small. In a less regular environment, the discretization must be finer, and the model grows quickly.

It is fascinating to understand the methods developed to handle uncertainty in a robot's measurements.