
Project 3: Following People and Walls

Lanya Butler and Mary Fletcher

We learned last week that sonars and infrared sensors, while reasonably effective for avoiding collisions, cannot give a precise picture of the shape and orientation of obstacles. In this project, we first improve our pushPull program, which makes the robot follow someone holding a flat surface. We then develop an algorithm for following a wall indefinitely, which uses RANSAC to derive a model of the wall. For both tasks, we use a laser mounted on the front of the robot to orient precisely parallel or perpendicular to walls.

We chose to undertake an unrelated extension by implementing velocity space navigation, as introduced by Fox, Burgard, and Thrun.

Laser

The laser attached to our robot sends out 681 beams over 240 degrees, for roughly one reading every 6 milliradians. It determines distances by reading the phase of the light wave that returns. Since light suffers far less from specular reflection than sonar, the laser can detect walls even when they sit at arbitrary angles to the robot's reference frame. It cannot, of course, detect transparent materials such as glass.

Following People

Last week's pushPullTurn worked reasonably well, but the robot was easily "distracted" by surrounding walls. This week we use the laser's precision to stay oriented directly toward our panel and track it alone.

turnTo

The sonars and infrared sensors do a reasonably good job of maintaining distance, but the laser is an important addition for orienting toward the panel we want to follow. Last week our resolution for rotation was 360/16 = 22.5 degrees, which is quite poor. With the laser, we can tell the robot to turn for much smaller deviations, which gives the leader far more control over the robot.

turnTo is a simple program that orients the robot toward the nearest wall, or other reasonably large nearby object. At every time step, it finds the nearest wall by choosing the angle that has the smallest average laser reading over 40 adjacent laser increments. We chose 40 somewhat arbitrarily and found experimentally that it makes the robot turn as expected. Since we use only the laser, the robot will not find a wall directly behind it.
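A minimal sketch of this windowed search, assuming the scan arrives as a list of 681 ranges in millimeters (the function names, window parameter, and index-to-bearing helper are our own illustration, not the exact code):

def nearest_wall_index(ranges, window=40):
    """Return the center index of the window of adjacent readings with
    the smallest average range, i.e. the direction of the nearest wall."""
    best_center, best_avg = None, float('inf')
    for start in range(len(ranges) - window + 1):
        avg = sum(ranges[start:start + window]) / window
        if avg < best_avg:
            best_avg, best_center = avg, start + window // 2
    return best_center

def index_to_bearing(i, n=681, fov_deg=240.0):
    """Convert a reading index to a bearing in degrees, with 0 straight ahead
    (681 readings span 240 degrees)."""
    return -fov_deg / 2.0 + i * fov_deg / (n - 1)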

follow

Our program for following people is essentially our pushPullTurn program from last week with a modified version of turnTo replacing our old orientation subroutine. Instead of finding and zeroing in on the nearest laser reading, our new getDirOfWall function only turns if there is nothing in front of it. It alternately checks to the left and right at increasingly large angles until it sees a nearby wall. Nearby walls are identified by observing a 40-reading cone with average distance under one meter.
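A rough sketch of this alternating search, under the same assumptions about the scan format (the step size and helper name are illustrative):

def get_dir_of_wall(ranges, window=40, threshold_mm=1000, step=20):
    """Check a 40-reading cone straight ahead, then cones alternately to one
    side and the other at increasingly large offsets, until a cone's average
    range falls under 1 m. Returns that cone's center index, or None if
    nothing nearby is found."""
    center = len(ranges) // 2
    for offset in range(0, center, step):
        for side in (+1, -1):
            c = center + side * offset
            lo, hi = c - window // 2, c + window // 2
            if lo < 0 or hi > len(ranges):
                continue
            cone = ranges[lo:hi]
            if sum(cone) / len(cone) < threshold_mm:
                return c
    return None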

Like last week, the robot only tries to correct its following distance when it has the correct orientation, so it alternates between distinct rotation and translation phases. We did make the turning phase more reactive this week, so it constantly reassesses the wall direction as it turns. This is better than last week's approach, in which we determined an angle to turn and then turned that far before checking any sensors again.

Audio Feedback

We kept audio feedback, but decided that the automated voice did not give the effect we wanted for the robot. Like last week, it only gives feedback upon entering the "stuck" or "happy" states, and it uses system calls, delegated to new threads, to make sound. We use the program SoX to play a sound clip of a beep (found on Free-Loops.com). To differentiate between states, we play the beep at a lower sample rate when the robot is stuck.
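A hedged sketch of what such a call can look like, assuming SoX's play command; the clip name and sample rates below are placeholders rather than the ones we actually use:

import subprocess
import threading

def beep(stuck=False):
    """Play the beep clip with SoX's play command in a separate thread so the
    control loop never blocks on audio. Reinterpreting the file at a lower
    sample rate lowers the pitch for the "stuck" state."""
    rate = "11025" if stuck else "44100"            # placeholder rates
    cmd = ["play", "-q", "-r", rate, "beep.wav"]    # placeholder file name
    threading.Thread(target=subprocess.call, args=(cmd,), daemon=True).start()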

Performance

The robot can still get distracted, but it is possible to have it follow through doorways and along walls as long as you do not get too far ahead. If you lose the robot's focus and it does turn to a wall, it can be difficult to regain its attention unless you first drive it away.

Following Walls

When navigating buildings, it is generally useful to align the robot's orientation, and thereby travel direction, to the hallway. The task of traveling parallel to a straight wall is called "wall following." We wrote a program that follows a wall on the robot's left until either losing the wall or encountering an obstacle, at which point it terminates.

Modelling the Wall

While we could probably get good wall-following behavior by adjusting turnTo to turn to 90 degrees relative to the nearest wall, we chose to implement RANSAC and get an explicit model of the wall. This is a great opportunity to acquaint ourselves with the algorithm, but it does mean that our program requires a linear (not curved) wall.

In deciding how to model the wall, we chose a parameterization that directly gives us the measurements needed in our control law. These are rho and theta, depicted in the diagram at right. The lines in red show the sensor input, and the lines in green show the values used in the model. Using vector algebra and trigonometry, rho can be calculated from just two laser readings separated by a known, arbitrary angle. Theta is closely related to the "slope" of the wall, measured in the local frame.
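As an illustration, here is a sketch of that calculation from two hypothetical readings: ranges d1 and d2 at bearings a1 and a2 in the robot's frame (the function name is ours, not the actual code):

import math

def wall_model(d1, a1, d2, a2):
    """Fit a line through two laser returns and express it as (rho, theta):
    rho is the perpendicular distance from the robot to the wall, and theta
    is the wall's direction relative to the robot's heading. Bearings are in
    radians; ranges in millimeters."""
    x1, y1 = d1 * math.cos(a1), d1 * math.sin(a1)
    x2, y2 = d2 * math.cos(a2), d2 * math.sin(a2)
    dx, dy = x2 - x1, y2 - y1
    rho = abs(x1 * y2 - x2 * y1) / math.hypot(dx, dy)   # distance from robot to the line
    theta = math.atan2(dy, dx)                           # wall direction in the local frame
    return rho, theta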

Finding the Model

We use the RANSAC method to find the model from all of the sensor readings. We assume that the robot is approximately oriented correctly, so we only sample from -90 to -30 degrees. It is highly likely that any two readings from this range will give a good model, but due to irregularities in the wall it is still useful to consider multiple models. By running the algorithm with a high number of iterations, we found that usually at least 80% of the points given by the laser fit the best model, even as it approaches the end of the hall. To be generous, we assume that only 70% are inliers, so if we want the method to find the model 99% of the time, we need

ln(1 - 0.99) / ln(1 - 0.7^2) ≈ 6.8 < 7 iterations.

Seven iterations does in fact yield good performance.
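A minimal sketch of that loop, reusing the wall_model helper sketched above; the inlier tolerance here is an assumed placeholder, not a tuned value:

import math
import random

def point_line_distance(px, py, x1, y1, x2, y2):
    """Perpendicular distance from (px, py) to the line through (x1, y1) and (x2, y2)."""
    return abs((x2 - x1) * (y1 - py) - (x1 - px) * (y2 - y1)) / math.hypot(x2 - x1, y2 - y1)

def ransac_wall(readings, iterations=7, inlier_tol_mm=50):
    """readings: list of (range_mm, bearing_rad) pairs from the -90 to -30
    degree sector. Samples two readings per iteration, fits a line, counts
    inliers, and returns the (rho, theta) model with the most inliers."""
    points = [(d * math.cos(a), d * math.sin(a)) for d, a in readings]
    best_model, best_count = None, -1
    for _ in range(iterations):
        i, j = random.sample(range(len(points)), 2)
        (x1, y1), (x2, y2) = points[i], points[j]
        if (x1, y1) == (x2, y2):
            continue
        count = sum(1 for (px, py) in points
                    if point_line_distance(px, py, x1, y1, x2, y2) < inlier_tol_mm)
        if count > best_count:
            best_count = count
            best_model = wall_model(*readings[i], *readings[j])
    return best_model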

Control Law

Given the distance and orientation of the wall to the robot, we use the control law from class (introduced by Bemporad, De Marco, and Tesi in 1998) in its simpler form. The translational velocity never changes from its desired speed, except when it must slow down to allow for a fast angle change. The angular velocity is given as

-k * (modelRho - desiredRho) / (desiredVel * cos(modelTheta)) - beta * tan(modelTheta)

where k and beta are parameters that specify the relative importance of maintaining a constant distance from the wall and staying parallel, respectively.
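A sketch of that computation as a function; the default parameter values are placeholders, not the ones we actually use:

import math

def wall_follow_angular_velocity(rho, theta, desired_rho=500.0,
                                 desired_vel=200.0, k=0.5, beta=1.0):
    """Angular velocity from the simple form of the control law. rho and
    desired_rho are in mm, desired_vel in mm/s, theta in radians; k weights
    the distance error and beta the parallelism error."""
    return (-k * (rho - desired_rho) / (desired_vel * math.cos(theta))
            - beta * math.tan(theta))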

This does not guarantee that the angular velocity will change slowly or stay within a permissible range. In order to keep motion smooth, we use our capSpeed function from last week. As with the goto program, any change to the angular velocity must be accompanied by a proportional change to the translational velocity. We ended up separating our caps for min/max velocity from acceleration control so that we could adjust the two velocities in a consistent manner.
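For illustration, one way such a proportional cap can look (a sketch only; the limits shown are placeholders, not our actual caps, and the real capSpeed also limits acceleration):

def cap_speeds(trans_vel, ang_vel, max_trans=300.0, max_ang=0.8):
    """If either requested velocity exceeds its cap, scale both by the same
    factor so the robot still traces the same curve, just more slowly."""
    scale = 1.0
    if abs(ang_vel) > max_ang:
        scale = min(scale, max_ang / abs(ang_vel))
    if abs(trans_vel) > max_trans:
        scale = min(scale, max_trans / abs(trans_vel))
    return trans_vel * scale, ang_vel * scale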

Since we use the simple form of the control law with the constant beta parameter, our program only works when the robot starts out oriented almost parallel, within approximately 15 degrees. As long as the robot is positioned well enough, it will adjust its distance and orientation as it drives along the wall, and terminate when the wall ends or it is blocked.

Following a Path

Note: I only call this "following a path" to try for some consistency with the rest of the project. It is not the best name because the algorithm is completely reactive, with no concept of a long-term path.

Our goto function from last week only works in the absence of obstacles. If the robot ever detects an obstacle in front of it, it will stop and exit the program even if there are other simple paths to reach the target. As an unrelated extension for this week, we decided to implement the velocity space navigation algorithm as introduced by Fox, Burgard and Thrun in 1997.

This algorithm is reactive: at each time step, the robot determines its new velocities based entirely upon its location relative to the target and nearby obstacles. There is no memory of what has happened before, no preplanned path, nor any other concept of state. It differs fundamentally from our other motion functions because it does not use a control law. Instead, it considers a number of different possible velocities, rates them, and chooses the best. The rating function is a weighted sum of three components:


Heading

The main goal of the program is to move to a target location, specified initially as a point in the robot's reference frame and recorded in global coordinates. A good translation-rotation velocity pair will move the robot so that, at the start of the next cycle, it is headed directly toward the target. We compute the future location of the robot by plugging the velocities into an equation for the integral of the motion. The future orientation is particularly easy to find: t0 + deltaTime * angularVelocity. We then find the difference between the future orientation and the orientation that would point the robot at the target (from the future location). Small angles are good, so we subtract the angle from pi to get the "score" for this velocity pair. Note that the returned value is in milliradians.
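A sketch of this component, assuming an arc model for the motion integral; the cycle time below is a placeholder:

import math

def heading_score(pose, target, trans_vel, ang_vel, dt=0.25):
    """pose = (x, y, theta) in mm and radians, target = (x, y) in global mm.
    Integrate the velocities forward dt seconds along a circular arc, then
    score how directly the future pose faces the target: pi minus the heading
    error, reported in milliradians so larger is better."""
    x, y, th = pose
    new_th = th + ang_vel * dt
    if abs(ang_vel) > 1e-6:
        r = trans_vel / ang_vel          # turning radius
        new_x = x + r * (math.sin(new_th) - math.sin(th))
        new_y = y - r * (math.cos(new_th) - math.cos(th))
    else:                                # straight-line motion
        new_x = x + trans_vel * dt * math.cos(th)
        new_y = y + trans_vel * dt * math.sin(th)
    to_target = math.atan2(target[1] - new_y, target[0] - new_x)
    error = abs(math.atan2(math.sin(to_target - new_th),
                           math.cos(to_target - new_th)))
    return (math.pi - error) * 1000.0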


Obstacle Avoidance

This component penalizes velocities that may collide with an obstacle. The value is the distance from the robot's trajectory to the nearest obstacle.

At the start of each cycle, we compute an obstacle distance array. The array represents the space around the robot, with the robot in the center. Each cell in the array represents a 3 cm x 3 cm patch. Sensor readings give obstacle locations: each reading creates a linear obstacle starting at the measured distance and extending away from the robot along the direction of the reading. We only use the laser, so the robot has no knowledge of obstacles behind it. Obstacles are marked as 0's in the array, and we use the grassfire algorithm to label the remaining cells with their distance to the nearest obstacle.

For each proposed velocity, we map the robot's trajectory if it follows the velocity for 2 seconds. We index into the obstacle array based on this trajectory and report the minimum value encountered. This value, multiplied by the 3 cm resolution to get the distance in mm, is the second term of the rating sum.
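A sketch of the grassfire (breadth-first) pass over that grid; the grid size below is a placeholder:

from collections import deque

GRID = 201          # placeholder: cells per side, robot at the center
CELL_MM = 30        # 3 cm x 3 cm cells

def grassfire(obstacle_cells):
    """obstacle_cells: set of (row, col) cells that contain obstacles.
    Returns a GRID x GRID array in which every cell holds the number of cells
    to its nearest obstacle (0 on the obstacles themselves)."""
    INF = GRID * GRID
    dist = [[INF] * GRID for _ in range(GRID)]
    queue = deque()
    for r, c in obstacle_cells:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:                         # expand outward one ring at a time
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < GRID and 0 <= nc < GRID and dist[nr][nc] > dist[r][c] + 1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist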


Speed

This is very simple: higher translational velocities are preferable to lower ones, except when near the target, at which point the robot should decelerate to avoid overshooting. We identify a desired velocity by considering the current distance from the target and the current speed. The value is the absolute difference, in mm/s, between the proposed translational velocity and the desired velocity. We subtract this value from 500 to make high values good.
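Putting the pieces together, a sketch of the speed term and the final weighted sum; the weights shown are placeholders, not our tuned values:

def speed_score(proposed_vel, desired_vel):
    """Third component: 500 minus the absolute difference (mm/s) between the
    proposed and desired translational velocities."""
    return 500.0 - abs(proposed_vel - desired_vel)

def rate_velocity(heading, clearance_mm, speed,
                  w_heading=1.0, w_clearance=1.0, w_speed=1.0):
    """Weighted sum of the three components; the candidate velocity pair with
    the largest rating is the one the robot executes."""
    return w_heading * heading + w_clearance * clearance_mm + w_speed * speed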



Figure: the marked angle is the difference between the future heading and the desired future heading.


Figure: a sample obstacle map, created when Lanya sat directly in front of the robot.

Performance

We do not have all of the weight parameters optimally balanced, so the robot is inclined to take looping paths, but we are reasonably confident that each component is working correctly. In the video below, the robot is asked to go forward 2.5 m, but there is a trash can in the way! It curves around to avoid the obstacle and reaches the desired destination. It initially overshoots, so it has to take time to situate itself correctly, by which point its dead reckoning has drifted.

What We Learned

We got to experience the effects of running slow code in real time. Before we implemented the grassfire algorithm to label the cells, the robot would overshoot its target and have difficulty getting back. Converting to the faster algorithm made the robot behave much more as we expected.
