# CS441 Systems Biology II Project 4 - State Machine Architecture

#### Mary Fletcher and Lanya Butler

> A common pattern for robotics systems that perform complex tasks is a state machine architecture. This week we wrote two programs that execute compound instructions, so we implemented both using this concept. The first visits any number of specified waypoints, pausing at each; to complete this task, we improved upon our velocity space navigation algorithm from last week. The second program follows walls, turning at both interior and exterior corners, until it detects a circle in its path.

### Point-To-Point

First we wrote a program to traverse a set of coordinate points. It runs the velocity space navigation algorithm at each step, allowing for obstacle avoidance. After the robot reaches a goal location, it "beeps" and sets off for its next goal. We also fixed a bug in the velocity space navigation algorithm from project 03 so that the obstacle map now draws the lines extending from each obstacle correctly.

#### Overall Structure

We restructured our code to follow a state machine architecture with three steps per control cycle: state update, navigate, and act. The state update step keeps track of the current target location. Navigate determines the best translational and angular velocities to reach that target, based on the robot's current pose and the obstacles around it. Act sets the robot's velocity.

#### State Machine

![Point-to-point state machine diagram](pointToPoint_stateMachine.png)

The figure above shows a schematic diagram of our state machine. When running the program, the user can provide any number of locations to travel to, which are stored in an array in a `robotState` structure. We start the robot in the `GOTO` state, where we use our velocity space navigation algorithm to move toward the destination point, specified by an index in `robotState`. This algorithm brings the robot to a halt at the destination, but to ensure that it has completely stopped before starting toward the next point, we move to `PAUSE` once the robot has gotten sufficiently close to each target.

`PAUSE` decelerates until the robot is at rest, at which point, in `updateState`, we increment the index and go back to `GOTO`. This loop exits when the index reaches the end of the waypoint array, indicating that the program should terminate.
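In outline, this logic can be sketched as follows. This is a condensed illustration, not our actual code: the struct fields, the `DONE` state, and the `CLOSE_ENOUGH` threshold are assumptions; only `GOTO`, `PAUSE`, and `updateState` come from the text above.

```c
/* Sketch of the point-to-point state update step.
   Field names and the CLOSE_ENOUGH threshold are illustrative assumptions. */
typedef enum { GOTO, PAUSE, DONE } State;

typedef struct {
    State  state;
    int    target;        /* index into the waypoint array             */
    int    nTargets;      /* number of waypoints the user supplied     */
    double distToTarget;  /* distance to the current waypoint          */
    double speed;         /* current translational speed               */
} RobotState;

#define CLOSE_ENOUGH 50.0   /* mm; assumed stopping threshold */

void updateState(RobotState *rs) {
    switch (rs->state) {
    case GOTO:
        /* velocity-space navigation is already slowing us near the goal;
           hand off to PAUSE once we are sufficiently close */
        if (rs->distToTarget < CLOSE_ENOUGH)
            rs->state = PAUSE;
        break;
    case PAUSE:
        /* once fully at rest, advance the index and go back to GOTO,
           or terminate when the index reaches the end of the array */
        if (rs->speed == 0.0) {
            rs->target++;
            rs->state = (rs->target < rs->nTargets) ? GOTO : DONE;
        }
        break;
    case DONE:
        break;
    }
}
```

The navigate and act steps then read `rs->state` to decide which velocities to command.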

#### Performance

This program successfully visits all points, but the motion is not particularly smooth. The first video clearly demonstrates how abruptly the robot moves as it reaches each point. The command was to visit the points (1, 1), (2, 1), and (0, 0), all points described in the robot's initial frame of reference. Notice that it ends in approximately the same location as it started. The fact that it rotates more than expected indicates that our angular velocity is too high. This may be related to the fact that Mage doubles angular velocities.

The second video demonstrates that obstacle avoidance still works.

Video: `frodo_ptptriangle.mov`

Video: `frodo_ptpdodge.mov`

### Better Wall Follow

We wrote a program that has the robot find and approach a wall, follow it on the left, and navigate interior and exterior corners by turning appropriately. We used RANSAC to model the wall, and the control law from project 03 (introduced by Bemporad, De Marco, and Tesi in 1998) for the wall following. To set up for the wall following, the robot turns until it is perpendicular to a wall, moves forward until it reaches the desired standoff distance from the wall, and then turns until it is parallel with the wall. These tasks are our first three states. Once it exits the last of them, it never goes back into "find-wall" mode, though perhaps it would be appropriate to add an edge from the other states for when the robot gets lost (such as if `EXT_CORNER` does not exit after a full turn).
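The state set described above can be written down as an enum. This is a sketch: the three setup-state names are hypothetical, while `FOLLOW_WALL`, `INT_CORNER`, and `EXT_CORNER` appear in our program.

```c
/* Sketch of the wall-follow state set. The first three (setup) states run
   once, in order, and are never re-entered; the remaining states cycle
   among themselves. Setup-state names here are hypothetical. */
typedef enum {
    TURN_PERPENDICULAR, /* rotate until perpendicular to a wall       */
    APPROACH,           /* drive to the desired standoff distance     */
    TURN_PARALLEL,      /* rotate until parallel with the wall        */
    FOLLOW_WALL,        /* straight wall following (control law)      */
    INT_CORNER,         /* interior corner: turn away from the wall   */
    EXT_CORNER          /* exterior corner: turn around the wall end  */
} WallState;

/* The setup chain is linear; corner states return to FOLLOW_WALL. */
WallState nextSetupState(WallState s) {
    return (s < TURN_PARALLEL) ? (WallState)(s + 1) : FOLLOW_WALL;
}
```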

Within the wall-follow structure, the robot may enter either corner state, execute a turn, and return to straight wall following. Whenever it turns, our stop condition is that the wall model's angle has reached the desired orientation.

![Wall-follow state machine diagram](wallfollow_stateMachine.png)

![Wall model diagram](wallModel.png)

#### Detecting Circles

As an extension, we added the rule that the robot terminates upon detecting a circular object (such as a marker post at the end of a maze). We do circle detection analogously to our line detection: using RANSAC to iteratively select 3 points, model the circle they describe, and check whether there are enough additional inliers to give credence to the model. The robot should begin to stop if it sees a post 0.5 m ahead. Experimentally, we found that a post 0.5 m away gives about 30 inliers. If we look only in the front 60 degrees, we have 170 readings from which to sample. To get 99% success with a 30/170 ≈ 17% inlier rate, we need

ln(1 - .99) / ln(1 - .17^2) < 160

iterations. This seems high in comparison to the required 7 that we need to detect walls, but it is still not computationally intensive, particularly because we break out of the loop as soon as we find a "good enough" circle.

Our model for a circle is (x0, y0, r), where (x0, y0) is the center of the circle in local coordinates and r is the radius (in mm). For now, we are only concerned with the model as a way to compute inliers, since we only care about the presence of a circle, not its size or location. We compute it from three points using the geometric construction we found on mathopenref.com. This gives a "recipe" for finding the center of the circle by intersecting two lines, which is easy to implement.

#### Performance

We get many false positive readings for our circle detection, including some corners. Perhaps a somewhat larger circle (ours has a 5cm radius) would be better since we could require more inliers to confirm the model. Since we get false positives, we changed our program to beep, rather than terminate, upon detecting a circle. The only other times it beeps are at transitions between the `FIND_WALL` states.

Video: `frodo_intwallfollow.mov`

In this video, the robot finds the wall, executes an interior corner, traverses a large exterior corner, and gets confused at the next exterior corner. It seems to detect something in front of it, leading to the `INT_CORNER` state. After turning some more, it will eventually go back into the `FOLLOW_WALL` state so it can correctly enter `EXT_CORNER`, but since we do not include a direct link between corner types, the robot cannot switch between these states.

Notice that it beeps as it approaches the interior corner. It sees the corners of the boxes, which are rounded at close to a 5cm radius, so, while undesirable, it is logical that it would detect them as circles. As it takes the external corner, it beeps when facing the camera because it sees Lanya's legs, which is definitely expected behavior.

Video: `frodo_extwallfollow.mov`

This video shows a second, successful attempt at taking the exterior corner. Note that the sound is delayed; the first beeps occur as the robot approaches the wall, and it beeps four times at the post _before_ we turn it off.
