# Grid-based autonomous

So what we’ve been doing (with partial success) is implementing a grid on which our robot keeps track of where it “thinks” it is based on a calculated angle in relation to the field, the angle that its wheels are facing in relation to its center line, and our encoder counts for distance.
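The dead-reckoning update described above can be sketched roughly like this (all names and the encoder scale are my own assumptions, not the team's actual code): each loop, take the distance the encoders say the robot rolled and the field-relative direction of travel (robot heading plus steering angle), and integrate position.

```python
import math

TICKS_PER_INCH = 20.0  # assumed encoder scale; tune for your drivetrain

def update_pose(x, y, heading_deg, steer_deg, delta_ticks):
    """Return the new (x, y) estimate after delta_ticks of wheel travel.

    heading_deg is the robot's angle relative to the field; steer_deg is
    the wheel angle relative to the robot's center line.
    """
    distance = delta_ticks / TICKS_PER_INCH
    travel = math.radians(heading_deg + steer_deg)  # field-relative travel direction
    return (x + distance * math.cos(travel),
            y + distance * math.sin(travel))
```

With a heading of 90 degrees, straight wheels, and 200 ticks, the estimate moves 10 inches along +y.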

For autonomous, we’ve been trying to use a waypoint system to drive the robot. Basically, we adjust the angle of the wheels to point towards the waypoints. The problem we’ve been having is that as it approaches a waypoint, it’ll sometimes veer off in the opposite direction, off the field. Not quite sure what’s wrong there… Another problem is that it’ll sometimes approach a waypoint the “wrong way.” Here’s a diagram illustrating what we’re doing: the green path and robot are what we want; the red path is what we don’t want, as it leaves an impossible path to the next point.

http://img149.imageshack.us/img149/1525/fieldautonomoushv8.jpg
The image sucks, I know.
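For reference, "point the wheels at the waypoint" usually boils down to something like the sketch below (function names are mine): compute the field-relative bearing to the target with atan2, subtract the robot's heading, and wrap the result so the wheels always take the short way around.

```python
import math

def wrap_deg(a):
    """Wrap an angle into (-180, 180] so we always turn the short way."""
    return (a + 180.0) % 360.0 - 180.0

def steering_to_waypoint(x, y, heading_deg, wx, wy):
    """Steering angle (relative to the robot's center line) toward (wx, wy)."""
    bearing = math.degrees(math.atan2(wy - y, wx - x))  # field frame
    return wrap_deg(bearing - heading_deg)
```

A robot at the origin facing along +x, aiming at (10, 10), gets a 45-degree steering command.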

My current ideas are…

1. Node prediction - use the waypoint ahead of the current one to modify the angle.
2. Simple inequality check - see if I’m outside or inside the path; outside is good, inside is not.
3. Increase the resolution of waypoints - add more waypoints in between.
4. Set an angle goal at specific waypoints, so the goal becomes reaching the point while slowly orienting the wheels in the given direction (probably coupled with one of the above).
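Idea 2 can be done with a 2-D cross product (a sketch with my own naming, not a definitive implementation): the sign tells you which side of the segment from the previous waypoint to the current one the robot is on, and you decide from the leg's geometry which sign counts as "outside."

```python
def side_of_path(px, py, ax, ay, bx, by):
    """2-D cross product of (B - A) and (P - A).

    > 0 if point (px, py) is left of the directed line A -> B,
    < 0 if right, 0 if exactly on it.
    """
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)
```

For a leg running along +x, a point above the line gives a positive value and a point below gives a negative one.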

Has anyone else done something like this before? My ideas seem OK…on paper, but I want some feedback before investing my time and effort.

I believe Wildstang first did this in ’03; they have a Flash animation about it on their website. http://www.wildstang.org/main/stangps.php

I don’t know many of the details, but the system we’ve implemented includes heading information for each waypoint. The path between is then interpolated based upon waypoint location and heading. It’s worked well for us.
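One common way to interpolate a path from waypoint positions plus headings is a cubic Hermite spline, where each endpoint contributes its location and a tangent vector derived from its stored heading. This is only my guess at the technique, not Wildstang's actual code, and all names here are mine:

```python
import math

def hermite(p0, h0_deg, p1, h1_deg, t, scale=1.0):
    """Point at parameter t in [0, 1] on a curve from p0 to p1.

    Tangents point along each waypoint's stored heading, with magnitude
    proportional to the chord length (scaled by `scale`).
    """
    d = math.hypot(p1[0] - p0[0], p1[1] - p0[1]) * scale
    t0 = (d * math.cos(math.radians(h0_deg)), d * math.sin(math.radians(h0_deg)))
    t1 = (d * math.cos(math.radians(h1_deg)), d * math.sin(math.radians(h1_deg)))
    # standard cubic Hermite basis functions
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return (h00*p0[0] + h10*t0[0] + h01*p1[0] + h11*t1[0],
            h00*p0[1] + h10*t0[1] + h01*p1[1] + h11*t1[1])
```

When both headings already point along the chord, the curve collapses to the straight line between the waypoints; mismatched headings bow it out so the robot arrives facing the stored direction.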

We’re doing exactly what Wildstang did with the whole grid system and waypoint navigation (strangely enough, we never heard that they had done it too; weird, eh?). Yet if I’m going over my calculations right, I don’t think it should matter if the angle is immediately impossible; it should still turn in the right direction. I’ll do a little more testing when I get back to the warehouse.

Uh, just wondering, M. Krass, could you elaborate on the waypoint heading info? Is it a heading target that “should” be achieved, stored inside the waypoint?

Well, that’s definitely the ’03 field in that animation, so if that was the first year they did it, then you are correct.

This is likely something along the lines of what M Krass meant by heading information, and keep in mind that I’ve not implemented this type of system before, but…

Suppose you have a set of waypoints a1, a2, a3, …, aN.

Consider encoding in each waypoint information about the 2nd waypoint out (e.g. any waypoint aN has information about waypoints aN+1 and aN+2).

Currently it sounds as if your waypoint system only has information (location) about the next waypoint (aN+1).

What I would attempt is encoding in each waypoint not only the next waypoint’s location, but also the angle the wheels/robot need to be facing when arriving at that waypoint. Possibly even encode waypoint aN+2’s location in waypoint aN. This will allow you to heavily penalize selection of potential paths that the autonomous program might develop that bring you into your next waypoint on the wrong heading. Identify common “problem paths” and incorporate path-error handling subroutines (a set of general cases for which a small set of bad-case solutions will suffice).

Alternatively, plotting optimal paths for every subpath of 3 (or whatever size you choose) waypoints in the correct overall field direction would allow you to do some simple error correcting, as you suggested.

I know a lot of the above is kind of vague, but as I said, I’ve not implemented this type of system for a FIRST robot before. However, I’ve done a bunch of graph theory/programming, and really this is just a graph-traversal problem with the constraints of a physical system and game rules thrown in.

I might be of more use if I knew how you were implementing the current system, but hopefully you’ll find some useful nugget in my post.

Feel free to PM or IM me with questions or if you want further input.

I think the biggest question is the method by which you are generating paths. Is it on the fly, does it plan and then execute a path, is the optimal path something you give it and then allow it to modify dynamically, etc…? A little more information on the methods you are using at the base of your waypoint navigation system would be very helpful.

//Dillon

I suppose you could say we use something like this…

However, we use robot-centric relative coordinates in a script. The highest (supervisory) level of our autonomous nav system looks at these goals, and has the ability to blend them into a continuous velocity profile (while controlling acceleration and position as well).
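A "continuous velocity profile" usually means something like a trapezoidal profile: ramp up at a fixed acceleration, cruise, then ramp down so you arrive at the goal near zero speed. Here is a minimal sketch with invented parameter names, not the team's actual (GM-award-winning) implementation:

```python
def profile_speed(dist_remaining, v_now, v_max, accel, dt):
    """Next commanded speed, limited by accel and by braking distance."""
    # fastest speed from which we can still stop in dist_remaining: v^2 = 2*a*d
    v_brake = (2.0 * accel * max(dist_remaining, 0.0)) ** 0.5
    target = min(v_max, v_brake)
    # slew toward the target by at most accel*dt per control step
    if target > v_now:
        return min(v_now + accel * dt, target)
    return max(v_now - accel * dt, target)
```

Calling this every control loop ramps speed up smoothly far from the goal and forces a deceleration ramp as the remaining distance shrinks, which is what makes the motion continuous instead of jerky.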

Hey it won the GM design award… </boast> :]

-q

p.s. No, we don’t use a gumstix. A 40MHz 8-bit microcontroller (robot controller) is plenty… as long as you know your binary/integer math tricks to steer clear of floating point…
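The kind of integer trick being referred to is typically fixed-point arithmetic: store quantities scaled by a power of two so an 8-bit controller can use one integer multiply and one shift instead of floating point. Illustrated in Python for clarity; on the robot controller this would be plain integer C.

```python
FRAC_BITS = 8                       # 8 fractional bits -> resolution of 1/256

def to_fixed(x):
    """Convert a real value to fixed-point representation."""
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    """Multiply two fixed-point values: one multiply plus one shift, no floats."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Convert back to a real value (for inspection only)."""
    return a / (1 << FRAC_BITS)
```

For example, 1.5 x 2.0 computed entirely in integers comes back as 3.0, with worst-case rounding error bounded by the 1/256 resolution.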

One comment - from your diagram it should not be “impossible” to get to the next waypoint if you come into that one wrong.

Your PID loop for steering should be adjusted such that above a certain angle, or when there is an obstacle ahead of you (a single sensor will easily tell you this) then the PID loop will increase the turning amount so that the robot tank steers to head directly to the next target. This is the way we have ours set up.
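A sketch of that setup (all constants invented): ordinary proportional steering for small errors, but above an angle threshold, or when a sensor sees an obstacle, command a full-rate turn so the robot tank-steers directly toward the target.

```python
KP = 2.0                # proportional steering gain (assumed)
AGGRESSIVE_DEG = 45.0   # above this error, stop blending and just turn
MAX_TURN = 127.0        # full turn command on a signed 8-bit output

def steer_command(angle_error_deg, obstacle_ahead=False):
    """Turn command from heading error; saturates hard in the aggressive region."""
    if obstacle_ahead or abs(angle_error_deg) > AGGRESSIVE_DEG:
        return MAX_TURN if angle_error_deg > 0 else -MAX_TURN
    return max(-MAX_TURN, min(MAX_TURN, KP * angle_error_deg))
```

Small errors get a gentle proportional correction; large errors or an obstacle trigger the maximum turn rate in the direction of the error.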

We ended up dispensing with the waypoint system. This year’s game doesn’t lend itself to it as much as previous years. We have a constantly updating position working off the walls and correlating with the encoders. The sole directive in the program is to continue moving forward when possible.

We use waypoints for our autonomous mode. We use a combination of gyro and wheel encoders, plus sonar “bumpers” in case we get too close to the wall.

What we do is have one waypoint that it aims for and a second waypoint it stops at.

For instance, if you are going for x=23 and y=20, the robot calculates the angle to the target and adjusts the wheels to aim towards it. Then, depending on the circumstances, we set a stop point at least 1.5 ft before the target waypoint. We do that so the steering angles don’t get weird as the robot closes in and cause it to veer off.

It’s important to have this, or else you will experience the problems the first post describes.
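The aim-point/stop-point idea above can be sketched like this (structure and names are mine): steer at the aim waypoint the whole way, but declare arrival once the robot is within the stop margin so the steering math never has to operate at point-blank range.

```python
import math

STOP_MARGIN_FT = 1.5   # stop this far short of the aim waypoint

def drive_step(x, y, wx, wy):
    """Return (arrived, bearing_deg) for the current aim waypoint (wx, wy)."""
    dist = math.hypot(wx - x, wy - y)
    bearing = math.degrees(math.atan2(wy - y, wx - x))
    return dist <= STOP_MARGIN_FT, bearing
```

Aiming at (23, 20) from the origin, the robot keeps driving at a fixed bearing; once within 1.5 ft of the aim point the arrival flag trips and the next waypoint takes over, so the bearing never swings wildly at close range.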

It was working well, except we started having weird problems at the start; we think it might be because the robot goes through driver mode before it goes into hybrid mode. Some CD discussions have suggested this as a possibility.

@Dillon
I understand putting heading data into the waypoints, but what could I do by putting information about n+2 as well as n+1 into waypoint n?

@Tom Line
Not impossible, but also not an optimal path. It used to veer off course when presented with that route, but I fixed that (I think I did…at least).

@Qbranch
Can you elaborate? Not sure what a “continuous velocity profile” is…

[quote=Loubear;723356]@Qbranch
Can you elaborate? Not sure what a “continuous velocity profile” is…[/quote]

The result of this approach to autonomous navigation is a continuous, smooth motion as opposed to the jerky motion characteristic of many PID systems. We use a multi-tier feedback system with lookahead to achieve this… you can see it in operation in the Midwest regional matches (best camera angles).

Also, it allows us to do interpolated arc motion for turns to enhance efficiency as opposed to the usual turn-go-turn type turns in many autonomous modes.

-q

What’s wrong with floating point? The processor handles it for us just fine (=

Looking at your plot of the robot, it looks like your heading is being compromised by an x-coordinate that is off. If you add a correction for your x-position, it should work just fine.

…and looking at Wildstang’s video, we are doing something similar, except using encoders as our sole positioning sensor.

I think you basically understand what I’m saying, we’re just using different terminology. In my explanation, N+1 would be the red waypoint, N would be the previous waypoint and N+2 would be the waypoint after the red one.

The way I envisioned using this is as follows:

A path from N->N+1 is going to have an optimal solution: a straight line, if distance is all you include in your optimization measure. In my assumption, the heading data you include for N+1 is the heading that points directly at it. The data you would include about N+2 would be the heading for the optimal path from N+1->N+2. If the robot knows its next optimal heading (N+1->N+2) and the location of N+2, it knows the straight line that is the optimal path from N+1->N+2. This is information you can use to preemptively adjust your angle, or to recover from a position such as you described above, where the robot veers off in the wrong direction.

At each point where the robot re-evaluates its location on the field, compare your current path with the optimal path. If your angle of approach to waypoint N+1 is sufficiently interior to the approach angle of the optimal path from N->N+1, correct towards the optimal approach angle gradually, so you keep moving towards the waypoint while trying to “reacquire” the optimal path. As you get closer to a waypoint, the correction should become more extreme. However, if you reach a certain distance from the waypoint (say, 3 ft) with an approach angle still sufficiently interior to the optimal one, assume you can no longer acquire the optimal path to waypoint N+1 and adjust your goal to one that can be achieved. In other words, consider N+1 visited and find a new intermediate waypoint along the path from N+1->N+2.

I suppose this could be said a bit more succinctly by saying “If your current angle of approach is sufficiently poor and you are within a certain tolerance of waypoint N+1, consider waypoint N+1 ‘visited’ and assign a new intermediate waypoint along the optimal path from N+1->N+2”. Basically you are re-assigning N+1’s position to (N+1)’ within a tolerance allowable by design such that your current path is CLOSE ENOUGH to the optimal path from N->(N+1)’ that your robot will not overcompensate and veer off path completely.

There are a number of heuristic approaches you could take to determining the position of a new waypoint dynamically, as well as some algorithmic approaches. I don’t know what would work best for your situation, or even whether the above approach will work with your physical system (it should be pretty easy to implement in software, however). Provided you plan your waypoints such that your tolerance will not cause the robot to plot a path through the lane divider (there are some ways you could avoid the planning issues, such as special rules for any two connected waypoints that represent a cross-field movement), you should be able to use a system such as I’ve described to reacquire optimal paths with minimal difficulty. I think this should solve the problem you observed.
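A minimal sketch of this reacquisition rule, with the tolerances invented for illustration: if the robot is within tolerance of waypoint N+1 but its approach angle is badly off the optimal N->N+1 heading, mark N+1 visited and target a new intermediate point a fraction of the way along the N+1 -> N+2 leg.

```python
import math

TOL_FT = 3.0                 # "close enough to N+1" distance (assumed)
MAX_APPROACH_ERR_DEG = 30.0  # approach-angle error we can still recover from

def next_target(pos, heading_deg, wp1, wp2, optimal_deg, frac=0.33):
    """Return the waypoint to aim at: wp1, or a reassigned point toward wp2."""
    dist = math.hypot(wp1[0] - pos[0], wp1[1] - pos[1])
    # smallest angular difference between current and optimal heading
    err = abs((heading_deg - optimal_deg + 180.0) % 360.0 - 180.0)
    if dist <= TOL_FT and err > MAX_APPROACH_ERR_DEG:
        # consider wp1 visited; aim partway along the wp1 -> wp2 leg
        return (wp1[0] + frac * (wp2[0] - wp1[0]),
                wp1[1] + frac * (wp2[1] - wp1[1]))
    return wp1
```

Far from the waypoint, or with a good approach angle, the target is unchanged; close in with a bad angle, the goal slides onto the next leg so the robot never fights for an unreachable approach.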

However, as I said, I’ve not implemented this before, so this is purely logical/algorithmic thought on my part and some significant consideration of behavior that should be observed using the system.

And, hey, even if you don’t find this useful it provided an excuse for me to think more fully through the dynamics and implementation challenges of an idea I’ve had sitting around for a while.

//Dillon

How do you go about creating an interpolation algorithm for turning/moving to the next waypoint? The system I implemented produces jerky, PID-style movement, then follows the next indexed path.

-Kiet

Loubear - you may want to add a simple if/then into your code. This is how we control distance to the wall on our robot.

If the angle to the next waypoint is greater than it should be (i.e., you’ll turn into the wall like in your picture), then:

current_steering_angle = current_steering_angle - 1;

if (current_steering_angle < (optimum - 5))
    current_steering_angle = optimum - 5;
else if (current_steering_angle > (optimum + 5))
    current_steering_angle = optimum + 5;

This will continually adjust your robot heading so it angles more towards the “wanted” or “intended” path to the next waypoint. Adjust the range (+5 and -5) so you end up on the desired side no matter where you start.

We use this but actually look at distance from the wall and adjust our heading to bring us to the correct distance (30 inches in our case).

Edit - I was just thinking that if you look at the desired angle to the waypoint beyond the one you’re aiming at, you’ll know which side of the current straight line to the waypoint you want to be on. If you know you’ll need to turn left at the waypoint, then you should try to be to the right of it, etc. You could add that if/then to what I’ve put in above.

We don’t go that far. We simply floor our robot all the time and adjust the motor values with a PID only when we want to turn.

You should be telling the robot to correct for position over angle when the robot is far away from the waypoint. As the robot gets closer to its desired position, angular correction should take precedence over positional correction.

Your angular correction should be a PID loop, except your P constant is actually a variable: a function of the distance from the waypoint.
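That distance-scheduled gain can be as simple as a linear interpolation (constants here are invented for illustration): far from the waypoint the angular P gain is small so position correction dominates, and it ramps up as the distance shrinks so heading correction takes over.

```python
KP_NEAR = 4.0       # angular P gain right at the waypoint (assumed)
KP_FAR = 0.5        # angular P gain when far away (assumed)
BLEND_DIST = 60.0   # inches over which the gain schedules

def angular_p(distance):
    """Interpolate the angular P gain between KP_FAR and KP_NEAR."""
    w = min(max(distance / BLEND_DIST, 0.0), 1.0)   # 0 when near, 1 when far
    return KP_NEAR * (1.0 - w) + KP_FAR * w
```

The steering loop then uses `angular_p(dist) * heading_error` in place of a fixed P term, so the same code is gentle at range and aggressive on final approach.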