Pin Point Robot's Position (GPS)

I just want to know: how can I create a system where my robot can keep track of its position (x, y), so that if, for example, the target is at (3, 4), all I have to say is "go to (3, 4)"? Is it possible to do something like this with the sensors available? If so, can you please give a detailed procedure? Thank you.

Yes, it’s possible and can be done. Team 341 did it last year.

You can use encoder counts to measure distance travelled, and a gyro reading to determine heading.

Your X/Y position would be incremented each cycle by distance_travelled_this_cycle * sin(heading) and distance_travelled_this_cycle * cos(heading) (or the reverse, depending on how you define X and Y). Beware that sin() and cos() don't exist in the default environment…you have to write them or find someone who did (hint: whitepapers on Delphi).

To go to a position X/Y, simply compare each component with your current position, determine what heading to turn to and how far to drive, and do it.

www.kevin.org/frc has good examples.

Does anyone have any other ideas that they have tried in previous years and that worked successfully?

Has anyone ever used something with 4 or maybe even 6 distance sensors (probably ultrasonic) to find the distances from the walls and determine position that way?

Similar to the encoder/gyro idea, you could keep track of things using an accelerometer and a gyro.

Our team has never really gotten positioning working in the past. It doesn't help when you spend too much time on the camera the previous year.

Actually, they do exist in the mcc18 libraries nowadays. They're not ideal if you're running low on time, though.

My confusion is: How do you calculate distance_traveled_this_cycle if the wheels are turning at different rates?
Even worse, how do you do it with center-lowered 6 wheel drive?

Could you not simply measure the encoder rates on the lowered center wheels? Since those will always be in contact with the floor, a read off of them should be fairly accurate.

No idea about an answer to the different turning rates…not planning on using a complex positioning system this year.

Every time the example navigation program loops, it creates a vector in component form expressing the position of the robot relative to its position in the previous loop. Therefore, your position on the field at any particular point in time is the summation of all the relative position vectors created up to that point. Most teams just keep a running total of the X and Y components and throw away the vector data. Keep in mind, it is perfectly possible to keep all of those relative position vectors, then create a graph using parametric equations (with X and Y as functions of time) to study later how the robot moved during the match.

Back to the question at hand… To find the magnitude of distance traveled, simply average the left and right side encoder counts. This holds true because this code is really tracking the motion of the center of mass of the robot, not the entire robot. As an example, think of 2 people running side by side, each holding the end of a 10 ft long stick. In the middle of the stick is a big red dot. When the 2 people run at the same speed, the red dot stays directly in between them. When one starts to slow down, the red dot still stays exactly in between the 2 runners, but the angle of the stick has now changed. The red dot signifies the center of the robot, and shows how its motion is affected by linear forces on either side of it.

Here is an example of a basic position calculator.

```
//Using Accelerometer....

//position is the total linear distance moved since start up
//(velocity and position here are in raw sensor units per loop;
// scale by the loop period to get real-world units)
velocity += GetAcceleration();
position += velocity;
```
```
//Using Encoders...

//position is the total linear distance moved since start up
//(averaging both sides tracks the robot's center; this assumes
// both encoders count positive when driving forward)

position = ( GetLeftEncoderCount() + GetRightEncoderCount() ) / 2 ;
```

Now feed your total travelled distance into this:

```
delta = position - last_position; //find distance travelled since last loop
last_position = position;

//accumulate X components of relative position vectors
x += delta * sin( heading );

//accumulate Y components of relative position vectors
y += delta * cos( heading );
```

Just make sure something like that loops very fast. (The IFI default code loop should be fast enough.)

So, I’ve told you how to find where you are. The ball’s in your court to find where you want to be and how to get there.

Good Luck!

EDIT: If none of this is making sense to you, check this out!

There are many ways to keep track of your position on the field.

The simple way is to measure how fast your robot moves and turns (feet per second, and degrees per second for turning) and calculate where your robot is based on where you started, keeping a running summation of all the commands (PWM outputs) you have sent to your motors and the amount of time that has passed (software loops).

This is easy, but it’s not very accurate (an engineering tradeoff). Several teams used this method in the ‘Stack Attack’ game to hit the center of the wall as fast as possible in autonomous mode. In fact, the fastest robots used this method (i.e.: go forward for 1 second, turn left for 0.5 seconds, go forward for 3 seconds…). If you are trying to hit a wall of boxes 12 feet wide, you don’t need millimeter accuracy.
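That timed sequence can be sketched as a loop counter driving a simple state machine. The phase names and LOOPS_PER_SECOND are assumptions for illustration; the IFI default loop runs roughly every 26.2 ms, i.e. about 38 loops per second:

```c
#define LOOPS_PER_SECOND 38  /* ~1 loop / 26.2 ms */

typedef enum { FORWARD_1, TURN_LEFT, FORWARD_2, DONE } auton_phase_t;

/* Which step of the canned routine we are in, given how many
   loops have elapsed since autonomous started. */
auton_phase_t auton_phase(unsigned int loops)
{
    if (loops < LOOPS_PER_SECOND)                       /* 1.0 s forward */
        return FORWARD_1;
    if (loops < LOOPS_PER_SECOND + LOOPS_PER_SECOND / 2)/* 0.5 s left    */
        return TURN_LEFT;
    if (loops < LOOPS_PER_SECOND + LOOPS_PER_SECOND / 2
              + 3 * LOOPS_PER_SECOND)                   /* 3.0 s forward */
        return FORWARD_2;
    return DONE;
}
```

Each pass through the main loop would then set the PWM outputs according to the current phase.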

Putting something on one of your wheels that counts revolutions is a more accurate way to measure distance travelled, as long as that wheel stays in contact with the floor and does not spin. Measuring the number of turns of wheels on both sides can be used to measure which way you are pointing.
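The "measure both sides to find which way you are pointing" idea above can be sketched as follows. TRACK_WIDTH_TICKS (the distance between the two wheels, expressed in encoder ticks) is a placeholder value, and the formula assumes neither wheel slips:

```c
#define TRACK_WIDTH_TICKS 1000.0  /* wheel-to-wheel spacing, in ticks */

/* Heading change in radians since the counts were zeroed;
   positive means the robot has turned toward its left side. */
double heading_from_encoders(long left_counts, long right_counts)
{
    return (double)(right_counts - left_counts) / TRACK_WIDTH_TICKS;
}
```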

You could also use the camera sensor to look for the beacons on the field and use them to triangulate your position, especially if you can see both of them and measure your distance to one or both.

There are magnetic sensors you can get that allow you to measure the direction of the Earth’s magnetic field. You have to place them on your bot where there is no steel or iron nearby, but they work pretty well. Then you always know which way your bot is pointing, and these can be accurate to 1°.

Accelerometers can be integrated to measure velocity (V = A·T), and velocity can be integrated to measure distance (D = V·T). This will work no matter what your wheels are doing (slipping, spinning, being pushed backwards), but it’s more complex and needs to be calibrated for the best accuracy.
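The V = A·T and D = V·T integrations can be written out with an explicit loop period. DT and the unit labels here are assumptions for illustration; a real accelerometer reading would also need its bias calibrated out, as noted above:

```c
#define DT 0.0262  /* seconds per loop (IFI default loop rate) */

static double velocity_ft_s = 0.0;  /* running velocity, ft/s */
static double distance_ft   = 0.0;  /* running distance, feet */

/* Call once per loop with the current acceleration in ft/s^2. */
void integrate_accel(double accel_ft_s2)
{
    velocity_ft_s += accel_ft_s2 * DT;   /* V accumulates A * T */
    distance_ft   += velocity_ft_s * DT; /* D accumulates V * T */
}
```

Because errors accumulate twice (once per integration), even a small uncalibrated bias grows quickly, which is why this method needs careful calibration.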

Another way to figure out position is to sense the walls, railings, and other objects on the playing field. If you keep a running estimate of where you think the robot is, you should know when to expect to run into a field boundary (as opposed to another robot), and you can use that information to increase the accuracy of your position.
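One minimal version of that correction: when a bump switch closes against a wall whose location is known, snap the estimate back to that wall. FIELD_LENGTH_FT and the switch argument are assumptions for illustration:

```c
#define FIELD_LENGTH_FT 54.0  /* distance to the far wall along Y, feet */

/* Returns the corrected Y estimate. */
double correct_y(double estimated_y, int front_bumper_pressed)
{
    if (front_bumper_pressed)
        return FIELD_LENGTH_FT; /* touching the far wall: Y is known exactly */
    return estimated_y;         /* otherwise keep the running estimate */
}
```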

You can get creative with the boundary sensors. Simple: use contact switches that close when you bump into something. More complex: use an ultrasonic range finder to measure the distance to a boundary, or rotate it like radar to see what’s all around you.

My favorite sensor is the yaw rate sensor. It’s a solid-state device that tells you how fast your robot is turning. This is very useful for closed-loop steering control (another subject), but you can also integrate the yaw rate to get a compass (heading) reading.
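Integrating the yaw rate into a heading is one line per loop. DT and the units are assumptions; real gyros also drift over time, so the heading should be re-zeroed whenever the robot is at a known orientation:

```c
#define DT 0.0262  /* seconds per loop (IFI default loop rate) */

static double heading_deg = 0.0;  /* accumulated heading, degrees */

/* Call once per loop with the current yaw rate in degrees/second. */
void integrate_yaw(double yaw_rate_deg_s)
{
    heading_deg += yaw_rate_deg_s * DT;
}
```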

Unfortunately, the best navigation system ever devised, GPS, has two problems: 1. the accuracy is excellent for sailing across a lake or ocean, but +/- 2 or 3 feet is not that good for playing this game on the field, and 2. the signals are blocked by the roof of the arena. :c(