View Full Version : Plotting Location w/ Accelerometer Project
Invictus3593
28-09-2013, 09:21
Invictus programmer here!
The other day I was doing some research on the Kinect sensor and found out that it has a 3-axis accelerometer in it. I'm thinking that it would be a neat project we could discuss/accomplish in this thread if we were to use the accelerometer data to calculate the "x" movement on the field and "z" movement on the field.
Since we don't really need "y" values, we can toss them out. We can use a time interval of about 100 ms to calculate distance traveled with this equation: D = ½at², where a is the acceleration and t is the time interval. After that, the only problem would be displaying a small diagram on the LV dashboard that has an aerial view of the field and a scale-model robot that changes position every 100 ms on the field diagram, based on the change in distance of the "x" and "z" values.
Anybody up to the challenge?
Invictus programmer here!
The other day I was doing some research on the Kinect sensor and found out that it has a 3-axis accelerometer in it. I'm thinking that it would be a neat project we could discuss/accomplish in this thread if we were to use the accelerometer data to calculate the "x" movement on the field and "z" movement on the field.
Since we don't really need "y" values, we can toss them out. We can use a time interval of about 100 ms to calculate distance traveled with this equation: D = ½at², where a is the acceleration and t is the time interval. After that, the only problem would be displaying a small diagram on the LV dashboard that has an aerial view of the field and a scale-model robot that changes position every 100 ms on the field diagram, based on the change in distance of the "x" and "z" values.
Anybody up to the challenge?
Great idea,
but how is the kinect related to this?
Can't it be done with the KOP accelerometer too?
BTW in what units is the output of the accelerometer? m/s^2, etc..
We can use a time interval of about 100ms to calculate distance traveled with this equation: D=1/2at^2
You left out the second term...
D = ½at² + v₀t
... and that formula is valid only if "a" is constant over the time interval.
Hate to be a party pooper, but... Due to the double integration, tiny errors in "a" will rapidly accumulate so that the computed position will quickly diverge from the true position.
Trapezoidal integration will help somewhat: Given t, x, v, and a at some point in time, and anew at some later point in time tnew, proceed as follows:
dt = tnew - t;
vnew = v + dt*(anew+a)/2;
xnew = x + dt*(vnew+v)/2;
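For anyone who wants to try the update equations above outside LabVIEW, they drop straight into a few lines of code. A minimal Python sketch (function name and the constant-acceleration sanity check are mine):

```python
def trapezoid_step(t, x, v, a, t_new, a_new):
    """One step of the trapezoidal integration above: accel -> velocity -> position."""
    dt = t_new - t
    v_new = v + dt * (a_new + a) / 2.0
    x_new = x + dt * (v_new + v) / 2.0
    return x_new, v_new

# Sanity check: constant 1 m/s^2 for 1 s in 10 ms steps.
# Kinematics says x = ½at² = 0.5 m and v = at = 1 m/s.
t, x, v = 0.0, 0.0, 0.0
for i in range(1, 101):
    t_new = i * 0.01
    x, v = trapezoid_step(t, x, v, 1.0, t_new, 1.0)
    t = t_new
print(round(x, 6), round(v, 6))  # 0.5 1.0 (trapezoid is exact for constant accel)
```

With real sensor data, a_new would come from each fresh accelerometer sample instead of a constant.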
Invictus3593
28-09-2013, 18:53
Great idea,
but how is the kinect related to this?
Can't it be done with the KOP accelerometer too?
BTW in what units is the output of the accelerometer? m/s^2, etc..
We use the Kinect for vision processing due to its depth capabilities. The KOP accelerometer should work too! You can read about what data the Kinect gives here (http://msdn.microsoft.com/en-us/library/jj663790.aspx).
You left out the second term...
D = ½at² + v₀t
... and that formula is valid only if "a" is constant over the time interval.
Hate to be a party pooper, but... Due to the double integration, tiny errors in "a" will rapidly accumulate so that the computed position will quickly diverge from the true position.
Trapezoidal integration will help somewhat: Given t, x, v, and a at some point in time, and anew at some later point in time tnew, proceed as follows:
dt = tnew - t;
vnew = v + dt*(anew+a)/2;
xnew = x + dt*(vnew+v)/2;
So, theoretically, could you accurately plot a robot's location on the field using the equations you gave? And what other variables would I need to get as inputs besides time and acceleration?
What if the time variable was a very short amount of time, like 50 ms, and you just took the average acceleration over that time period?
What if the time variable was a very short amount of time, like 50 ms,
Shorter integration times are generally better. You will still see some drift due to the double integration and the fact that the measured accel has error.
...and you just took the average acceleration over that time period?
That's what trapezoidal integration tries to do.
Play around with it in a spreadsheet or Maxima or Octave or SciLab and see for yourself.
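As a concrete example of the drift, here is a tiny Python experiment: a robot that is actually sitting still, but whose accelerometer reports a small constant bias, appears to travel well over a hundred meters in two minutes. (The 0.02 m/s² bias is an assumed, plausible offset, not a measured KOP spec.)

```python
# Stationary robot, but the accelerometer reports a tiny constant bias.
# Double integration turns that bias into position error growing as ½·bias·t².
bias = 0.02          # m/s^2 -- assumed small sensor offset
dt = 0.05            # 50 ms time step
x = v = 0.0
for _ in range(int(120 / dt)):       # two minutes of sitting still
    v_new = v + dt * bias            # trapezoid reduces to this for constant accel
    x += dt * (v_new + v) / 2.0
    v = v_new
print(round(x, 2))   # ~144 m of phantom travel: ½ * 0.02 * 120²
```

Change `bias` to see how even a much smaller offset still swamps a 16 m field within a match.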
Greg McKaskle
29-09-2013, 07:32
If you want to play with this, it may be worth checking to see if your laptop has an accelerometer in it. I know that my last Mac laptop did. It was used to park the hard drive before it landed during a fall.
If so, you can read that accelerometer in LV and play a bit.
I looked on the Apple App Store and there are a number of free apps that let you see the accelerometer data and their interpretation of it.
As for using this on the robot, the key cases to think about are the ones where the floor is uneven. Sitting still, the robot and its accelerometer are tilted. Gravity's 1g is no longer in the pure Z direction. It is imparting a force on the other axes. If your code doesn't calibrate and ignore this, it looks like the robot is constantly accelerating in the uphill direction. If you do calibrate it for that tilt, then drive the robot a foot and stop it again, the floor is likely tilted a new direction. Once again, sitting still, you will accelerate uphill.
It is an interesting problem, and accelerometers are certainly useful on the robot, but integrating them to identify speed or location without isolating them from the force of gravity is quite hard.
Greg McKaskle
..you can read that accelerometer in LV and play a bit.
Yes, thanks Greg, I should have included LV in my earlier post.
Invictus3593
30-09-2013, 23:50
If you want to play with this, it may be worth checking to see if your laptop has an accelerometer in it. I know that my last Mac laptop did. It was used to park the hard drive before it landed during a fall.
If so, you can read that accelerometer in LV and play a bit.
I looked on the Apple App Store and there are a number of free apps that let you see the accelerometer data and their interpretation of it.
As for using this on the robot, the key cases to think about are the ones where the floor is uneven. Sitting still, the robot and its accelerometer are tilted. Gravity's 1g is no longer in the pure Z direction. It is imparting a force on the other axes. If your code doesn't calibrate and ignore this, it looks like the robot is constantly accelerating in the uphill direction. If you do calibrate it for that tilt, then drive the robot a foot and stop it again, the floor is likely tilted a new direction. Once again, sitting still, you will accelerate uphill.
It is an interesting problem, and accelerometers are certainly useful on the robot, but integrating them to identify speed or location without isolating them from the force of gravity is quite hard.
Do you have any examples of how to read that built-in accelerometer in LV? We're not using any Apple products, but it may be worth it to just check and see if we have one.
As for the tilt, if we had a gyro on there and re-calculated the "down" direction whenever we weren't level, would that help fix the tilt problem? I mean, this year the floor was totally level, but I guess it would go nuts if we got tipped or something.
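To put numbers on the tilt problem described above: a tilt of θ leaks g·sin θ into the floor axes, and the double integration turns even a tiny leak into meters of phantom travel. A back-of-envelope sketch, with an assumed one-degree tilt:

```python
import math

g = 9.81                                        # m/s^2
tilt_deg = 1.0                                  # assumed: a barely visible floor tilt
a_leak = g * math.sin(math.radians(tilt_deg))   # gravity leaking into a floor axis
t = 5.0                                         # seconds spent sitting still
phantom = 0.5 * a_leak * t ** 2                 # phantom travel from double integration
print(round(a_leak, 3), round(phantom, 2))      # ~0.171 m/s^2 -> ~2.14 m in 5 s
```

So even a one-degree calibration error, left uncorrected for five seconds, "moves" the computed robot a couple of meters.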
Invictus3593
01-10-2013, 14:29
Also, I talked to my Physics teacher and he said that using this equation for each axis would work:
d = ½at² + v₀t
But we would have to get "v₀" from the equation:
v_f = v₀ + at
Also, we talked about the case where the robot is turning, which puts acceleration on the axis tangent to the curve, and he used these few equations:
a_c = v_t²/r
r = v_t²/a_c
ω_f = ω₀ + αt
rα = a
I wasn't able to make sense of it all, but if someone can explain, this may be able to, somewhat, work!
Aaron.Graeve
01-10-2013, 14:51
The use of a_c = v_t²/r and the other equations only applies if the robot is moving in a circle. I do not know the skill of your drivers, but most drivers I have seen do not drive in circles. The v_t refers to the velocity of the object tangential to the acceleration.
The ω_f = ω₀ + αt is similar to v_f = v₀ + at. It is the rotational velocity of a body under constant rotational acceleration. α is the rotational acceleration in radians/s².
In circular motion, velocity, acceleration, and position can be related to their rotational analogues by dividing by the radius.
Another interesting idea (that may be completely wrong) is to use a gyro with the forward position to create a set of vectors that might be used to find position in a polar system.
The code for trapezoidally integrating an acceleration to get distance was given in this thread in an earlier post (http://www.chiefdelphi.com/forums/showpost.php?p=1293556&postcount=3).
If your acceleration is in a plane (the plane of the floor), use the same concept to get your position in the plane:
Given t, x, y, vx, vy, ax, and ay at some point in time, and axnew aynew at some later point in time tnew*,
compute vxnew vynew xnew and ynew as follows:
dt = tnew - t;
vxnew = vx + dt*(axnew+ax)/2;
xnew = x + dt*(vxnew+vx)/2;
vynew = vy + dt*(aynew+ay)/2;
ynew = y + dt*(vynew+vy)/2;
... where x,y is the location of the accelerometer in the fixed plane of the floor. Note that you will have to convert your accelerometer signal from the vehicle reference frame to the fixed x,y reference frame of the floor, using the gyro to do the coordinate rotation.
As stated earlier, the errors will accumulate quickly and the computed position will diverge from the true position.
*Just to be absolutely clear for those who may be new to this: tnew is not one giant step from t. It is a very small integration time step (say 20ms) later than t. The repetition of this calculation over time is known as numerical integration.
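The planar update, including the body-to-field coordinate rotation mentioned above, can be sketched in Python (function names and the 90-degree-heading test case are mine, not from the post):

```python
import math

def body_to_field(ax_body, ay_body, heading_rad):
    """Rotate accel from the robot (body) frame into the fixed field frame,
    using the gyro heading. Standard 2-D rotation."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return (c * ax_body - s * ay_body,
            s * ax_body + c * ay_body)

def step_2d(state, ax_new, ay_new, dt):
    """One trapezoidal step of the planar integration described above.
    state = (x, y, vx, vy, ax, ay), all in the field frame."""
    x, y, vx, vy, ax, ay = state
    vx_new = vx + dt * (ax_new + ax) / 2.0
    vy_new = vy + dt * (ay_new + ay) / 2.0
    x_new = x + dt * (vx_new + vx) / 2.0
    y_new = y + dt * (vy_new + vy) / 2.0
    return (x_new, y_new, vx_new, vy_new, ax_new, ay_new)

# Robot facing 90 degrees: 1 m/s^2 "forward" in the body frame is +y on the field.
ax_f, ay_f = body_to_field(1.0, 0.0, math.pi / 2)
state = (0.0, 0.0, 0.0, 0.0, ax_f, ay_f)
for _ in range(100):                           # 1 s of motion in 10 ms steps
    state = step_2d(state, ax_f, ay_f, 0.01)
print(round(state[0], 3), round(state[1], 3))  # x stays ~0, y reaches ~0.5 m
```

In real use the heading fed to `body_to_field` would be re-read from the gyro every cycle, not held constant.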
Invictus3593
01-10-2013, 22:41
The use of a_c = v_t²/r and the other equations only applies if the robot is moving in a circle. I do not know the skill of your drivers, but most drivers I have seen do not drive in circles. The v_t refers to the velocity of the object tangential to the acceleration.
The ω_f = ω₀ + αt is similar to v_f = v₀ + at. It is the rotational velocity of a body under constant rotational acceleration. α is the rotational acceleration in radians/s².
In circular motion, velocity, acceleration, and position can be related to their rotational analogues by dividing by the radius.
Another interesting idea (that may be completely wrong) is to use a gyro with the forward position to create a set of vectors that might be used to find position in a polar system.
Haha, he gave the a_c = v_t²/r equation to use in the case that we were turning, which would throw off our real location; I guarantee you I don't drive in circles! If we updated the rotational acceleration every 10 ms and took the average acceleration for that period, do you think this would be a short enough interval to plot location semi-accurately?
Given t, x, y, vx, vy, ax, and ay at some point in time, and axnew aynew at some later point in time tnew*,
compute vxnew vynew xnew and ynew as follows:
dt = tnew - t;
vxnew = vx + dt*(axnew+ax)/2;
xnew = x + dt*(vxnew+vx)/2;
vynew = vy + dt*(aynew+ay)/2;
ynew = y + dt*(vynew+vy)/2;
...the errors will accumulate quickly and the computed position will diverge from the true position.
I apologize for not acknowledging, my friend; I appreciate your input!
Using trapezoidal integration, would that eliminate the errors? Or is there another way to do it without the problems you describe? I've read that robotic probes that go into caves and such use this kind of plotting system, an accelerometer and a gyro..
Aaron.Graeve
01-10-2013, 23:37
Haha, he gave the a_c = v_t²/r equation to use in the case that we were turning, which would throw off our real location; I guarantee you I don't drive in circles! If we updated the rotational acceleration every 10 ms and took the average acceleration for that period, do you think this would be a short enough interval to plot location semi-accurately? ... I suspect 10 ms will be a short enough time, but that depends on a few hardware-specific circumstances (gyro float and accelerometer responsiveness come to mind). Your team's driving style may also play a role in customizing the algorithm. I am interested to see how this will turn out. Also, have you considered encoders on your drivetrain? I hazard a guess that 2 encoders and a gyro can produce a position close enough for your needs.
Invictus3593
02-10-2013, 01:17
I suspect 10 ms will be a short enough time, but that depends on a few hardware-specific circumstances (gyro float and accelerometer responsiveness come to mind). Your team's driving style may also play a role in customizing the algorithm. I am interested to see how this will turn out. Also, have you considered encoders on your drivetrain? I hazard a guess that 2 encoders and a gyro can produce a position close enough for your needs.
I was thinking the same thing: if the accelerometer proves to be unreliable (which is what it's looking to be), encoders plus the wheel diameter (to convert rotations to distance) can give an accurate representation of how far the robot has moved, theoretically, and the gyro can give orientation.
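The encoder-plus-gyro idea can be sketched in a few lines: each cycle, convert encoder ticks to distance, average the two sides, and project that distance along the gyro heading. This is a simplification that ignores wheel slip; names and the test numbers are illustrative:

```python
import math

def odometry_update(x, y, d_left, d_right, heading_rad):
    """Dead-reckoning update from drive encoders plus a gyro heading.
    d_left / d_right are distances traveled since the last update
    (encoder ticks * wheel circumference / ticks per revolution);
    heading_rad comes from the gyro."""
    d = (d_left + d_right) / 2.0          # distance moved by the robot center
    return x + d * math.cos(heading_rad), y + d * math.sin(heading_rad)

# Hypothetical drive: 2 m straight at heading 0, then 1 m at 90 degrees.
x, y = 0.0, 0.0
for _ in range(100):
    x, y = odometry_update(x, y, 0.02, 0.02, 0.0)
for _ in range(100):
    x, y = odometry_update(x, y, 0.01, 0.01, math.pi / 2)
print(round(x, 3), round(y, 3))  # ends at (2.0, 1.0)
```

Because only single integration is involved (distance, not acceleration), errors grow linearly with distance traveled rather than quadratically with time.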
Greg McKaskle
02-10-2013, 07:04
I was looking for a decent app on the iPhone that would work to experiment with. The closest I could find is one called Vibration. I was using it to measure how the cell phone buzzed and compare that to an external accelerometer reading.
Anyway, the app will show the three axes and it calibrates to subtract out gravity at the initial orientation. If you leave the phone sitting still and run a five second recording, you should get relatively flat lines and that's expected. The integrated area should be zero.
If you run the app and move the phone to the left and right, you'll see similar cancellation, but it probably won't quite zero out. Next, during a sample recording, walk from your chair to the front door. Each step looks like a heartbeat on each axis. And yeah, they sort of cancel out, but where is the part of the acceleration signal that tells me how far I walked? It is a tiny bump at the beginning of that heartbeat signal.
Then run a sample and simply tilt the device a bit. You'll see that a five or ten degree tilt offsets the line quite a bit. And worse, it stays there for the entire sample. The integration of the tilt is huge.
Anyway, if you can find the app, or something similar, it is helpful in understanding why IMUs are hard. After all, if it was easy, the phone or Garmin would do this instead of or in addition to GPS.
Greg McKaskle
Using trapezoidal integration, would that eliminate the errors?
Not with the accelerometer and gyro that come in the KoP. They're not accurate enough. The problem is the double integration (to get from accel to position). The small errors in the accel and gyro signal get integrated. The errors accumulate. After a short period of time, your computed position drifts away from your true position. The gravity problem Greg mentioned also contributes to errors in the accel signal in the plane of interest (the floor).
Haha, he gave the a_c = v_t²/r equation to use in the case that we were turning, which would throw off our real location; I guarantee you I don't drive in circles!
The method I described applies to 2D motion in the plane of the floor, so it applies to turning (be it circular or not) as well as linear motion. The a_c = v_t²/r is not required.
If we updated the rotational acceleration every 10 ms and took the average acceleration for that period, do you think this would be a short enough interval to plot location semi-accurately?
You can answer this question by simulating a simple example in Excel or LabVIEW or Maxima or Octave or SciLab or any CAS tool or programming language of your choice. Assume you have a vehicle traveling in a perfect circle of radius R at a constant speed S. Then you know what the true ax and ay components of the acceleration are at any point in time. Do the numerical integration using those perfectly correct numbers. You should get almost perfect circular motion. Now introduce a small error into those ax and ay numbers, to reflect errors expected in the KoP gyro and accelerometer, and do the integration. You'll probably get something like a spiral instead of a circle.
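Here is one way to run that circle experiment in plain Python instead of a spreadsheet. The radius, speed, and noise level are assumed placeholders, not real KoP sensor specs:

```python
import math, random

def simulate(noise_std, seconds=10.0, dt=0.01, R=2.0, S=1.0):
    """Trapezoidally integrate the acceleration of a vehicle circling at
    radius R (m) and speed S (m/s), adding Gaussian noise of std noise_std
    to each accel sample. Returns the distance between the computed and
    true endpoints. Sketch of the experiment suggested above."""
    random.seed(1)                       # deterministic "sensor noise"
    w = S / R                            # angular rate
    def true_accel(t):                   # centripetal accel, field frame
        return (-R * w * w * math.cos(w * t), -R * w * w * math.sin(w * t))
    x, y = R, 0.0                        # start on the circle
    vx, vy = 0.0, S                      # tangential velocity
    ax, ay = true_accel(0.0)
    for i in range(1, int(seconds / dt) + 1):
        tax, tay = true_accel(i * dt)
        axn = tax + random.gauss(0.0, noise_std)
        ayn = tay + random.gauss(0.0, noise_std)
        vxn = vx + dt * (axn + ax) / 2
        vyn = vy + dt * (ayn + ay) / 2
        x += dt * (vxn + vx) / 2
        y += dt * (vyn + vy) / 2
        vx, vy, ax, ay = vxn, vyn, axn, ayn
    tx, ty = R * math.cos(w * seconds), R * math.sin(w * seconds)
    return math.hypot(x - tx, y - ty)

print(simulate(0.0) < 0.01)            # perfect accel: endpoint error is tiny
print(simulate(0.05) > simulate(0.0))  # noisy accel: the path spirals away
```

Plotting the x, y trace for the noisy case shows the spiral Ether describes.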
I've read that with robotic probes that go into caves and such, they use this kind of plotting system, an accelerometer and a gyro
I don't know about the probes you are referring to, but if they use only an accelerometer and gyro to compute position they're probably much more expensive (and accurate) than those in the KoP.
Or is there anther way to do it without the problems you describe?
Placed properly, 3 unpowered omni follower wheels, each with an encoder, could theoretically be used to compute both position and rotational orientation -- without the need for a gyro or accelerometer. That would have a different set of problems.
Invictus3593
02-10-2013, 09:53
I was looking for a decent app on the iPhone that would work to experiment with. The closest I could find is one called Vibration. I was using it to measure how the cell phone buzzed and compare that to an external accelerometer reading.
Anyway, the app will show the three axes and it calibrates to subtract out gravity at the initial orientation. If you leave the phone sitting still and run a five second recording, you should get relatively flat lines and that's expected. The integrated area should be zero.
If you run the app and move the phone to the left and right, you'll see similar cancellation, but it probably won't quite zero out. Next, during a sample recording, walk from your chair to the front door. Each step looks like a heartbeat on each axis. And yeah, they sort of cancel out, but where is the part of the acceleration signal that tells me how far I walked? It is a tiny bump at the beginning of that heartbeat signal.
Then run a sample and simply tilt the device a bit. You'll see that a five or ten degree tilt offsets the line quite a bit. And worse, it stays there for the entire sample. The integration of the tilt is huge.
Anyway, if you can find the app, or something similar, it is helpful in understanding why IMUs are hard. After all, if it was easy, the phone or Garmin would do this instead of or in addition to GPS.
Greg McKaskle
Dang, I just tried it out. So I guess the accelerometer route is a dead end.
What do you think of the encoder idea? I mean, you don't have to worry about gravity or forces or anything with them, just rotations of the wheels.
Dang, I just tried it out. So I guess the accelerometer route is a dead end.
What do you think of the encoder idea? I mean, you don't have to worry about gravity or forces or anything with them, just rotations of the wheels.
If you're using a properly-programmed true swerve drive*, you can compute position and orientation of the vehicle from the encoders on the wheels. But this introduces a different set of errors due to the dynamic response of the steering and wheel speeds in response to rapid changes in command. And all it takes to throw the computation off is one good bump that changes the orientation of the vehicle.
If you're using a skid-steer drivetrain, the relationship between the powered-wheel encoder readings and the actual vehicle movement during turns gets muddied considerably in ways that may not be easily predictable. Probably not a good solution.
*ie independent steering and drive for each wheel, with properly programmed steering angle and wheel speed for each wheel.
Mike Bortfeldt
02-10-2013, 11:06
While not accelerometer related, there have been a number of posts over the years describing issues with the "drift" associated with gyros over time and the error this causes when calculating field position. One method I've experimented with that seems to work well to compensate/eliminate most of this error utilizes two gyros. A high rate gyro for most turns (250 to 500 deg/sec), and a low rate gyro (30 deg/sec) for higher accuracy in slow curves and determining when the robot is stationary (for zero compensation). With the higher resolution of the low rate gyro, it is much easier to determine when you can automatically adjust the zero point. This method does break down when the robot is in continuous motion, but typically there are periods of time within a match (and certainly before the match starts), where the gyro can update its zero point. During bench testing, I was able to achieve a heading drift of under 2 degrees per hour when stationary. The heading calculation algorithm would automatically switch between the gyros at a 20 deg/sec rate (67% of full scale of the low rate gyro).
Mike
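Mike's two-gyro switching logic might look something like this in code. The 20 deg/s threshold is taken from his description (67% of the low-rate gyro's 30 deg/s full scale); the function itself is my sketch, not his actual code:

```python
def blended_rate(high_gyro_dps, low_gyro_dps, switch_at_dps=20.0):
    """Choose which gyro reading to trust for this sample. Below the switch
    threshold, use the higher-resolution low-rate gyro; at or above it,
    where the low-rate gyro is nearing saturation, use the high-rate gyro.
    Function name and structure are illustrative."""
    if abs(low_gyro_dps) < switch_at_dps:
        return low_gyro_dps
    return high_gyro_dps

print(blended_rate(18.7, 18.5))   # slow curve: low-rate gyro wins -> 18.5
print(blended_rate(95.0, 29.9))   # fast turn: low gyro near full scale -> 95.0
```

The other half of the scheme, re-zeroing when the low-rate gyro shows the robot is stationary, would wrap this with a bias estimate that is subtracted from both readings.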
Invictus3593
02-10-2013, 23:25
If you're using a properly-programmed true swerve drive*, you can compute position and orientation of the vehicle from the encoders on the wheels. But this introduces a different set of errors due to the dynamic response of the steering and wheel speeds in response to rapid changes in command. And all it takes to throw the computation off is one good bump that changes the orientation of the vehicle.
If you're using a skid-steer drivetrain, the relationship between the powered-wheel encoder readings and the actual vehicle movement during turns gets muddied considerably in ways that may not be easily predictable. Probably not a good solution.
*ie independent steering and drive for each wheel, with properly programmed steering angle and wheel speed for each wheel.
What about tank drive? Would it totally throw everything off?
While not accelerometer related, there have been a number of posts over the years describing issues with the "drift" associated with gyros over time and the error this causes when calculating field position. One method I've experimented with that seems to work well to compensate/eliminate most of this error utilizes two gyros. A high rate gyro for most turns (250 to 500 deg/sec), and a low rate gyro (30 deg/sec) for higher accuracy in slow curves and determining when the robot is stationary (for zero compensation). With the higher resolution of the low rate gyro, it is much easier to determine when you can automatically adjust the zero point. This method does break down when the robot is in continuous motion, but typically there are periods of time within a match (and certainly before the match starts), where the gyro can update its zero point. During bench testing, I was able to achieve a heading drift of under 2 degrees per hour when stationary. The heading calculation algorithm would automatically switch between the gyros at a 20 deg/sec rate (67% of full scale of the low rate gyro).
I'm not too familiar with gyros. Do you have any sample code I could take a look at?
Tom Line
03-10-2013, 00:02
What about tank drive? Would it totally throw everything off?
I'm not too familiar with gyros. Do you have any sample code I could take a look at?
The gyro+encoder routine is one that many teams currently use for autonomous. However, over time the errors due to wheel slip, collisions, and gyro drift make it inaccurate enough that you can't really depend on it through the match.
Polar coordinates lend themselves to this use (angle + distance).
faust1706
03-10-2013, 09:34
The gyro+encoder routine is one that many teams currently use for autonomous. However, over time the errors due to wheel slip, collisions, and gyro drift make it inaccurate enough that you can't really depend on it through the match.
Polar coordinates lend themselves to this use (angle + distance).
We have a constant problem with it climbing, as has been previously said in this thread. To account for this, we had a driver-controlled gyro reset. While doing that would defeat the whole project idea, you could use your last calculated point as your new origin and go from there, no?
Chris Hibner
03-10-2013, 09:52
The gyro+encoder routine is one that many teams currently use for autonomous. However, over time the errors due to wheel slip, collisions, and gyro drift make it inaccurate enough that you can't really depend on it through the match.
We have a constant problem with it climbing, as has been previously said in this thread. To account for this, we had a driver-controlled gyro reset. While doing that would defeat the whole project idea, you could use your last calculated point as your new origin and go from there, no?
In past years, when we needed high accuracy during turns and/or expected collisions or other sources of wheel slip or orientation change, we put the encoders on non-driven follower wheels. We put the wheels in line with the drive wheels, usually on a swing-arm, and spring-loaded them to keep them in contact with the floor. The VEX omni wheels work great for this. In fact, doing it this way allowed us to remove the gyro, which was nice.
Invictus3593
03-10-2013, 13:12
...we put the encoders on non-driven follower wheels. We put the wheels in line with the drive wheel, usually on a swing-arm and spring loaded them to keep in contact with the floor. The VEX omni wheels work great for this. In fact, doing it this way allowed us to remove the gyro, which was nice.
From what everyone is saying about Gyros (the slipping and such), it sounds like this method of plotting position would be the most accurate
The next challenge would be plotting the position on a sort of mini-map on the driver's screen. How do you guys think one would go about accomplishing this? One would have to be able to map the location in scale to the diagram's width and height, right? Or is there a less CPU-intensive method?
I'd be willing to try and write up a separate program, but it'd be cool to get it on the dashboard, if possible
ps: thank you guys for all your help!
Aaron.Graeve
03-10-2013, 13:54
I think the mini-map idea both has merit and is not overly CPU-intensive. I suppose it would be similar to teams that use line overlays in aligning their shooter. If you keep the total x and y components (or r and theta, whichever floats your boat) as variables stored on the robot, it should be a relatively easy task to use those values to add a point image overlay to a stock background map on the dashboard. The overlay would probably not be that taxing, provided you are careful with your implementation.
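The scaling question is just a linear map from field coordinates to image pixels. A Python sketch (the 16 m × 8 m field and 640 × 320 image are round-number assumptions; substitute the real field and image sizes):

```python
def field_to_pixel(x_m, y_m, img_w=640, img_h=320,
                   field_len_m=16.0, field_wid_m=8.0):
    """Map field coordinates (meters, origin at a field corner, y pointing
    'up' the map) to pixel coordinates on the dashboard's field image.
    Pixel rows count down from the top, so the y axis is flipped.
    All dimensions here are assumed placeholders."""
    px = int(x_m / field_len_m * img_w)
    py = int(img_h - y_m / field_wid_m * img_h)
    return px, py

print(field_to_pixel(0.0, 0.0))   # origin corner -> (0, 320)
print(field_to_pixel(8.0, 4.0))   # field center -> (320, 160)
```

Two multiplies and a subtract per update is negligible CPU, so the cost of the mini-map is really just redrawing the overlay.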
In past years, when we needed high accuracy during turns and/or expected collisions or other sources of wheel slip or orientation change, we put the encoders on non-driven follower wheels. We put the wheels in line with the drive wheels, usually on a swing-arm, and spring-loaded them to keep them in contact with the floor. The VEX omni wheels work great for this. In fact, doing it this way allowed us to remove the gyro, which was nice.
Chris,
You might want to point out that 3 follower wheels are required if the vehicle has strafing capability.
Also, with 2 followers it is important to realize that they cannot discriminate among the three motions illustrated in the attachment (showing three skid-steer vehicles each with a different center of rotation). This can be mitigated by placing the followers so that the midpoint between them is on the center of rotation of the vehicle.
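For reference, the 3-follower-wheel arithmetic can be sketched as follows. The wheel layout, spacing, and sign conventions here are my assumptions for illustration, not taken from the attachment:

```python
import math

def three_wheel_update(pose, d_left, d_right, d_back, track=0.30, back_offset=0.20):
    """Pose update from three unpowered omni follower wheels: left/right
    wheels roll in the robot's x direction, `track` meters either side of
    center; the back wheel rolls in y, `back_offset` meters behind center.
    pose = (x, y, theta) in the field frame; d_* are distances rolled since
    the last update. Geometry and signs are illustrative."""
    x, y, th = pose
    dtheta = (d_right - d_left) / (2.0 * track)   # rotation since last update
    dx_r = (d_right + d_left) / 2.0               # forward motion, robot frame
    dy_r = d_back - back_offset * dtheta          # strafe, minus rotation's share
    mid = th + dtheta / 2.0                       # rotate through the mid-heading
    x += dx_r * math.cos(mid) - dy_r * math.sin(mid)
    y += dx_r * math.sin(mid) + dy_r * math.cos(mid)
    return (x, y, th + dtheta)

# Pure forward roll: both side wheels advance 1 cm per step, back wheel 0.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = three_wheel_update(pose, 0.01, 0.01, 0.0)
print(round(pose[0], 3), round(pose[1], 3), round(pose[2], 3))  # 1.0 0.0 0.0
```

With all three distances independent, this recovers x, y, and heading together, which is why the gyro becomes optional.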
Greg McKaskle
03-10-2013, 17:31
Back in 2008, we did a demo of a robot called Nitro. It had three omni wheels and used the NI Motion planning SW to calculate trajectories. Anyway, for our demo we built a path builder and a path display using the picture control. You could also do this in an XY graph. Those are both pretty performant, especially the XY graph.
Greg McKaskle
Invictus3593
03-10-2013, 20:01
I think the mini-map idea both has merit and is not overly CPU-intensive. I suppose it would be similar to teams that use line overlays in aligning their shooter. If you keep the total x and y components (or r and theta, whichever floats your boat) as variables stored on the robot, it should be a relatively easy task to use those values to add a point image overlay to a stock background map on the dashboard. The overlay would probably not be that taxing, provided you are careful with your implementation.
I just tried to write up a point overlay in the dashboard like you said using the IMAQ Point overlay vi, but I can't seem to get it to actually plot a few points on the image, do you have any examples?
Back in 2008, we did a demo of a robot called Nitro. It had three omni wheels and used the NI Motion planning SW to calculate trajectories. Anyway, for our demo we built a path builder and a path display using the picture control. You could also do this in an XY graph. Those are both pretty performant, especially the XY graph.
Do you have any sample code left?
Tom Line
04-10-2013, 17:36
Back in 2008, we did a demo of a robot called Nitro. It had three omni wheels and used the NI Motion planning SW to calculate trajectories. Anyway, for our demo we built a path builder and a path display using the picture control. You could also do this in an XY graph. Those are both pretty performant, especially the XY graph.
Greg McKaskle
That's what we did in 2009 with our "PPS", or Pi-Positioning System. We could add waypoints and it would calculate the angle and distance, and feed that to our bot.
Wildstang has us all beat though, because they did it back in 2003 with their StangPS. You can find an archived thread here:
http://www.chiefdelphi.com/forums/archive/index.php/t-20273.html
Youtube Video here:
http://www.youtube.com/watch?v=SfDDq_4Hz6I
It's a bit different because it's crab, but much of it is applicable to tank too, since it's all angle + distance.
That's what we did in 2009 with our "PPS", or Pi-Positioning System. We could add waypoints and it would calculate the angle and distance, and feed that to our bot.
Wildstang has us all beat though, because they did it back in 2003 with their StangPS.
Here's the inverse problem. Instead of calculating the necessary trajectory to get to a desired point and heading, calculate the position if you know the 3 DoF motions of the robot over time.
http://www.chiefdelphi.com/forums/showthread.php?t=120022
Aaron.Graeve
04-10-2013, 18:45
I just tried to write up a point overlay in the dashboard like you said using the IMAQ Point overlay vi, but I can't seem to get it to actually plot a few points on the image, do you have any examples? Do you have any sample code left?
Unfortunately, I do not have any code to show and I do not have access to the libraries to recreate it. I may be able to get an example together when I go home this weekend and can get to my laptop, but until then, I can share nothing. I would only recommend that you make sure you apply the overlay after the dashboard clears the pre-existing overlays. It also may not hurt to test the overlays on a stock image (i.e. read an image from a file and try and apply the overlay; it will help make sure the process is correct.) I will post some code if I get a chance.
Joe Ross
04-10-2013, 23:44
You could also do this in an XY graph.
Here's how we used an XY graph to show a robot's path with a cursor showing current position.
Tom Line
05-10-2013, 00:06
Here's how we used an XY graph to show a robot's path with a cursor showing current position.
Joe, what functional purpose does the X-Y graph have?
Joe Ross
05-10-2013, 11:20
Joe, what functional purpose does the X-Y graph have?
Just for display and debugging.
Invictus3593
05-10-2013, 19:41
I would only recommend that you make sure you apply the overlay after the dashboard clears the pre-existing overlays.
That may be what I missed; it already overlays the points on a bmp of the field found here (http://frc-manual.usfirst.org/upload/images/2013/1/Figure2-3.jpg).
Here's how we used an XY graph to show a robot's path with a cursor showing current position.
What System did you use to get the X and Y values of the robot's position?
Aaron.Graeve
06-10-2013, 23:55
If it already overlays the points, is there any issue?
Additionally, your robot x and y coordinates would probably be generated using the algorithm this thread was discussing earlier and pulled from the robot to the dashboard. Correct me if I am wrong, but I think Joe used simple input controls to illustrate the point (no pun intended) of using an XY Graph.
your robot x and y coordinates would probably be generated using the algorithm this thread was discussing earlier
The equations and code for getting X, Y, and orientation from 3 omni follower wheels is discussed in Post 11 in this thread:
http://www.chiefdelphi.com/forums/showthread.php?t=120022
Invictus3593
07-10-2013, 09:06
If it already overlays the points, is there any issue?
Additionally, your robot x and y coordinates would probably be generated using the algorithm this thread was discussing earlier and pulled from the robot to the dashboard. Correct me if I am wrong, but I think Joe used simple input controls to illustrate the point (no pun intended) of using an XY Graph.
-That's just it, I can't get it to overlay points with the IMAQ overlay. I have yet to try Joe's method, but it looks promising!
-I did see that, and I was thinking about using the SmartDashboard to read the encoder values from the robot, just the numeric values, then do the calculations on the driver station, reducing CPU usage.
The equations and code for getting X, Y, and orientation from 3 omni follower wheels is discussed in Post 11 in this thread:
http://www.chiefdelphi.com/forums/showthread.php?t=120022
Thanks! I was wondering what the configuration of the wheels would look like. Also, the code really helped clear up confusion I had about how position would be calculated. So on the dashboard, one would just need to read the X, Y, and Q values and set them on the graph? The Arc & Chord pdf later in that thread helped a lot as well.