So this year I’ve been working on the concept of a minimap display that will tell the operator or coach where the robot is on the field. Currently it is set up with constant positions; if I had time to learn some vision processing, I would have a camera determine exactly where the robot is during auto before it did anything, and then have the accelerometer do the work of tracking the robot’s position from there. This feature sat on a back burner that was turned off for pretty much the entire season, but I really wanted to see if it would work using an accelerometer.

Here’s the issue I’ve run into: how do you get position from an accelerometer? Using simple physics you should be able to tell how far you’ve traveled on the x and y axes, but I’m having trouble determining the velocity correctly when the driver jerks around or when the robot is at a constant velocity. It doesn’t help that the accelerometer is not nice and neat and doesn’t read exactly 0 when the robot is not moving. Maybe I’m just looking at this situation wrong, or maybe I’ve got the wrong idea.
I have not programmed it, but I stayed at a Holiday Inn Express last night …
Here is an interesting tutorial
I assume that you have used the WPI Lib SetZero command to zero out the effects of gravity. Other than that, the tutorial says that the accelerometer will measure vibrations (compressor motor?).
I’m having trouble determining the velocity correctly when the driver jerks around or when the robot is at a constant velocity.
When the robot is moving at a constant velocity, the acceleration is 0. Just like you store position, you also need to store velocity. During each time slice, you take the acceleration and factor it into the new velocity; the new velocity times the length of the time slice gives you the change in position. If acceleration is 0, then your new velocity is the same as your old velocity.
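In code, that loop might look something like this (a minimal sketch; the 10 ms period and the already-zeroed acceleration input are my assumptions, not anything from WPI Lib):

```java
// Minimal dead-reckoning sketch. Assumes the caller passes an
// acceleration reading (m/s^2) that already has gravity/bias removed,
// and that update() is called on a fixed 10 ms period.
public class DeadReckoning {
    private double velocity = 0.0; // m/s
    private double position = 0.0; // m
    private static final double DT = 0.01; // assumed 10 ms time slice

    public void update(double accel) {
        velocity += accel * DT;    // if accel is 0, velocity is unchanged
        position += velocity * DT; // new position = old position + v * dt
    }

    public double getPosition() { return position; }
}
```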
Jerking around is a little more difficult. Processing necessarily assumes constant acceleration since the last reading. Questions are:
- Does the device and/or WPI Lib give instantaneous acceleration, or the average acceleration since the last reading?
- Even if instantaneous, what is the time period? What is the clock cycle of the device?
- What happens if you sample too quickly?
Here is the issue: somehow, you have to measure the abrupt changes (jerks) in acceleration. If the change happens quickly (like smashing into a steel wall), then you might miss it if you do not sample fast enough. Let’s take an extreme example: say you sample once a second. At t=0 you are stopped, and at t=1 you read 1 meter/second/second of acceleration. How long during that time slice have you been accelerating? Your velocity at t=1 could be 1 meter/second (if you were accelerating during the entire time slice) or 0 meter/second (if you just started accelerating). More frequent sampling reduces the error.
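As a rough bound (my own back-of-the-envelope framing, not from any datasheet): if the true acceleration can change by at most $|\Delta a|$ within one sample period $\Delta t$, the velocity error picked up in that single slice is at most

$$|v_{\text{err}}| \le |\Delta a|\,\Delta t$$

so halving the sample period halves the worst-case error per slice. In the once-a-second example, $|\Delta a| = 1$ m/s² and $\Delta t = 1$ s, which is exactly the 1 meter/second spread described.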
Smoothing also happens between the accelerometer and the analog input board. As the accelerometer changes voltage, there is a lag time before the analog input board reflects the changes. If the voltage is changing quickly, then you might only get an average, depending upon how fast the analog input board updates the digital number.
On average, you would expect all the “noise” in the system to cancel out, or not be material to your measurement. There is also the problem that you are only estimating: over time you will drift off, and you need some other type of reading to re-calibrate. For instance, use a camera pointed down to figure out when you are not moving, and zero all your readings.
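For the zeroing step, one simple approach (just a sketch with hypothetical names, not the internals of the WPI Lib SetZero command) is to average a burst of samples while the robot is known to be still and subtract that bias from every later reading:

```java
// Hypothetical at-rest bias calibration sketch.
public class BiasCalibrator {
    private double bias = 0.0;

    // readings: raw accelerometer samples taken while the robot is stationary
    public void calibrate(double[] readings) {
        double sum = 0.0;
        for (double r : readings) {
            sum += r;
        }
        bias = sum / readings.length;
    }

    // Subtract the measured at-rest bias from a raw reading.
    public double corrected(double raw) {
        return raw - bias;
    }
}
```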
What kind of readings are you getting when the robot is not moving?
What kind of readings are you getting when the robot is moving at a constant velocity? Note: What may seem constant to you may have a lot of variance (noise) for the accelerometer, which should average out over time (say 1 second). Also, it may detect smaller changes than you can see.
I worked on a similar project before this season started, but I never made any headway as I ran into a few problems.
From my research, there are two methods to accomplish this: Accelerometer/Gyro and Encoder/Gyro.
The Accelerometer/Gyro method would involve a lot of math and physics to get right, and it does seem like the most viable option with the fewest parts required, but (like rich2202 said) an accelerometer will read any acceleration, including getting bumped or going over a bump. Even moving forward and then reversing very quickly could throw off your positioning and give you incorrect readings due to the inaccuracy of the accelerometer. You would also have to figure out radial acceleration when turning, since both the accelerometer and the gyro will be giving you readings there. Which ones do you take as fact, which do you ignore, and how?
The Encoder/Gyro method might be a more accurate option, but it has a bit more physical impact on your robot. From my research, you would need two undriven omni-wheels (one north-south and one east-west) with encoders on them. These wheels aren’t like your driven ones: when your bot gets bumped, they move with it rather than against it, giving you an accurate reading. You then use the encoder data to measure the distance traveled in each direction relative to the gyro heading at each point; a sketch of that bookkeeping is below.
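Here’s roughly what I have in mind (all names are hypothetical; I’m assuming the heading is counter-clockwise positive with 0° pointing down-field and the sideways wheel reading positive to the robot’s left):

```java
// Sketch of follower-wheel odometry: two undriven omni wheels with
// encoders (one robot-forward, one robot-sideways) plus a gyro heading.
public class FollowerWheelOdometry {
    private double fieldX = 0.0, fieldY = 0.0;            // field-relative position (m)
    private double lastForward = 0.0, lastSideways = 0.0; // previous encoder distances

    // forwardDist/sidewaysDist: cumulative encoder distances in meters
    // headingDeg: gyro heading in degrees
    public void update(double forwardDist, double sidewaysDist, double headingDeg) {
        double dX = forwardDist - lastForward;   // robot-relative delta X
        double dY = sidewaysDist - lastSideways; // robot-relative delta Y
        lastForward = forwardDist;
        lastSideways = sidewaysDist;

        // Rotate the robot-relative deltas into field coordinates.
        double theta = Math.toRadians(headingDeg);
        fieldX += dX * Math.cos(theta) - dY * Math.sin(theta);
        fieldY += dX * Math.sin(theta) + dY * Math.cos(theta);
    }

    public double getFieldX() { return fieldX; }
    public double getFieldY() { return fieldY; }
}
```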
There was a team that used gyros and encoders in a previous game to plot field waypoints for their autonomous mode, but it never measured where the robot was, just told it where to go.
This would be really neat and I’m willing to help in any way I can!
I spent quite a few hours last season trying to use an accelerometer to determine position. No matter how many corrections I tried, I could not maintain a reasonable value for position for more than about 10 seconds. Any small error in acceleration measurement is compounded when you integrate to get velocity, and any small error in velocity is further compounded when you integrate to get position. All of these errors sum up very quickly. As far as I can tell, the kit accelerometer simply does not have enough precision to be used in this way (although I would really like to be proven wrong). The sample times could also be an issue if a higher-precision accelerometer were used, but I am convinced that the limiting factor when using the kit accelerometer is its precision, not the sample times.
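To put numbers on the compounding (my own back-of-the-envelope, not measured data): a constant bias $b$ in the acceleration reading integrates into a velocity error $v_{\text{err}}(t) = b\,t$ and a position error

$$x_{\text{err}}(t) = \tfrac{1}{2}\,b\,t^2$$

so even a small bias of 0.05 m/s² puts the position estimate off by about 2.5 m after the 10 seconds I mentioned.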
This is why I lean more towards the encoder/gyro method; there are too many variables and errors in the accelerometer. But I may find that the encoder method has them too, and there’s only one way to find out. I’m going to convince a few members of my team to help me work on an encoder/omni-wheel/gyro project with a kit chassis and give you guys an update in a few weeks.
Any suggestions before I start?
EDIT: The next thing, after a mechanical solution has been found, would be figuring out how to plot all this data on a 2D x/y plane on your dashboard in real time. I’m guessing the DB would do all the calculations and just get a couple of SD Variables for the encoders and gyro from the bot?
Study up on your trigonometry and dimensional analysis.
On an unrelated note, make sure that your wheels are not pushing too hard on the ground, or you might be sacrificing some acceleration/pushing power come competition.
I’ve already been graphing test data (both encoders and a sample angle) in Microsoft Mathematics. I think I’ve got the graphing figured out.
Inputs:
ΔX and ΔY relative to the robot
Δangle relative to the last angle
Once one has these variables, they can use the rotateXY block in the geometry library to rotate the whole thing by whatever angle θ the gyro gives.
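For reference, the rotation that block performs should be the standard 2-D rotation (assuming θ is the gyro heading, counter-clockwise positive):

$$X_{\text{field}} = \Delta X \cos\theta - \Delta Y \sin\theta, \qquad Y_{\text{field}} = \Delta X \sin\theta + \Delta Y \cos\theta$$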
Please correct me if I’m wrong on any of this.
On an unrelated note, make sure that your wheels are not pushing too hard on the ground, or you might be sacrificing some acceleration/pushing power come competition.
Good idea! Do you think they need to be spring-loaded in case the robot gets tipped a bit?
Here’s the code for the minimap. Currently only the rotation function is working. You can observe how the image moves across the field, and you can probably tell where I went wrong.
2014 DashBoard Jester 1.0.zip (487 KB)
Wow, awesome job! I was able to map that rotation to a knob control and rotate a picture, but once I’ve turned the picture, it keeps the bounding box where it originally was.
Before rotation:
http://i.imgur.com/bQcvmwf.png
At a 60-degree angle:
http://i.imgur.com/LHNSa9L.png
Did you get this too when you tested? It may be because the image I used wasn’t the one you tested with.
What I found while testing the rotation display on the minimap was that you need the center of rotation lined up closely. If you look at the code for the rotation, you can see that I set the center of rotation close to the middle of the image. I thought I had a folder called Photo along with it…
Here’s the folder for the picture on the minimap. Place it in Program Files, in the FRC Dashboard folder, and the minimap should work if you haven’t changed the paths to the picture files on the Dashboard.
Photo.zip (17.2 KB)
This worked. For the movement in the XY plane, is the box created by the robot picture movable? If not, it may be beneficial to try having two pixmaps, one for the field and one for the robot. The robot pixmap could then be moved around the field by use of property nodes and rotated based on your rotation code.
I’ll see if I can get to work on that.