So, for all practical FRC purposes, the maximum timeframe that needs to be considered is 15 seconds. If we are talking about non-FRC applications, you probably need to define what time frame you want us to consider.
This actually works pretty well, at least for the 15 seconds of auto. Even at high robot speeds, we have found that using encoders on all 4 swerve wheels and running the swerve kinematics in reverse, integrated over time, is very accurate (to within a few inches), repeatable, and predictable. With encoders on all 4 drive motors as well as on each module's steering axis, we can rotate while driving and don't need to move slowly to keep the estimate accurate.
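If it helps, here's roughly what that integration looks like (a stripped-down sketch, not our actual code; the class and names are made up, and it assumes a gyro heading plus per-module drive and steering encoders):

```java
// Minimal dead-reckoning sketch for a 4-module swerve drive.
// Illustrative only: assumes each module reports drive wheel velocity (m/s)
// and steering angle (rad), and that a gyro supplies the robot heading (rad).
public class SwerveDeadReckoning {
    private double fieldX = 0.0;   // meters, set from the known start position
    private double fieldY = 0.0;

    /** Call this every loop iteration (e.g. every 20 ms). */
    public void update(double[] wheelSpeeds,   // drive encoder rates, m/s
                       double[] steerAngles,   // module angles, rad, robot frame
                       double gyroHeading,     // robot heading, rad, field frame
                       double dt) {            // seconds since last update
        double vx = 0.0;
        double vy = 0.0;
        // Each module's velocity vector, rotated into the field frame.
        for (int i = 0; i < wheelSpeeds.length; i++) {
            double fieldAngle = steerAngles[i] + gyroHeading;
            vx += wheelSpeeds[i] * Math.cos(fieldAngle);
            vy += wheelSpeeds[i] * Math.sin(fieldAngle);
        }
        // Averaging the module vectors gives the chassis velocity at the
        // geometric center (the rotational components cancel for a
        // symmetric module layout).
        vx /= wheelSpeeds.length;
        vy /= wheelSpeeds.length;
        // Integrate ("run the swerve math in reverse") to track position.
        fieldX += vx * dt;
        fieldY += vy * dt;
    }

    public double getX() { return fieldX; }
    public double getY() { return fieldY; }
}
```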
Of course you do need to know your starting position and orientation accurately, so lining up on the wall in a repeatable position (e.g. at the edge of the exchange zone tape or at the corner of the end wall) is a must. Any error in orientation turns into lateral position error that grows with the distance you travel (roughly distance × the sine of the heading error, so 1° of heading error over 20 feet is about 4 inches). This is probably the biggest source of error we have found.
In the interest of adding to this experiment: One thing that we have done with FTC robots is use the tape on the floor for periodic position corrections. Two years ago, with Res-Q, we used fairly simple color sensors pointed at the ground under the robot to localize it as it passed over the tape lines. Using those marks, we could correct small errors in the initial orientation. There were two lines we passed over, one at an angle to the other, so the distance between crossing the first and crossing the second told us where we were along the diagonal line, which pinned down our position (at least at that moment) very accurately.
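For anyone curious, the geometry boils down to something like this (a sketch with made-up names, not our actual FTC code; it assumes you drive roughly perpendicular to the first line and know the angle between the two lines):

```java
// Illustrative geometry for the two-tape-line trick described above.
public class TapeLineLocalizer {
    /**
     * @param distanceBetweenCrossings drive distance (from encoders) between
     *        detecting line A and detecting line B, in inches
     * @param lineAngleRadians angle between line A and the diagonal line B
     * @return position along line A, measured from the point where the two
     *         lines intersect, in inches
     */
    public static double positionAlongLineA(double distanceBetweenCrossings,
                                            double lineAngleRadians) {
        // The gap between the lines grows linearly as you move away from
        // their intersection: gap = position * tan(angle).
        return distanceBetweenCrossings / Math.tan(lineAngleRadians);
    }

    public static void main(String[] args) {
        // Example: lines 30 degrees apart, 12 in between crossings puts the
        // robot about 20.8 in from the intersection.
        System.out.println(positionAlongLineA(12.0, Math.toRadians(30.0)));
    }
}
```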
Extrapolating from this experience, it seems like most vision systems should be able to pick out the red, blue and white tape lines that FIRST likes to use on the fields each year. If you have a fixed robot orientation on the field (i.e. a swerve drive that doesn't rotate, as you brought up), then the math for X and Y position using these landmarks would be fairly simple. If you allow for rotation, then it gets a little tougher, but it would still be quite realistic to compare these landmarks with a 2D map of the field. If you keep track of your position reasonably well along the way, then you can check yourself against the map periodically to correct any error that has crept in.
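One simple way to do that "check yourself against the map" step, sketched out (hypothetical names, not tested on a robot): when the camera says you are centered on a known field line, correct only the component of your estimate perpendicular to that line, since the line tells you nothing about where you are along it.

```java
// Correct a position estimate using a single known field line.
// The line is described by a unit normal (nx, ny) and offset c so that
// points on it satisfy nx*x + ny*y = c.
public class LineMapCorrector {
    public static double[] correct(double estX, double estY,
                                   double nx, double ny, double c) {
        // How far the estimate sits off the line, along its normal.
        double error = c - (nx * estX + ny * estY);
        // Shift the estimate onto the line; the along-line component is
        // left untouched.
        return new double[] { estX + error * nx, estY + error * ny };
    }

    public static void main(String[] args) {
        // Example: a line parallel to the Y axis at x = 3.0 m
        // (normal (1, 0), offset 3.0). An estimate of (3.2, 5.0)
        // gets corrected to (3.0, 5.0).
        double[] corrected = correct(3.2, 5.0, 1.0, 0.0, 3.0);
        System.out.println(corrected[0] + ", " + corrected[1]);
    }
}
```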
Nope. It’s 2m30s.
You need to have more faith in your drivers, Marshall…
Send more please.
Only if you stop arguing with them…
No, you misunderstand. We need them to create training data for our neural networks.
Is it just Marshall arguing with them that’s the problem? Am I allowed to?
That explains a lot…
It’s an adversarial neural network. Arguing is how we get better data.
You’re under the assumption that 900 wants to have drivers next year. full auto robot
Twitches
<rant>
At least there are proofs to back up parts of GANs now. Modern machine learning continues to dig deeper into its empirical hole with GANs and neural architecture search*. Most of modern ML is engineering and trial and error. It isn’t an exact science.
</rant>
*But I’m a hypocrite because this has been my main area of research focus for 2 years now.
We really just want more blue boxes in the manual.
Z24:
ACTION: Anything the robot does (at all)
Robots must state their intentions to the FMS through the OPERATOR CONSOLE whenever performing an ACTION during the match.
A worthy goal. Personally, I’m working on a betavoltaic-powered realtime clock for the RIO. I’d love to be the reason a “no radioisotopes” rule gets added.
Operator console says “go win the match, we’ll check back in a few”.
On the topic of the camera pointed down: what if you literally had 4 optical mice on suspension to keep constant contact with the field, one mounted on each corner? Could you get useful data off of that? Why write your own motion tracking algorithm when mouse manufacturers have written it already?
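Rough sketch of the math I'm imagining (made-up units and mounting positions, not tested): with each mouse reporting its own displacement and a known offset from the robot center, two sensors are already enough to separate translation from rotation, and four would just give you a least-squares version of the same fit.

```java
// Recover planar chassis motion from two optical-mouse displacement
// readings taken at known mounting positions on a rigid robot.
public class MouseOdometry {
    /**
     * @param d1 displacement reported by mouse 1 since the last loop: {dx, dy}
     * @param d2 displacement reported by mouse 2 since the last loop: {dx, dy}
     * @param r1 mounting position of mouse 1 relative to robot center: {x, y}
     * @param r2 mounting position of mouse 2 relative to robot center: {x, y}
     * @return {tx, ty, dtheta}: chassis translation and small rotation angle
     */
    public static double[] chassisMotion(double[] d1, double[] d2,
                                         double[] r1, double[] r2) {
        // For a small rotation dtheta, point i moves by
        //   d_i = t + dtheta * perp(r_i),  where perp(x, y) = (-y, x).
        // Subtracting the two measurements eliminates t:
        double px = -(r2[1] - r1[1]);   // perp(r2 - r1), x component
        double py =  (r2[0] - r1[0]);   // perp(r2 - r1), y component
        double ddx = d2[0] - d1[0];
        double ddy = d2[1] - d1[1];
        // Least-squares fit of dtheta over the two components; with four
        // mice you would extend the same fit over all sensor pairs.
        double dtheta = (ddx * px + ddy * py) / (px * px + py * py);
        // Recover the translation from mouse 1's reading.
        double tx = d1[0] - dtheta * (-r1[1]);
        double ty = d1[1] - dtheta * ( r1[0]);
        return new double[] { tx, ty, dtheta };
    }
}
```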
Relevant PDF: http://cdn.intechopen.com/pdfs/47040.pdf
However, it’s not very relevant to FRC. The issue is that optical mice usually have a maximum velocity measurement capability of less than 1 meter/sec, and are highly likely to have trouble tracking a regular surface like carpet at speeds anywhere close to that.
A simpler form of this was done by the Poofs this year with their scale height detection: a camera on the driver station measured the scale height and fed it to their auto so the elevator could be aimed optimally.