Our team has decided to go with a mecanum drive this year because of the ability to strafe when aligning for hatch or cargo placement. However, in the past I have noticed that it can be difficult to use encoders to track strafing. Does anyone know how to relate wheel rotations to strafing movement using physics?
Instead of using encoders on each wheel and trying to account for drift and slippage, you could place two follower (idler) wheels at the center of the robot and put the encoders on those. By tracking those encoders alongside the robot angle given by a gyro, you can get a much better estimate of robot displacement.
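To make the idea concrete, here is a minimal sketch (my own illustration, not any team's actual code) that integrates the two follower-wheel deltas each loop, rotated into the field frame by the gyro heading:

```java
// Sketch: odometry from two center-mounted follower wheels plus a gyro.
// Assumes one omni wheel rolls forward/back and the other rolls left/right,
// both mounted at the robot's center of rotation.
public class DeadWheelOdometry {
    private double fieldX = 0.0;      // field-relative position, inches
    private double fieldY = 0.0;
    private double lastForward = 0.0; // previous cumulative encoder readings
    private double lastStrafe = 0.0;

    /**
     * Call once per control loop.
     * @param forwardDist cumulative distance from the forward-facing follower wheel
     * @param strafeDist  cumulative distance from the sideways follower wheel
     * @param headingDeg  gyro heading in degrees, counterclockwise positive
     */
    public void update(double forwardDist, double strafeDist, double headingDeg) {
        double dF = forwardDist - lastForward; // robot-relative deltas this loop
        double dS = strafeDist - lastStrafe;
        lastForward = forwardDist;
        lastStrafe = strafeDist;

        double h = Math.toRadians(headingDeg);
        // Rotate the robot-relative deltas into the field frame and accumulate.
        fieldX += dF * Math.cos(h) - dS * Math.sin(h);
        fieldY += dF * Math.sin(h) + dS * Math.cos(h);
    }

    public double getX() { return fieldX; }
    public double getY() { return fieldY; }
}
```

Note that if the follower wheels aren't at the center of rotation, turning adds an extra offset-times-heading-change term that this sketch ignores.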
I would suggest doing something like Team Titanium (1986) did in 2017 to track distance. They used two omni wheels (one for X and one for Y) with encoders on the bottom of their bot and tracked position that way.
You can see them here (watch for a few seconds for a closer view):
We are using a gyro and accelerometer to “go straight” in any direction with the mecanum wheels. That keeps us on track even if the robot rides over a bump, gets hit, or someone kicks it.
If a wheel lags due to friction or whatever, you might also get acceleration in the Y axis instead of only the X axis; in other words, you might not twist as much as move diagonally. The gyro takes care of the orientation of the robot; the accelerometer allows it to go straight or recover if hit, at least to some degree.
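For the gyro half of that, the usual pattern looks something like the sketch below: feed the gyro angle into WPILib's field-oriented driveCartesian() and add a small P correction on rotation to hold heading. The gain and the Spark motor controllers are placeholders, not their actual setup, and the accelerometer recovery logic isn't shown.

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.Spark;
import edu.wpi.first.wpilibj.drive.MecanumDrive;

public class GyroStraightDrive {
    private final MecanumDrive drive = new MecanumDrive(
        new Spark(0), new Spark(1), new Spark(2), new Spark(3));
    private final ADXRS450_Gyro gyro = new ADXRS450_Gyro();

    private double headingSetpoint = 0.0;   // heading to hold, degrees
    private static final double kP = 0.02;  // tune on your robot

    /** Drive field-relative at (xSpeed, ySpeed) while holding the stored heading. */
    public void driveHoldingHeading(double ySpeed, double xSpeed) {
        double error = headingSetpoint - gyro.getAngle();
        // P-only correction on the rotation axis keeps the robot from twisting
        // when one wheel drags; passing the gyro angle makes "straight" mean
        // straight on the field, even if the robot gets bumped off its heading.
        drive.driveCartesian(ySpeed, xSpeed, kP * error, gyro.getAngle());
    }
}
```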
Did you limit your acceleration while tracking odometry? I am working on an encoder-based odometry solution for mecanum right now and was wondering if you had any more info.
Nope, and that was at least part of our issue - wheel slip was a big thing. I’m not entirely convinced that wheel slip isn’t going to happen in all cases, so we didn’t address it much (and we were able to place the gear with somewhat reasonable precision).
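For anyone who does want to try limiting acceleration to cut down on slip, a hand-rolled slew-rate limiter on each drive input is one simple way to do it. This is only a sketch; the per-loop rate constant is made up and has to be tuned until the wheels stay below their traction limit.

```java
// Caps how fast a commanded value may change between control loops,
// which limits acceleration and makes wheel slip less likely.
public class SlewLimiter {
    private final double maxDeltaPerLoop;
    private double lastOutput = 0.0;

    public SlewLimiter(double maxDeltaPerLoop) {
        this.maxDeltaPerLoop = maxDeltaPerLoop;
    }

    public double calculate(double input) {
        double delta = input - lastOutput;
        if (delta > maxDeltaPerLoop) {
            delta = maxDeltaPerLoop;
        } else if (delta < -maxDeltaPerLoop) {
            delta = -maxDeltaPerLoop;
        }
        lastOutput += delta;
        return lastOutput;
    }
}
```

Run each of the X, Y, and rotation commands through its own limiter before handing them to the drive.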
Theoretically possible if you can keep the wheels from slipping, but outside of carefully controlled auto routines with no defense, 'tain’t likely to navigate purely on wheel rotation.
Using follower wheels is a somewhat better idea, but better still is using all the wonderful visual cues FIRST has provided this year. There’s a white line leading into every scoring goal and the hatch cover pickup. There is the SAME reflective target pair as well, though the cargo goals on the rocket are a few inches higher. FIRST has never made it easier to semi-automate the fiddly tasks with optical/vision sensors than it has this year. The only (non-foul) scoring functions that are not marked are CARGO pickup (which is a ball, the easiest game piece of all) and the start/end-game functions of getting up and down and into and out of the HAB. The message is clear: if you haven’t figured out optical sensors yet, this is the year to do it.
I wanted to know the strafing rotations so I could use the vision tape to calculate how many rotations to travel, then handle the rest with encoder tricks in the PID loop. That way I’m not relying on the latency of the camera.
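Something like this sketch is what I have in mind; the wheel diameter, tick count, and slip factor are placeholders you would measure on your own robot:

```java
// Sketch: convert a one-time vision measurement into an encoder setpoint,
// then close the PID loop on encoders only (no camera in the loop).
public class StrafeSetpoint {
    private static final double WHEEL_DIAMETER_IN = 6.0; // mecanum wheel diameter
    private static final double TICKS_PER_REV = 1024.0;  // encoder ticks per wheel rev
    private static final double SLIP_FACTOR = 1.0;       // 1.0 = no-slip assumption

    /** Lateral offset from vision (inches) -> encoder ticks each wheel must turn. */
    public static double strafeInchesToTicks(double offsetInches) {
        double wheelRevs = offsetInches / (Math.PI * WHEEL_DIAMETER_IN);
        return wheelRevs * TICKS_PER_REV * SLIP_FACTOR;
    }
}
```

SLIP_FACTOR starts at 1.0 and gets bumped up once you measure how far the robot actually travels per strafing revolution on carpet.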
Assuming no slippage, the wheels turn the same amount to go a given distance left or right in a pure strafe as they do to go forward or reverse. (This will not be true when traveling on a diagonal.) In practice there will be more slippage when strafing, so the best approach is to calibrate it yourself: place the robot on a piece of carpet as similar to the field carpet as you can get, strafe through a specific number of revolutions, and measure the distance traveled.
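A quick numeric check of both claims, using one common form of the mecanum inverse kinematics (sign conventions vary between teams, but the magnitudes are what matter here):

```java
import java.util.Arrays;

public class MecanumKinematicsDemo {
    /** Wheel surface speeds [fl, fr, rl, rr] for chassis velocity (vx fwd, vy left, omega). */
    static double[] wheelSpeeds(double vx, double vy, double omega, double k) {
        return new double[] {
            vx - vy - k * omega,  // front-left
            vx + vy + k * omega,  // front-right
            vx + vy - k * omega,  // rear-left
            vx - vy + k * omega   // rear-right
        };
    }

    public static void main(String[] args) {
        double k = 1.0; // (trackwidth + wheelbase) / 2; irrelevant with omega = 0
        // Pure forward and pure strafe command magnitude 1.0 on every wheel,
        // so revolutions-per-inch are identical for the two moves (ignoring slip).
        System.out.println(Arrays.toString(wheelSpeeds(1, 0, 0, k))); // [1, 1, 1, 1]
        System.out.println(Arrays.toString(wheelSpeeds(0, 1, 0, k))); // [-1, 1, 1, -1]
        // On a 45-degree diagonal, two wheels stop while the other two
        // spin sqrt(2) times faster, so the straight-line calibration breaks.
        double d = 1.0 / Math.sqrt(2.0);
        System.out.println(Arrays.toString(wheelSpeeds(d, d, 0, k))); // [0, ~1.41, ~1.41, 0]
    }
}
```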