Many teams agree that vision will be an important component of the game this year, specifically in lining up the robot for hatch placement.
But, how are robots actually going to achieve this? I have little to no idea, so I thought I’d ask you guys. To be clear, I don’t mean rotating the robot to face the center of the vision target. By ‘align’, I mean controlling the approach of the robot such that it is perpendicular to and facing towards the target.
My first thought was to determine the x position, y position, and skew of the target relative to the robot, use that to generate a path, and then follow it with some motion profiling. Pure pursuit would probably be ideal for this task, but most of our “path-ing” abilities are courtesy of Pathfinder V1. From some brief testing, optical distance estimation currently seems inconsistent at best, so I’m wary of trying something this cool.
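For reference, here's roughly the kind of thing I was imagining (an untested sketch; `distanceMeters`, `txDegrees`, and `skewDegrees` stand in for whatever the vision pipeline would report, and the Pathfinder config values are made up):

```java
// Sketch: convert one camera measurement into a robot-relative end waypoint,
// then hand it to Pathfinder V1. All inputs and config values are placeholders.
import jaci.pathfinder.Pathfinder;
import jaci.pathfinder.Trajectory;
import jaci.pathfinder.Waypoint;

public class VisionPathSketch {
  public static Trajectory pathToTarget(double distanceMeters, double txDegrees, double skewDegrees) {
    double tx = Math.toRadians(txDegrees);
    // Target position in the robot frame (x forward, y to the left); flip signs
    // to match whatever convention your follower uses.
    double targetX = distanceMeters * Math.cos(tx);
    double targetY = -distanceMeters * Math.sin(tx);
    // End heading taken from the measured skew so the path finishes square to the target.
    double endHeading = Math.toRadians(skewDegrees);

    Waypoint[] points = new Waypoint[] {
      new Waypoint(0, 0, 0),                      // robot's current pose
      new Waypoint(targetX, targetY, endHeading)  // arrive perpendicular to the target
    };
    // SAMPLES_FAST keeps on-RIO generation time down for a simple two-point path.
    Trajectory.Config config = new Trajectory.Config(
        Trajectory.FitMethod.HERMITE_CUBIC, Trajectory.Config.SAMPLES_FAST,
        0.02, 1.7, 2.0, 60.0);
    return Pathfinder.generate(points, config);
  }
}
```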
So, what are other teams doing to tackle this controls challenge?
Note: I posed this question for differential drivebases, though I’d love to see solutions with strafe-capable robots too!
Other Note: If anyone wants to help me figure out why our optical distance estimates are inflated 1.75x that would also be much appreciated.
You do not want to generate paths on the roboRIO with Pathfinder V1. It will take ages just to generate the path. The easy way to align is to run a simple PID loop on the x angle/offset from the camera.
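Something like this, for example (a bare-bones sketch assuming a Limelight publishing `tx` over NetworkTables; the gain is made up and would need tuning):

```java
// Minimal sketch: a P loop on the Limelight's tx (horizontal offset) supplies the
// turn command while the driver supplies forward throttle. The gain is made up.
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class AimAssist {
  private static final double kP = 0.03; // turn power per degree of tx (tune on the robot)

  public static void drive(DifferentialDrive drive, double forward) {
    double tx = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("tx").getDouble(0.0);
    double turn = kP * tx;               // steer toward the target
    drive.arcadeDrive(forward, turn);
  }
}
```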
Just doing an angle loop doesn't really align the robot with the target; it just gets it pointed towards it. The drive base will have to be nearly perpendicular for proper hatch panel placement. Also, you can dramatically decrease the default path resolution to make generation much faster, especially if the path consists of only two points.
The way our team is planning on lining up parallel with the vision targets is to have preset angles for each target on the field. The driver simply selects the correct angle, the robot turns to that angle using a gyro, and then we can easily follow the vision target because we are parallel to it (we do use a strafe-capable robot, though). This way, we don't actually have to determine the angle of our robot relative to any given target using pose estimation.
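Roughly the idea, as a sketch; the preset angles, gains, and gyro/drive classes are placeholders rather than our actual code, and the horizontal offset is assumed to come from something like a Limelight's tx:

```java
// Sketch: snap to the preset field-relative angle for the selected target, then
// take out the remaining error by strafing on the camera's horizontal offset.
// Preset angles, gains, and the specific gyro/drive classes are placeholders.
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.drive.MecanumDrive;

public class PresetAngleAlign {
  // Example field-relative headings (degrees) for the targets the driver can pick.
  private static final double[] PRESET_ANGLES = {0, 90, -90, 180, 30, -30};
  private static final double kTurnP = 0.02;
  private static final double kStrafeP = 0.04;

  public static void align(MecanumDrive drive, ADXRS450_Gyro gyro,
                           int selectedTarget, double forward) {
    double headingError = PRESET_ANGLES[selectedTarget] - gyro.getAngle();
    double tx = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("tx").getDouble(0.0);
    // Once parallel to the target wall, the remaining tx error can be removed by strafing alone.
    drive.driveCartesian(kStrafeP * tx, forward, kTurnP * headingError); // strafe, forward, turn
  }
}
```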
Regarding your question about your optical distance measurements being off:
Not sure if you are using a Limelight camera, but we found through testing that the reported ty value (angle of target with respect to the crosshair) was not correct. We ended up sampling the reported ty versus actual calculated angles and fitting a function to adjust for the error. Within 8ft it kept the error in calculated distance to +/- 1"
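To illustrate the kind of correction I mean, here's a sketch (the sample pairs are made up for illustration, not our actual data):

```java
// Sketch of the correction idea: sample reported ty against the angle computed
// from a tape-measured distance, fit a line, and run reported ty through that
// line before doing the distance trig. Sample pairs below are illustrative only.
public class TyCorrection {
  // {reported ty (deg), true angle (deg)} -- replace with your own measurements
  private static final double[][] SAMPLES = {
    {2.1, 2.5}, {5.0, 5.9}, {8.3, 9.6}, {11.9, 13.7}
  };

  // Ordinary least-squares fit of true = m * reported + b.
  public static double[] fitLine() {
    int n = SAMPLES.length;
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (double[] s : SAMPLES) {
      sx += s[0]; sy += s[1]; sxx += s[0] * s[0]; sxy += s[0] * s[1];
    }
    double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - m * sx) / n;
    return new double[] {m, b};
  }

  public static double correctedTy(double reportedTy, double[] fit) {
    return fit[0] * reportedTy + fit[1];
  }
}
```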
My team does not use Pathfinder V1 for our pure pursuit, and we can create paths on the fly. I am curious about creating a path; right now we just turn toward the target and drive. I will test on-the-fly pathing for vision and post how well it goes as soon as build lets me steal their drivetrain.
We are also using a Limelight camera. Was your adjustment function linear? Do you think it would be better/easier to fudge the reported angle, or to use one of the raw vertical pixel measurements and convert that to an angle?
we found through testing that the reported ty value (angle of target with respect to the crosshair) was not correct.
Did you mean “tx”? “ty” is the vertical offset.
“ty”, the vertical offset, is also reported, and it is what is used to calculate distance with some simple trig. The only constant when looking at the target from any part of the field is the height difference between the lens and the center of the target, so the vertical angle must be used.
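For anyone following along, a sketch of that trig (the mount angle and heights are example numbers, not from any particular robot):

```java
// The "simple trig" in question: the lens-to-target height difference is fixed,
// so distance follows from the vertical angle to the target.
public class LimelightDistance {
  private static final double CAMERA_HEIGHT_IN = 12.0;  // lens height off the floor (example)
  private static final double TARGET_HEIGHT_IN = 28.5;  // center of the vision target (example)
  private static final double MOUNT_ANGLE_DEG = 20.0;   // camera tilt above horizontal (example)

  public static double distanceInches(double tyDegrees) {
    double verticalAngle = Math.toRadians(MOUNT_ANGLE_DEG + tyDegrees);
    return (TARGET_HEIGHT_IN - CAMERA_HEIGHT_IN) / Math.tan(verticalAngle);
  }
}
```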
Our robot is built with mecanum wheels, so we can strafe to align.
We will be using a camera pointed downwards just inside the front of our robot to detect the alignment tape. Once a light on our driver station shows green, the driver will be able to hold a button to start the auto-alignment routine. This will rotate the robot left/right until the tape is perfectly vertical and then strafe left/right until the tape is centered horizontally.
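Here's a rough sketch of that routine; the tape angle/offset inputs are assumed outputs of our detection pipeline, and the gains/tolerances are placeholders:

```java
// Sketch: rotate until the tape reads vertical in the image, then strafe until
// it is centered. `tapeAngle` (degrees off vertical) and `tapeOffset` (pixels
// from image center) are assumed outputs of the tape-detection pipeline.
import edu.wpi.first.wpilibj.drive.MecanumDrive;

public class TapeAlign {
  private static final double kAngleP = 0.03;
  private static final double kStrafeP = 0.01;
  private static final double ANGLE_TOL_DEG = 1.0;
  private static final double OFFSET_TOL_PX = 5.0;

  /** Call periodically; returns true once the tape is vertical and centered. */
  public static boolean step(MecanumDrive drive, double tapeAngle, double tapeOffset) {
    if (Math.abs(tapeAngle) > ANGLE_TOL_DEG) {
      // Rotate in place until the tape is vertical in the image.
      drive.driveCartesian(0.0, 0.0, kAngleP * tapeAngle);
      return false;
    }
    if (Math.abs(tapeOffset) > OFFSET_TOL_PX) {
      // Then strafe until the tape is centered horizontally.
      drive.driveCartesian(kStrafeP * tapeOffset, 0.0, 0.0);
      return false;
    }
    drive.driveCartesian(0.0, 0.0, 0.0);
    return true;
  }
}
```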
I don’t have the spreadsheet with me, but for the samples we took a linear model had a good fit (R^2 ~= 0.998). We haven’t messed too much with the raw pixel measurements yet, so I can’t speak to that.
While the adjustment function worked fine for correcting our calculated distance, we’ve only checked it with the robot dead-on to the target. I’m a little concerned that as the robot heading changes and the target skews, the adjustment model may falter. It seems that using OpenCV’s solvePnP function may be the best approach for determining an accurate distance/heading.
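For example, here's a sketch of what that call could look like (the target corner coordinates and camera intrinsics are placeholders; you'd use the real target geometry from the game manual and your own calibration numbers):

```java
// Sketch of the solvePnP approach: known target corner locations (object points)
// plus their detected pixel locations (image points) give a full translation and
// rotation of the target relative to the camera.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class TargetPnP {
  public static Mat[] solve(MatOfPoint2f detectedCorners) {
    // Target corners in target-relative coordinates (inches), ordered the same
    // way as the detected image corners. Placeholder geometry.
    MatOfPoint3f objectPoints = new MatOfPoint3f(
        new Point3(-7.3, 0.0, 0.0),
        new Point3(-5.4, 5.3, 0.0),
        new Point3(5.4, 5.3, 0.0),
        new Point3(7.3, 0.0, 0.0));

    // Camera intrinsics from calibration (fx, fy, cx, cy below are example values).
    Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
    cameraMatrix.put(0, 0, 700.0); cameraMatrix.put(0, 2, 320.0);
    cameraMatrix.put(1, 1, 700.0); cameraMatrix.put(1, 2, 240.0);
    MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Calib3d.solvePnP(objectPoints, detectedCorners, cameraMatrix, distCoeffs, rvec, tvec);
    // tvec holds the target position relative to the camera; rvec the rotation
    // (convert with Calib3d.Rodrigues if you want a rotation matrix).
    return new Mat[] {rvec, tvec};
  }
}
```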
We are considering using at least the aspect ratio of the detected bounding rectangle to determine a rough angle to the target. Since we know the dimensions of the tape, we can estimate the ratio of height to width, and as you move off center that ratio will change (visually, the target will appear thinner as you move off center). We're not sure yet how well it will work, and we still need to determine somehow whether we are angled left or right of center in order to move efficiently.
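A sketch of what I mean (the straight-on width/height ratio is a placeholder for the real tape dimensions, and this only gives an unsigned angle, so left vs. right still has to come from somewhere else):

```java
// Sketch of the ratio idea: the target's apparent width shrinks roughly with the
// cosine of the viewing angle, so comparing the observed width/height ratio to
// the known straight-on ratio gives a rough, unsigned skew estimate.
public class RatioSkewEstimate {
  private static final double STRAIGHT_ON_RATIO = 2.0; // width / height when dead-on (placeholder)

  /** Returns an unsigned skew estimate in degrees. */
  public static double skewDegrees(double observedWidthPx, double observedHeightPx) {
    double observedRatio = observedWidthPx / observedHeightPx;
    double cos = Math.min(1.0, observedRatio / STRAIGHT_ON_RATIO);
    return Math.toDegrees(Math.acos(cos));
  }
}
```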