How to use vision to line up with the target, not just angle towards it?

Please excuse any ignorance in this post, my team is new to all of this! And thanks in advance for any advice!

So far my team has been able to use vision to find the center pixel of the target (just averaging the leftmost and rightmost pixels), then convert that pixel into a heading that the robot turns to with a closed-loop PID on the gyro.
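
The pixel-to-angle step above can be sketched with the pinhole-camera model. This is a minimal example, not your team's actual code; the image width and horizontal field of view are assumed camera parameters, so substitute your own.

```python
import math

def pixel_to_angle(target_px, image_width=320, horizontal_fov_deg=54.0):
    """Convert a target's center-pixel x coordinate into a yaw error in degrees.

    Derives the focal length in pixels from the assumed horizontal FOV, then
    takes the arctangent of the pixel offset over that focal length. The
    result can be added to the current gyro heading to get a PID setpoint.
    """
    center = image_width / 2.0
    focal_px = center / math.tan(math.radians(horizontal_fov_deg / 2.0))
    return math.degrees(math.atan2(target_px - center, focal_px))
```

Using `atan2` instead of a linear pixels-to-degrees scale matters near the edges of the image, where the linear approximation overestimates the angle.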

The problem?

Say the robot is 5 feet to the left of the target and not pointed at the vision tape. With the method above, the robot can angle towards the target and drive to it. The issue is that when it arrives, it is still rotated at whatever heading it needed to drive there. We want the robot to end up directly in front of the target, as if it had driven in a straight line from a spot squarely in front of it, with the front bumper parallel to the target face (hatch or cargo drop-off).

So how could we get our robot aligned perfectly in front of the target?


There are a lot of ideas in this thread:

One thing I don’t think was mentioned there: as long as you can get pretty close, you could use the line on the ground and one or two sensors to square yourself up perpendicular to the target.

Once you know your robot’s position relative to the target (either XY or magnitude + direction), a good portion of the work is done. OpenCV’s solvePnP() is a good bet, but it’s really important to make sure your pipeline for obtaining contours is robust; otherwise it’s garbage in, garbage out.
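
Whatever produces the measurement (solvePnP or otherwise), the useful output is a pose in a target-centred frame. Here is a pure-math sketch of that conversion from distance, camera yaw, and target skew; the frame conventions and sign choices are assumptions for illustration:

```python
import math

def robot_pose_in_target_frame(distance, yaw_deg, skew_deg):
    """Convert (distance to target, yaw of target in camera, target skew)
    into (x, y, heading) in a frame with the target at the origin and +x
    pointing straight out of the target face.

    skew_deg is taken as the robot's bearing off the target normal (how
    rotated the target face appears); yaw_deg is the angle from the robot's
    forward axis to the target. Both conventions are assumed here.
    """
    bearing = math.radians(skew_deg)
    x = distance * math.cos(bearing)   # how far out from the target face
    y = distance * math.sin(bearing)   # how far off to the side
    # If the robot pointed dead at the target, its heading in this frame
    # would be 180 + skew; the camera yaw is the remaining offset.
    heading = 180.0 + skew_deg - yaw_deg
    return x, y, heading
```

With the pose in this frame, "directly in front and parallel" is simply y = 0 and heading = 180, which makes the rest of the alignment logic easy to state.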

There are a few directions you can go from here. The simplest is to turn and drive straight at a heading that puts you at a point a couple of feet in front of the target, then turn towards the target, then drive straight again. Another option, if you use some sort of real-time path-following algorithm, is to generate a path on the fly that ends in front of the target, and then follow it.
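
The turn / drive / turn option reduces to a little geometry once you have the robot's position in a target-centred frame (target at the origin, +x out of the target face). A sketch, with the function name and the standoff distance as assumptions:

```python
import math

def plan_two_turn_approach(robot_x, robot_y, standoff=2.0):
    """Plan the simple turn / drive / turn sequence.

    Returns (heading_1, distance_1, heading_2, distance_2): turn to
    heading_1 and drive distance_1 to a waypoint `standoff` feet straight
    out from the target face, then turn to heading_2 (facing the target)
    and drive the final distance_2. Angles are degrees in the target frame.
    """
    wx, wy = standoff, 0.0   # waypoint directly in front of the target
    heading_1 = math.degrees(math.atan2(wy - robot_y, wx - robot_x))
    distance_1 = math.hypot(wx - robot_x, wy - robot_y)
    heading_2 = 180.0        # face the target, i.e. drive in the -x direction
    distance_2 = standoff
    return heading_1, distance_1, heading_2, distance_2
```

Because the second leg starts from a point on the target's normal, the robot arrives with its bumper parallel to the target face, which is exactly the behavior the original post asks for.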

This second part is very dependent on how your team handles drivebase motion in autonomous, and I would suggest first integrating vision with whatever drive routines you already have.

Yet another option, although this one will likely be difficult to implement properly: you know the angles of the scoring and pickup locations on the field relative to your starting angle. There is straight towards the driver station, straight away from it, directly right, directly left, and the angles of the rocket faces. Your code could figure out which of those angles the target the camera sees corresponds to (likely using the skew of the target to estimate the change in angle). From that information you could tell your robot, either through path planning or some sort of sideways movement, what heading it needs to be at in order to pick or place parallel to the goal.
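
The "which field angle is this target" step is a nearest-match over a small known list. A sketch, where the default angle list is a placeholder, not measured field data; fill in the real cargo-ship and rocket angles yourself:

```python
def nearest_field_angle(measured_deg,
                        known_angles=(0.0, 90.0, 180.0, -90.0)):
    """Snap a noisy field-relative estimate of the target's facing angle
    (e.g. gyro heading plus target skew) to the nearest known scoring or
    pickup angle. Handles wrap-around, so 179 and -179 are 2 degrees apart.
    """
    def angular_distance(a, b):
        d = (a - b) % 360.0
        return min(d, 360.0 - d)
    return min(known_angles, key=lambda a: angular_distance(measured_deg, a))
```

Snapping to known angles turns a noisy continuous estimate into a discrete, reliable answer, which is what makes this approach robust despite the extra complexity.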

I think this is how we’re gonna go about it. It’s definitely not the simplest way, but seems pretty robust.