Using PhotonVision and 3D mode, I’ve gotten the robot to align to an AprilTag target on demand. The camera I’m using for PhotonVision is offset about 5 inches laterally from the center of the robot, and I’ve accounted for this by adding a constant camera-to-robot offset. Everything works well, except that the robot consistently ends up 6-7 degrees clockwise of “square” to the target. I feel like I’m missing something obvious…
Code is here:
Here’s a video demonstrating this. The robot is intended to end up touching the red tape, square to it:
At a high level, what I’m trying to do is run a “crude” pre-planned path to get near the correct point, and then use data from an AprilTag (specifically the camera-to-target transform) to align the robot very close to that position. The correct point is the red tape (as in, the edge of the bumpers just barely “touching” the tape).
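For concreteness, here’s how I’m thinking about the offset math. This is a plain-Java sketch with made-up names, not my actual robot code, using WPILib’s convention of +X forward, +Y left, CCW-positive angles:

```java
// Plain-Java sketch (illustrative names, not the actual robot code) of mapping
// a camera-frame measurement into the robot frame. Convention: +X forward,
// +Y left, CCW-positive angles. Units are inches.
public class OffsetDemo {
    // (px, py): target seen in the camera frame.
    // (camX, camY, camYaw): the camera's pose in the robot frame.
    static double[] cameraToRobot(double px, double py,
                                  double camX, double camY, double camYaw) {
        double cos = Math.cos(camYaw), sin = Math.sin(camYaw);
        return new double[] {
            camX + cos * px - sin * py,
            camY + sin * px + cos * py
        };
    }

    public static void main(String[] args) {
        // Camera 5 in to the left of robot center (+Y), assumed zero yaw:
        double[] p = cameraToRobot(40.0, 0.0, 0.0, 5.0, 0.0);
        System.out.printf("target in robot frame: (%.1f, %.1f)%n", p[0], p[1]);
        // prints: target in robot frame: (40.0, 5.0)
    }
}
```

Note that a translation-only constant shifts X/Y but can never rotate the measurement, so any heading error has to come from the transform’s rotation term or from the alignment target itself.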
Gotcha. I recommend using PhotonPoseEstimator (which takes in your robot-to-camera transform, accounting for your offset), following how our AprilTag code example is set up to use it. Then feed the pose you get from PhotonPoseEstimator into a drivetrain pose estimator, and use the pose from your drivetrain pose estimator to generate a trajectory from your current position to your target position.
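To illustrate what the drivetrain pose estimator does with that vision pose, here’s a toy, library-free sketch. In real code this is WPILib’s `addVisionMeasurement()` on a drivetrain pose estimator; the gain value below is made up for the demo:

```java
// Toy, library-free sketch of how a pose estimator fuses a vision pose into
// odometry: nudge the odometry estimate toward the vision measurement by a
// trust gain instead of snapping to it. The 0.3 gain is illustrative only.
public class VisionFusionDemo {
    static double fuse(double odometry, double vision, double gain) {
        return odometry + gain * (vision - odometry);
    }

    public static void main(String[] args) {
        double odoX = 2.00;  // meters, from wheel odometry
        double visX = 2.20;  // meters, e.g. from PhotonPoseEstimator
        System.out.printf("fused x = %.2f m%n", fuse(odoX, visX, 0.3));
        // prints: fused x = 2.06 m
    }
}
```

The point is that vision measurements correct odometry drift continuously rather than replacing the pose outright, which is why feeding the vision pose into the estimator (instead of driving off the raw camera-to-target transform) tends to be more robust.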
If you want, I could try to debug the actual issue you have (it could be quite a few things), but the above is the recommended and, in my opinion, better way to do it.
I’ve never been able to get a pose estimator to work. Whenever I input a non-odometry pose, it never does anything… I’d honestly prefer to figure out what’s wrong with my current implementation (or what value I’m missing) that’s causing this issue.
For help debugging:
I’ve tried changing the rotation in the CAM_TO_ROBOT constant, which has no effect or only worsens it.
It’s entirely possible that the path generation is the issue. I doubt it is given that it worked without an offset, but here’s the relevant code anyway.
Changing the theta controller’s PID values doesn’t seem to improve it at all.
So it looks like the camera is mounted 5 in forward of the center of the robot, right? That is, if I’m looking down the robot’s +X axis (which is forward), the camera is 5 in forward of the center, just based on this screenshot. If so, your camera-to-robot transform is actually -5 in in X and 0 in in Y, no?
Those cameras are not the ones I’m using. It’s taped and bubble wrapped to a pole at the top. Should have clarified, sorry. That camera is 5 inches to the left of the center, making it +5 in the Y direction. Here’s a picture for a better view. The center of the robot is approximately at the big orange wheel.
I believe it is correct, since it’s aligning correctly on the X and Y axes. If I remove the offset, the camera points directly at the AprilTag, whereas with the offset, the camera points 5 inches left of the AprilTag. And if you add 5 inches to the X value of the desired pose, that’s exactly what it should do.
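One sanity check on the magnitude, purely as a hypothesis (I haven’t confirmed this in the code): if the alignment ends up squaring the *camera* to the tag rather than the robot’s centerline, a 5 in lateral offset leaves a residual heading of atan(offset / distance), which lands suspiciously close to 6-7 degrees at short range:

```java
// Back-of-the-envelope check (hypothesis only, not from the posted code):
// if the robot squares the camera to the tag instead of its own centerline,
// the residual heading error is atan(lateral offset / distance to tag).
public class YawBiasDemo {
    static double residualDeg(double lateralOffsetIn, double distanceIn) {
        return Math.toDegrees(Math.atan2(lateralOffsetIn, distanceIn));
    }

    public static void main(String[] args) {
        // 5 in lateral offset at roughly 40 in from the tag:
        System.out.printf("%.1f deg%n", residualDeg(5.0, 40.0));
        // prints: 7.1 deg
    }
}
```

The “roughly 40 in” distance is a guess on my part, but the error shrinking or growing with distance to the tag would be a quick way to test whether this is what’s happening.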