I am from team 3603 and I have some questions about having an offset camera for vision processing. We are programming in Java, with a Kangaroo running GRIP for the actual processing, and we use the iterative template in Eclipse.
Our goal this year is to place a gear in autonomous. I had the vision processing code all written out to let the robot adjust itself as it comes into the gear hook. However, I realized that the robot wouldn't be centering itself on the hook; it would be centering the camera on the hook.
Our camera is off-center by 4 inches, which would make us miss our target. What math would I need to use in order to account for the camera offset? I looked at the FRC Control System documentation, but I wasn't able to understand what was going on with it.
If you’re centering the camera, I’d guess that you’re aligning the camera’s center of view with the target, i.e. trying to get the gear’s retroreflective target to be aligned with a center pixel or pixels. If your camera is offset, simply physically align your robot in the correct position, and then get the location of the vision target in that image. It will likely be x pixels to the right or left of center, and then you can simply try to align the target with that point in the future, instead of aligning it with the center of the field of view.
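In code, that approach boils down to steering toward a calibrated setpoint pixel instead of the image center. Here is a minimal sketch in Java; the names (SETPOINT_X, kP) and the example values are placeholders, not from the original post, and the gain would need tuning on the actual robot:

```java
// Sketch: align the target to a calibrated setpoint pixel instead of the
// image center. SETPOINT_X is measured once: the target's x pixel when the
// robot (not the camera) is physically lined up on the gear hook.
public class OffsetAlign {
    static final double SETPOINT_X = 213.0; // example for a 320px-wide image
    static final double kP = 0.005;         // proportional gain, tune on the robot

    // Returns a turn command clamped to [-1, 1]; positive turns one way,
    // negative the other (sign convention depends on your drivetrain code).
    static double turnCommand(double targetCenterX) {
        double errorPixels = targetCenterX - SETPOINT_X;
        double turn = kP * errorPixels;
        return Math.max(-1.0, Math.min(1.0, turn));
    }
}
```

The key point is that the "zero error" pixel is measured empirically with the robot placed correctly, so the camera offset is baked into the setpoint.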
I don't know what your code looks like, but it's worth mentioning that for placing gears with an off-center camera, the spring will cut one of the targets in half, based on my testing.
Simply determine the horizontal field of view of your camera. Then, when detecting the gear lift, calculate the distance to the target, and instead of aiming to put the target's center at the center of the image, aim for

setpoint_image_x = (image_width / 2) * (1 + camera_physical_offset / (target_physical_distance * tan_half_horizontal_field_of_view))

where tan_half_horizontal_field_of_view is the tangent of half of your camera's horizontal field of view, camera_physical_offset is 4, target_physical_distance is the straight-line distance* from the camera to the target center, and setpoint_image_x is the x coordinate, in image coordinates, that the target center will sit at when your robot is lined up.
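Using the variable names defined above, this setpoint calculation is a one-liner; the sketch below is an assumption about units and sign (offset in the same units as distance, positive toward the side the target shifts in the image):

```java
// Sketch of the offset setpoint calculation described above.
// Assumes a pinhole camera model and the camera axis parallel to the
// robot's forward direction; all names are placeholders.
public class OffsetSetpoint {
    static double setpointImageX(double imageWidth,
                                 double cameraPhysicalOffset,   // e.g. 4 inches
                                 double targetPhysicalDistance, // camera to target, inches
                                 double tanHalfHorizontalFov) {
        // Half the image width covers targetPhysicalDistance * tanHalfHorizontalFov
        // inches of the scene, so the pixel offset scales proportionally.
        return (imageWidth / 2.0)
             * (1.0 + cameraPhysicalOffset
                      / (targetPhysicalDistance * tanHalfHorizontalFov));
    }
}
```

With a zero offset this reduces to the image center, which is a quick sanity check when testing.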
NB: calculating distance to the target is not trivial if your camera is also pitched up! So I'd recommend you mount your camera vertical (no pitch), and you'll be able to use the arctangent trick on ScreenSteps.
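With a level camera, that arctangent trick reduces to dividing the known height difference by the tangent of the vertical angle to the target. A minimal sketch, with placeholder names and the vertical angle assumed already derived from the target's y pixel:

```java
// Sketch of the level-camera distance estimate: with no camera pitch,
// distance = (target height - camera height) / tan(vertical angle to target).
public class TargetDistance {
    // heightDelta: target center height minus camera lens height, in the
    // same units as the returned distance (e.g. inches).
    // verticalAngleRad: angle from the camera's horizontal axis up to the
    // target, in radians.
    static double distanceTo(double heightDelta, double verticalAngleRad) {
        return heightDelta / Math.tan(verticalAngleRad);
    }
}
```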
Assuming you have the horizontal angle from the camera to the target, the distance from the camera to the target, and the horizontal offset, you can do something like this:
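A sketch of that idea in Java: convert the camera's measurement into a target position relative to the camera, shift it sideways by the mounting offset to get the position relative to the robot's centerline, and recompute the aim angle. All names and sign conventions here are assumptions (angles positive to the right, offset positive if the camera sits right of center):

```java
// Sketch: correct a camera-relative target bearing for the camera's
// sideways mounting offset.
public class AimAdjust {
    // cameraAngleRad: horizontal angle from the camera axis to the target.
    // cameraDistance: straight-line distance from camera to target.
    // cameraOffset:   how far the camera sits right of robot center
    //                 (e.g. 4 inches; negative if mounted to the left).
    static double robotAimAngleRad(double cameraAngleRad,
                                   double cameraDistance,
                                   double cameraOffset) {
        // Target position relative to the camera: lateral (x) and forward (y).
        double x = cameraDistance * Math.sin(cameraAngleRad);
        double y = cameraDistance * Math.cos(cameraAngleRad);
        // Shift to robot-center coordinates, then recompute the bearing.
        return Math.atan2(x + cameraOffset, y);
    }
}
```

Feeding this corrected angle (instead of the raw camera angle) into your turning loop should make the robot, rather than the camera, center itself on the hook.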