Hey guys, my understanding is that object detection can put a bounding box around detected objects, but is there a way to localize that detected object in the overall map?
if you know the actual size of the object and its orientation, and the camera intrinsics, then you can calculate the object's pose relative to the camera. if you know the camera's pose relative to the robot, and the robot's pose relative to the field, then you can calculate the object's pose on the field. does that help?
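for example, that last chain is just composing transforms, which WPILib's geometry classes handle directly. a minimal sketch, assuming you already have the robot's field pose and a camera-to-object transform from your vision pipeline (the mounting numbers here are made-up placeholders; measure your own):

```java
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class ObjectLocalization {
    // Example camera mounting: 0.30 m forward of robot center, 0.50 m up,
    // pitched down 20 degrees. Placeholder values; measure on your robot.
    private static final Transform3d ROBOT_TO_CAMERA = new Transform3d(
        new Translation3d(0.30, 0.0, 0.50),
        new Rotation3d(0.0, Math.toRadians(20.0), 0.0));

    /**
     * Chains robot -> camera -> object to get the object's pose on the
     * field. cameraToObject comes from your vision pipeline (e.g. a
     * distance/angle estimate or solvePnP).
     */
    public static Pose3d objectFieldPose(Pose3d robotFieldPose,
                                         Transform3d cameraToObject) {
        return robotFieldPose
            .transformBy(ROBOT_TO_CAMERA)   // camera pose on the field
            .transformBy(cameraToObject);   // object pose on the field
    }
}
```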
Wait, not really. Say the note is lying flat on the ground and I detect it, putting a bounding box around it. Am I treating the bounding box as the actual size of the object? Because I don't understand how the actual size of the object helps if the camera can't look at the object from a bird's-eye view. And by orientation, do you mean lying flat? What specifically is useful from the camera intrinsics? I know our team is using a Limelight. Sorry if these are dumb questions; I am a beginner in FRC.
You can either use HSV contour filtering or a machine learning model.
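If you go the HSV route, the pipeline is basically: convert to HSV, threshold on the game piece's color, find contours, and take bounding boxes of the big ones. A rough sketch using OpenCV's Java bindings (which ship with WPILib); the HSV bounds and area cutoff are guesses you'd tune on real footage:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class NoteDetector {
    // Rough HSV range for an orange note; placeholder values, tune them.
    private static final Scalar LOWER_ORANGE = new Scalar(5, 100, 100);
    private static final Scalar UPPER_ORANGE = new Scalar(20, 255, 255);

    /** Returns bounding boxes of orange blobs in a BGR frame. */
    public static List<Rect> detect(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        Core.inRange(hsv, LOWER_ORANGE, UPPER_ORANGE, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> boxes = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (box.area() > 200) {  // drop tiny noise blobs
                boxes.add(box);
            }
        }
        return boxes;
    }
}
```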
I think the Limelight people released a machine learning model last year, so they probably will this year too.
You may want to check out this thread: Limelight Game Piece Detection For FRC 2024
a good starting point for the transformations is something like Geometry of Image Formation | LearnOpenCV. WPILib has a number of 3D geometry classes that should be useful.
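and to the earlier question about the note lying flat: that's exactly what makes this workable without a bird's-eye view. the intrinsics turn a pixel in the bounding box into angles out of the camera, and since you know the camera's height and downward pitch, you can intersect that ray with the floor to get a distance. a minimal sketch of that trick; the mounting numbers are placeholders, and the tx/ty names are just the usual "horizontal/vertical offset from image center in degrees" that an intrinsics-calibrated pipeline like the Limelight reports, not a specific API:

```java
public class GroundPlaneRange {
    private static final double CAMERA_HEIGHT_METERS = 0.50;   // measure on your robot
    private static final double CAMERA_PITCH_DOWN_DEG = 20.0;  // measure on your robot

    /**
     * Forward distance along the floor from the camera to an object sitting
     * on the ground. tyDegrees is the target's vertical offset from the
     * image center, positive up, so the ray to the target points
     * (pitch - ty) degrees below horizontal.
     */
    public static double forwardDistance(double tyDegrees) {
        double angleBelowHorizontal =
            Math.toRadians(CAMERA_PITCH_DOWN_DEG - tyDegrees);
        return CAMERA_HEIGHT_METERS / Math.tan(angleBelowHorizontal);
    }

    /** Rough sideways offset, from the horizontal angle (positive right). */
    public static double lateralOffset(double txDegrees, double forwardDistance) {
        return forwardDistance * Math.tan(Math.toRadians(txDegrees));
    }
}
```

that forward/lateral pair is the camera-to-object translation you'd feed into the transform chain above.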
Thanks guys! Appreciate it