Does anyone know a reliable method for range-finding other robots to calculate their position on the field without using LiDAR? Cones can be found with keypoint detection and PnP, same with cubes, and then localized to the field, but robots are varying sizes, which removes PnP as an option. So I was wondering if anyone has found an effective way to calculate their location from just a bounding box, for collision avoidance during automated task completion and navigation.
You could have alliance members put 36h11 tags on their robots. That would be a simple way of finding 2/5 of the robots. Though only a part solution, it would be something.
What do all robots have in common? Thinking about that might help provide a solution.
Bumpers (with team number) and RSL.
@Autumn-Ou you can create a vision processing program to look for either of these, the same way you’d do with cubes and cones.
I’ve thought about using bumpers, but they’re inconsistent sizes, which means I would need to tie the detection to a known size for each robot. That would require doing re-ID to remember which bot is which during a match, along with taking a few quick pre-match measurements.
Most teams aren’t gonna put a 6x6 inch tag on multiple sides of their robot, I think.
Here is how I would do it. (We may or may not do this; we have not yet decided how beneficial it would be.)
For the actual detection of the robots, I would use a YOLO ML model to detect red and blue bumpers.
Since you know the height and angle of your camera, and you can assume that most robots will be on the floor, project a vector from your camera toward the center of the detected bumper; the location where this vector intersects with the ground is the approximate location of the robot.
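The ground-projection idea above can be sketched with simple trigonometry. This is a minimal pinhole-style approximation (no lens distortion), and the function name, FOV-based angle mapping, and parameter values are mine, not anything from a specific team's code; the bumper center also sits a few inches above the floor, so treat the result as approximate.

```python
import math

def robot_ground_position(cx_px, cy_px, img_w, img_h,
                          hfov_deg, vfov_deg,
                          cam_height_m, cam_pitch_deg):
    """Project a ray through the bounding-box center and intersect it
    with the floor plane. cam_pitch_deg is how far the camera is tilted
    downward from horizontal. Returns (forward_m, sideways_m) relative
    to the camera, or None if the ray never reaches the floor."""
    # Angular offsets from the optical axis (linear FOV approximation).
    yaw = math.radians((cx_px - img_w / 2) / (img_w / 2) * hfov_deg / 2)
    pitch = math.radians((cy_px - img_h / 2) / (img_h / 2) * vfov_deg / 2)
    # Total downward angle of the ray; pixels below center add to it.
    down = math.radians(cam_pitch_deg) + pitch
    if down <= 0:
        return None  # ray points at or above the horizon
    forward = cam_height_m / math.tan(down)  # distance along the floor
    sideways = forward * math.tan(yaw)       # left/right offset
    return forward, sideways
```

A camera 1 m up, pitched 45 degrees down, seeing a bumper dead center of the frame would report the robot roughly 1 m ahead.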
You can also use a stereo camera to deduce depth. We had success with the OAK-D camera (a stereo-depth camera with a processor for running machine-learning vision processing) over the off-season.
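For the stereo route, the standard relation is depth = focal length x baseline / disparity. The sketch below uses that textbook formula with an assumed ~7.5 cm baseline as an example value; check your camera's actual baseline and calibrated focal length before relying on it.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m=0.075):
    """Stereo depth from disparity: Z = f * B / d.
    baseline_m = 0.075 is an example value, not a measured spec."""
    if disparity_px <= 0:
        return None  # no valid match between the two views
    return focal_length_px * baseline_m / disparity_px
```

In practice the OAK-D computes this on-device and hands you a depth map, so you would just sample the depth at the bounding-box center rather than doing the math yourself.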
Bumper height would be the only consistent thing on any given robot, and even then it’s going to be +/- half an inch.
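Even with that half-inch slop, a roughly known bumper height gives a usable monocular range via the pinhole relation distance = real height x focal length / pixel height. This is a sketch assuming a nominal 5-inch (~0.127 m) bumper height, which is an example value; it also propagates the half-inch tolerance into a distance band so you can see how much the variation actually matters.

```python
def distance_from_bumper_height(bbox_height_px, focal_length_px,
                                bumper_height_m=0.127,
                                tolerance_m=0.0127):
    """Pinhole range estimate from the bounding-box pixel height.
    bumper_height_m is a nominal assumption (~5 in); tolerance_m is the
    +/- half inch of variation mentioned above. Returns (estimate, lo, hi)
    in meters."""
    d = bumper_height_m * focal_length_px / bbox_height_px
    lo = (bumper_height_m - tolerance_m) * focal_length_px / bbox_height_px
    hi = (bumper_height_m + tolerance_m) * focal_length_px / bbox_height_px
    return d, lo, hi
```

At a couple of meters the half-inch uncertainty only moves the estimate by ~10%, which is likely fine for collision avoidance even if it is too coarse for precise localization.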