It’s the off-season, and we’re trying to get things done that didn’t happen during the regular season. High on my list is improving autonomous programming and computer vision.
We’re using OpenCV on a JeVois camera. My main task right now is to use the vision targets to guide the robot to the right spot to place hatch panels. Fairly straightforward. Lots of teams have done it, and yet we didn’t get it done this season.
There are a few ways to approach this task, but here’s the one I chose:
Use an LED ring and low sensitivity settings on the camera to make finding the targets easy.
Within the image, find the two largest regions.
Draw an oriented bounding rectangle around each region. Each bounding rectangle is assumed to mark the location of one piece of tape.
Find the corners of the rectangles in the images.
Associate the corners in the 2-D image with known objects in 3-D space, i.e. the known locations of the vision targets. I’m only using the highest two corners of each of the rectangles, although I could use more.
Use SolvePnP to determine camera position and orientation with respect to the known point locations.
I set the origin of the world coordinates to be centered between the two vision targets, at the level of the inner corners at the top of the rectangle.
My camera is using 320x240 resolution. I placed the camera 36 inches away from the targets, as close to facing straight ahead as I could. (The camera is on the side of the robot, and it’s difficult to place with extreme precision. I got what I think is close enough.)
So I take some pictures and find the points of interest in the image. The coordinates of the found points are:
Using the known coordinates of the tops of the tape, and the camera matrix and distortion coefficients provided with the JeVois camera, SolvePnP tells me I am 35.85 inches away. Not bad.
Unfortunately, not every frame comes out the same. The next frame had these coordinates:
Those coordinates are nearly identical. Two of the corners have a one-pixel difference. With that one-pixel difference on two out of the four points, SolvePnP says that the camera is 16.86 inches away, and all the angles are totally wrong.
Is there some trick to using SolvePnP? I know other teams have used it. I’ve used it myself without problems on robots outside of FIRST, and I’ve never noticed this extreme sensitivity to variation. (My experience with it is pretty limited. Just a few sample programs on a Raspberry Pi robot.) I have heard that there is some instability in the algorithm, and that SolvePnPRansac might do better. I’m planning on trying that next, but if anyone has some relevant experience to lend me on the subject, I would be grateful. That much variation due to a one-pixel difference in location would render it pretty useless on a moving robot.