1736 2019 Software Release

Robot Casserole is proud to present the code we have cooked up for Destination: Deep Space!

Ask away with any questions.


Did you ever encounter any issues with your solvePnP method of vision alignment? We didn’t implement solvePnP, but I witnessed many situations this year that would have made solvePnP very difficult or produced garbage data (e.g., when a hatch panel covers the bottom edge of the target on the rocket). Additionally, are there any “gotchas” when it comes to implementing solvePnP? I’m very interested in incorporating some version of it into our path-following program, but I’m still concerned that it will add too much complexity without a guarantee of reliable data.


This year was the first year we attempted to implement solvePnP, and admittedly we did not actively use it during the season due to a variety of problems we had. I will say that running on a JeVois at 1280x1024 resolution, the measurements we got were pretty good (usually within an inch of the target). We decided against using the inner bottommost corners to avoid the hatch panel covering necessary points. As for the method itself, I’d highly recommend looking at the LigerBots paper on the subject; it provides a nice robot-oriented perspective. The majority of our problems came from time constraints in implementing the alignment itself, not the solvePnP method. If you do implement solvePnP, my one tip would be that camera coordinates are z-y-x, not x-y-z (it took me an embarrassingly long time to figure that out).


At first I thought you guys implemented solvePnP from scratch, and I was about to call you insane for even attempting that. I am glad you used OpenCV’s implementation of it, for your sanity’s sake.

@hamac2003 If you are worried about obstruction of parts of a target, such as a hatch panel covering a corner, you could get around that by applying OpenCV’s minimum bounding rectangle function (`cv2.minAreaRect`), whose corners would (approximately) yield the 4 true corners, including the obstructed one. Classical computer vision is all about using a toolbox of techniques to get around environment-specific problems. Sometimes it requires a little creative thinking.


This is a good idea, but using this method would make the tracking algorithm susceptible to grouping false positives (e.g., blobs of saturated light from shiny surfaces) in with the retroreflective tape, thus giving you invalid vertices. I definitely agree that this sounds like a workable solution; I just think it would become much more difficult to ensure reliable performance once you try to run it in live match conditions (such is life with all vision processing algorithms :wink:)

@Treecko120 will likely have more details on false-positive rejection once school gets out :slight_smile: - but you can pre-filter the detected blobs down to ones that are reasonably large, at the same height, with reasonable spacing, reasonable tilt…
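That pre-filtering pass can be sketched in plain Python. The blob records and all the thresholds below are hypothetical illustrations of the idea (size, matching height, sane spacing, opposite tilt), not tuned values from any team's code:

```python
# Hypothetical blob records as they might come out of a contour pass:
# each has a centroid, a bounding-box size, and a fitted tilt angle.
blobs = [
    {"cx": 300, "cy": 240, "w": 18, "h": 42, "angle": -14.5},  # left tape
    {"cx": 380, "cy": 242, "w": 17, "h": 41, "angle": 14.0},   # right tape
    {"cx": 520, "cy": 60,  "w": 55, "h": 8,  "angle": 1.0},    # arena light
    {"cx": 90,  "cy": 400, "w": 4,  "h": 4,  "angle": 0.0},    # tiny glint
]

def plausible_tape(b):
    """Keep only blobs that look like a strip of vision tape: reasonably
    large, taller than wide, tilted roughly like the 2019 targets.
    All thresholds here are illustrative."""
    area = b["w"] * b["h"]
    return (area > 200
            and b["h"] > b["w"]
            and 5 < abs(b["angle"]) < 25)

candidates = [b for b in blobs if plausible_tape(b)]

# Pair up candidates that sit at about the same height with a sane gap,
# tilted toward each other (the 2019 tape strips lean in at the top).
pairs = [(a, b) for a in candidates for b in candidates
         if a["cx"] < b["cx"]
         and abs(a["cy"] - b["cy"]) < 10
         and 40 < b["cx"] - a["cx"] < 150
         and a["angle"] < 0 < b["angle"]]
print(len(pairs))
```

Here the arena light fails the taller-than-wide test and the glint fails the size test, so only the one legitimate pair survives.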

True, but what I’m talking about is something like the following:

(I threw a mock glare on the left target)

In this example, you would be hard-pressed to separate the rectangle on the left from the saturated reflection merged with it. It has around the same width, height, and angle as a desired target. You could use OpenCV’s approxPolyDP function, but I’d be worried about losing accurate width/height/size measurements (it even has “approx” in the name :grinning:)

So say you apply the minimum bounding rectangle solution suggested by AlphaFRC to the target on the left. You would get a rectangle width that is larger than the actual target, thus skewing the results of your solvePnP algorithm.

Again, I’m not trying to throw cold water on every suggested solution, I’m just reminded of all the problems we had trying to tune our own vision algorithm, and I’m interested in figuring out how to utilize solvePnP efficiently and robustly.


It kind of depends on how common the problem is. If it occurs rarely and you just want to know which frames to throw out, you could compare the convex hull’s area to the bounding rect’s area and discard any frame where there is a significant difference. If it occurs commonly and you need to use those frames, you could calculate the area of the rectangle using just two of the corners and keep the corner pair that produces the rectangle of least area. We didn’t encounter this often enough to implement either of these, so take them with a grain of salt.


I think I understand too. To be clear - this is a question of “identifying image features robustly”, not solvePnP itself.

In my mind, that question simply comes down to, in the HSV-threshold case: what does it take to differentiate noise pixels from legitimate target pixels? This is where the thoughts about high camera resolution, creative filtering on blob shape, IR filters and LEDs, etc. come into play.

To suggest an input filter algorithm for your one particular case: how about “pick the bounding rectangle with a fixed width/height ratio, within an acceptable tilt range, that maximizes infill”?

I’ll echo the grain of salt - @Treecko120 played around with this a lot. However, the actual on-field utilization definitely wasn’t up to 254 levels :slight_smile: .

