Precision Trapezoid Tracking

Currently, our camera is set up to recognize rectangles, and it does pick up the target we are looking for. However, to compensate for the perspective distortion (our camera looks up at the target from a fairly steep angle), we are running with a noise tolerance that also lets the camera pick up things like ceiling lights and potentially other targets. I have a strong enough math background to work out the relative side lengths of the trapezoid, but I do not know how to have the camera recognize points arranged in a trapezoidal formation (by estimating the robot's distance from the target's apparent height, we would have a very specific shape to scan for). So, any suggestions for how I can work with the Particle code?
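
For reference, the geometry I have in mind is roughly this: a first-order pinhole-camera approximation where an edge's apparent width scales inversely with its slant range from the camera, so the top edge of the target (farther from our low-mounted camera) should look narrower than the bottom edge. All of the numbers below are placeholders for our real measurements.

    // Rough sketch of the expected trapezoid shape under perspective distortion.
    // Assumes a pinhole camera and treats apparent width as proportional to 1/range;
    // every constant here is a placeholder, not a real field measurement.
    public class TrapezoidGeometry {
        public static void main(String[] args) {
            double cameraHeight = 0.30;  // camera height above the floor (m), placeholder
            double bottomHeight = 2.50;  // height of the target's bottom edge (m), placeholder
            double topHeight    = 3.00;  // height of the target's top edge (m), placeholder
            double groundDist   = 4.00;  // horizontal distance robot -> target (m), placeholder

            // Slant range from the camera to each horizontal edge of the target.
            double rangeBottom = Math.hypot(groundDist, bottomHeight - cameraHeight);
            double rangeTop    = Math.hypot(groundDist, topHeight - cameraHeight);

            // First-order estimate: apparent width ~ 1/range, so the expected ratio of
            // the trapezoid's top side to its bottom side is rangeBottom / rangeTop.
            double topToBottomRatio = rangeBottom / rangeTop;
            System.out.println("Expected top/bottom width ratio: " + topToBottomRatio);
        }
    }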

If you know and expect the image to be distorted, you may want to modify the test scores a bit. Again, you are trying to exclude the false targets and include the real ones. I don't believe the default Java code does the hollowness test, but that is key to ignoring other bright rectangles in the image. Another discriminator is to use the bounding box of a good-scoring particle as the ROI (region of interest) for edge detection or other advanced localized processing. Once you have more edge information, you can look at the various corners or edges and use the math to estimate the distance and location.
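
As a rough sketch of what I mean, something like the following: it assumes the ParticleAnalysisReport fields from the standard Java vision code (the package path may differ in your WPILib version), and the hollowness window is a made-up placeholder you would tune against real images.

    import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;  // 2012-era WPILib class; adjust to your version

    // Sketch: score each particle on how "hollow" it is (a reflective-tape frame fills
    // only a small fraction of its bounding box) and return the bounding box of the
    // first particle that passes, to use as an ROI for edge detection later.
    public class ParticleScoring {
        public static int[] bestBoundingBox(ParticleAnalysisReport[] reports) {
            for (ParticleAnalysisReport r : reports) {
                double boxArea = (double) r.boundingRectWidth * r.boundingRectHeight;
                double fill = r.particleArea / boxArea;   // near 1.0 if solid, small if hollow
                boolean hollowEnough = fill > 0.10 && fill < 0.40;  // placeholder window, tune it
                if (hollowEnough) {
                    // Bounding box as {left, top, width, height} -> ROI for localized processing.
                    return new int[] { r.boundingRectLeft, r.boundingRectTop,
                                       r.boundingRectWidth, r.boundingRectHeight };
                }
            }
            return null;  // no particle passed the hollowness test
        }
    }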

Greg McKaskle

Yep, sorry if I wasn’t clear, but I was more referencing a lack of knowledge of how to use the Java code. What I was thinking (since posting yesterday) is that a two-step scan would be the way to go. I understand how to implement the first step: a simple scan for a rectangle, which we already have working. Then, based on the height of that rectangle, I figure I can calculate fairly precise vertices for what the target should have and test for which ROI (and distance) we want, since the difference in distortion should allow us to pick up only the top target (instead of recognizing random other things plus the middle- and lower-row targets). However, I don't know if or how I can use the camera code to test the positions of the vertices on the particles it picks out. Thanks for helping, by the way!
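
Roughly what I am imagining for the second step, if I can get corner positions out of the camera code somehow: topWidthPx and bottomWidthPx are hypothetical values from whatever corner or edge extraction ends up running inside the ROI, and the 10% tolerance is just a guess.

    // Sketch of the second-pass check: compare the measured top/bottom edge widths of a
    // candidate against the trapezoid ratio predicted from the target's apparent height.
    public class TrapezoidTest {
        public static boolean matchesExpectedTrapezoid(double topWidthPx,
                                                       double bottomWidthPx,
                                                       double expectedTopToBottomRatio) {
            double measuredRatio = topWidthPx / bottomWidthPx;
            double tolerance = 0.10 * expectedTopToBottomRatio;  // placeholder tolerance
            return Math.abs(measuredRatio - expectedTopToBottomRatio) <= tolerance;
        }
    }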

Ben