Sorry about being so late on this reply:
Quote:
Originally Posted by Greg McKaskle
The aspect ratio is based on the width and height of the bounding rect, and to make it a bit more robust to distortion, it also uses something called the equivalent rectangle.
How I went about this issue was also by incorporating an aspect ratio. I took my four corners, 0, 1, 2, and 3, which correspond to top left, top right, bottom left, and bottom right, respectively. For each of the four sides I computed (dx^2 + dy^2)^0.5, the plain Euclidean distance between that side's two corners, where dx and dy are the differences in pixel coordinates. Then I added the top and bottom distances and divided that sum by the sum of the left and right distances.
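Here is a minimal sketch of that corner-distance aspect ratio. The function names and the assumption that the corners arrive as (x, y) tuples in the 0-3 order above are mine, not from my actual code, so adapt it to however your detection hands back the quadrilateral:

```python
import math

def side_length(p, q):
    """Euclidean distance between two (x, y) pixel coordinates."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.sqrt(dx * dx + dy * dy)

def aspect_ratio(corners):
    """corners = [top_left, top_right, bottom_left, bottom_right]."""
    tl, tr, bl, br = corners
    top    = side_length(tl, tr)
    bottom = side_length(bl, br)
    left   = side_length(tl, bl)
    right  = side_length(tr, br)
    # (top + bottom) / (left + right), as described above
    return (top + bottom) / (left + right)
```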
If your program relies on the camera being perpendicular to the target, meaning straight in front of it, then I'd suggest using image moments. Perspective distortion barely shifts the center of the target computed this way.
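As a rough illustration (I'm using OpenCV here just as an example library; use whatever vision toolkit your team already has), the moment-based center of a thresholded target looks like this:

```python
import cv2

def target_center(binary_mask):
    """binary_mask: single-channel image where target pixels are nonzero."""
    m = cv2.moments(binary_mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # no target pixels found
    cx = m["m10"] / m["m00"]  # centroid x in pixels
    cy = m["m01"] / m["m00"]  # centroid y in pixels
    return cx, cy
```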
What I did with this aspect ratio is differentiate between the 2-point and 3-point targets. The longer and thinner the target, the higher my aspect ratio. I created a simulation program that simulates the field (with the pyramid, might I add, which I will probably be uploading here tomorrow). I did this to easily decide on a good lower limit for the aspect ratio, and I discovered 2.4 was the magic number. I also added an upper limit to the aspect ratio, which turned out to be 4. If the target I find does not have an aspect ratio between these two values, I call it a 2-pointer and then send the driver the x rotation from the image moments so the robot can turn toward the 3-pointer, even if the camera is unable to detect it. A sketch of that decision is below.
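This is a hedged sketch of how those limits (2.4 and 4) could gate the decision, reusing the aspect_ratio() and target_center() helpers from the sketches above. The camera field of view and the pixel-to-angle conversion are illustrative assumptions, not numbers from my actual program:

```python
LOWER_AR = 2.4   # below this, too square to be the 3-point target
UPPER_AR = 4.0   # above this, too elongated to be the 3-point target

HORIZONTAL_FOV_DEG = 47.0  # assumed camera horizontal field of view

def classify_target(corners, binary_mask, image_width):
    """Return ('three_point', 0.0) or ('two_point', x_rotation_deg)."""
    ar = aspect_ratio(corners)
    if LOWER_AR <= ar <= UPPER_AR:
        return "three_point", 0.0
    # Not the 3-pointer: report the x rotation to this target's centroid
    # so the driver can start turning toward the 3-point goal.
    center = target_center(binary_mask)
    if center is None:
        return "two_point", 0.0
    pixel_offset = center[0] - image_width / 2.0
    x_rotation = pixel_offset / image_width * HORIZONTAL_FOV_DEG
    return "two_point", x_rotation
```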