#1
Re: Tracking Rectangles with Perspective Distortion
It intentionally doesn't use high-level shape detection so that it is more accessible.

The rectangularity score is based on area / bounding-rectangle area. The aspect ratio is based on the width and height of the bounding rectangle, and to make it a bit more robust to distortion, it also uses something called the equivalent rectangle. The equivalent rectangle takes the area and perimeter of the particle and solves 2X + 2Y = perimeter and X * Y = area for the side lengths X and Y.

The hollowness measure counts the pixels that are on in each vertical column, and again in each horizontal row, and compares those counts to thresholds that expect a strong outer band and a weak inner band.

Each of these is scaled so that 100 is a good score and lower is worse. The initial cutoffs are just based on some initial images and are very easy to change. Can you think of other simple geometric measures the code could score the targets on?

Greg McKaskle
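The equivalent-rectangle idea above reduces to a quadratic: if X + Y = perimeter/2 and X * Y = area, then X and Y are the roots of t&sup2; - (perimeter/2)t + area = 0. A minimal Python sketch of that solve and the area-based rectangularity score (the function names are mine, not from the post, and this is an illustration rather than the actual NI Vision code):

```python
import math

def equivalent_rectangle(area, perimeter):
    """Solve 2X + 2Y = perimeter and X * Y = area for the side
    lengths X >= Y of the 'equivalent rectangle'."""
    half = perimeter / 2.0              # X + Y
    disc = half * half - 4.0 * area     # discriminant of t^2 - half*t + area
    if disc < 0:
        return None                     # no real rectangle has these measures
    root = math.sqrt(disc)
    return (half + root) / 2.0, (half - root) / 2.0

def rectangularity_score(area, bounding_area):
    """Score where 100 means the particle fills its bounding rectangle."""
    return 100.0 * area / bounding_area
```

For example, a particle with area 12 and perimeter 14 yields an equivalent rectangle of 4 x 3, and a particle covering half its bounding box scores 50.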
#2
Re: Tracking Rectangles with Perspective Distortion
Quote:
What I would like to work out is some system that detects all edges in the image; if two edges seem to touch (or are within a few pixels of each other), they would be considered a corner. If 4 corners could be found consecutively, then we have a quadrilateral, which is the shape we want to analyze.

The only thing that stinks about even trying to start my approach is that the edge-finding ability of the Vision Assistant can only find 1 edge per run, and you can only run it in the 4 main directions: left-right, right-left, up-down, down-up. Once each of those finds an edge, I cannot detect any more edges unless I can find a way to subtract those edges from the image and then keep rerunning the edge detection. If I could at least do that, then I could detect the shapes and do the same measurements and calculations we do now. If you can think of anything about this, that would be helpful.

I do have a question about edge detection, though. Does it work off of how many pixels in a column report an edge? For example, if all the pixels in column X say they have found an edge, is the edge found? If some pixels in columns X, Y, and Z (columns that are right next to each other) report an edge, would that be an edge that is partially diagonal? If that is the case, I wonder if and how we can play with that so we can detect edges that are really skewed. Just thinking out loud. Again, thanks for the help!
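The "edges that seem to touch become a corner" idea from the post can be sketched in a few lines. This is a hypothetical illustration, not Vision Assistant code: it assumes the detected edges arrive as straight segments given by endpoint pairs, and calls any pair of endpoints within a few pixels a corner candidate.

```python
import math

def corner_candidates(segments, tol=3.0):
    """Given edge segments as ((x1, y1), (x2, y2)) endpoint pairs, report
    pairs of segments whose endpoints come within `tol` pixels of each
    other, treating each near-touch as a corner candidate."""
    corners = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            for p in segments[i]:
                for q in segments[j]:
                    if math.hypot(p[0] - q[0], p[1] - q[1]) <= tol:
                        # place the corner at the midpoint of the near endpoints
                        corners.append(((p[0] + q[0]) / 2.0,
                                        (p[1] + q[1]) / 2.0))
    return corners
```

With a horizontal segment ending at (10, 0) and a vertical one starting at (10, 1), the two endpoints are 1 pixel apart, so a single corner is reported near (10, 0.5). Four such candidates would give the quadrilateral the post is after.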
#3
Re: Tracking Rectangles with Perspective Distortion
I opened the NI Example Finder, under the Help menu, and searched for Straight Edge. If you set the Number of Lines to 2 and drag a rectangle around the image, it will locate both the top and bottom lines. You most likely want to use the mechanism in the white paper to find the bounding box of particles, then use the straight-edge tool to get precise fits to the edges.

Again, this will be a bit tedious: your original image had stronger edges between the board and the background than between the tape and the board. So without a good bounding rectangle, the edge detection may do what you ask and not what you want.

Contours are another way to go with this. The attached image shows the contour, and the graph to the right shows the sharp corner transitions. You may want to use the bounding box to set the ROI to where you expect each corner to be, process them one at a time, and then recombine the results.

Greg McKaskle

Last edited by Greg McKaskle : 26-02-2013 at 10:22. Reason: wrong image
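The per-corner ROI suggestion can be made concrete with a small helper. This is my own sketch, not NI code: it assumes a particle bounding box as (x, y, width, height) and carves out four corner regions, each covering a fraction of the box, so each corner can be fit separately and the results recombined.

```python
def corner_rois(bbox, frac=0.3):
    """Split a bounding box (x, y, w, h) into four corner regions of
    interest, each `frac` of the width and height, for per-corner
    processing as suggested in the post."""
    x, y, w, h = bbox
    cw, ch = int(w * frac), int(h * frac)
    return {
        "top_left":     (x,          y,          cw, ch),
        "top_right":    (x + w - cw, y,          cw, ch),
        "bottom_left":  (x,          y + h - ch, cw, ch),
        "bottom_right": (x + w - cw, y + h - ch, cw, ch),
    }
```

A 100x100 box at the origin, for instance, yields a 30x30 top-left ROI at (0, 0) and a 30x30 bottom-right ROI at (70, 70); each ROI then gets its own straight-edge or contour fit.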
#4
Re: Tracking Rectangles with Perspective Distortion
Sorry about being so late on this reply:
Quote:
If your program relies on being perpendicular to the target, meaning straight in front of it, then I'd suggest using image moments. Perspective hardly alters the center of the target with this method.

What I did with the aspect ratio is differentiate between the 2-point and 3-point targets: the longer and thinner the target, the higher my aspect ratio. I created a program that simulates the field (with the pyramid, might I add; I will probably be uploading it here tomorrow). I did this to easily decide on a good lower limit for the aspect ratio, and I discovered 2.4 was the magic number. I also added an upper limit to the aspect ratio, which turned out to be 4. If the target I find does not have an aspect ratio between these two values, I call it a 2-point target and send the driver the image moment's x rotation so the robot can turn towards the 3-point target, even if the camera is unable to detect it.
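The classification rule above boils down to a single band check on the aspect ratio. A minimal sketch, assuming the target's width and height are already measured (the function name is mine; 2.4 and 4.0 are the cutoffs the post arrived at by simulation):

```python
def classify_target(width, height, low=2.4, high=4.0):
    """Classify a detected particle by aspect ratio: inside the
    [low, high] band it is treated as the long, thin 3-point target,
    otherwise as a 2-point target."""
    aspect = max(width, height) / min(width, height)
    return "3-point" if low <= aspect <= high else "2-point"
```

A 62x20-pixel particle has aspect ratio 3.1 and is classified as a 3-point target, while a 24x20 one (ratio 1.2) falls outside the band and is called a 2-point target.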