I’ve been trying to figure out how to do image tracking (I’m at home right now, so I don’t have the camera with me, just fooling around with the Vision Assistant), and I haven’t the foggiest idea where to even start. The tutorials aren’t a lot of help, and I can’t navigate my way through the examples to figure out what they’ve done. Any help would be appreciated.
Here’s a file that will give you the bounds of each square.
I did it by extracting the saturation plane, thresholding it (to create a boolean image), and then running particle analysis (deselecting everything except the object bounds).
It may be that you would rather use the “center of mass” option, but I figured bounds would be most efficient in this case, since we know they are squares.
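For anyone who wants the pipeline in text form, here is a rough plain-Python sketch of the same three steps (saturation plane → threshold → particle analysis). NI Vision does all of this graphically and far faster; the function names and the threshold cutoff below are my own inventions, not anything from the VI.

```python
def saturation(r, g, b):
    """HSV saturation of one RGB pixel, on a 0-255 scale."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0 if mx == 0 else round(255 * (mx - mn) / mx)

def threshold(plane, cutoff):
    """Boolean image: True wherever the plane exceeds the cutoff."""
    return [[v > cutoff for v in row] for row in plane]

def particle_bounds(mask):
    """Connected-component ('particle') analysis by flood fill,
    reporting each particle as (left, top, right, bottom) bounds."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    results = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or seen[y][x]:
                continue
            left = right = x
            top = bottom = y
            stack = [(x, y)]
            seen[y][x] = True
            while stack:
                cx, cy = stack.pop()
                left, right = min(left, cx), max(right, cx)
                top, bottom = min(top, cy), max(bottom, cy)
                for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                               (cx, cy + 1), (cx, cy - 1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((nx, ny))
            results.append((left, top, right, bottom))
    return results

def find_bounds_of_squares(rgb_image, cutoff=128):
    """Saturation plane -> threshold -> particle bounds."""
    sat = [[saturation(*px) for px in row] for row in rgb_image]
    return particle_bounds(threshold(sat, cutoff))
```

On a synthetic image of colored squares on a white background, each square comes back as one bounds tuple.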
As a note, processing actual images (preferably captured with the actual camera on the robot) will usually provide the most reliable results.
When you’re creating the VI, you must select the inputs and outputs you want.
The result is the uploaded “find bounds of squares.vi”.
In this case, it also requires some further modification to see what data you have: the elements in the “particle analysis” output appear in the order in which you request the data.
The modified VI is the uploaded “find bounds of squares 2.vi”.
If you want to find the center of these squares, just average the left and right bounds, and the top and bottom bounds.
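That averaging step is a one-liner; with a particle stored as a (left, top, right, bottom) tuple (my representation, not the VI’s), it would look like:

```python
def center_of_bounds(bounds):
    """Center of a particle from its (left, top, right, bottom)
    bounds: average the left/right and top/bottom edges."""
    left, top, right, bottom = bounds
    return ((left + right) / 2, (top + bottom) / 2)

# e.g. a square spanning columns 10-30 and rows 20-40:
print(center_of_bounds((10, 20, 30, 40)))  # (20.0, 30.0)
```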
Marshal, I’m more interested in the WHY of what you did rather than the “how”. What operations should we choose to minimize processor overhead? Why use the saturation plane? Why not the intensity, or hue, or one of the others? Why did you threshold and use particle analysis instead of using a tool like the find circle or find ellipse?
Where can I find something that describes the “whys” of using vision with LabVIEW?
Your best bet is to look through the Vision Concepts manual; it’s under NI/documentation in the Start menu. There are also lots of vision examples you can look through and instrument for performance comparisons.
I chose the saturation plane because it provided even shading for all three squares. The intensity plane had each square at a different darkness (and actually varied at the edges of the squares), and the hue plane would not apply (all three squares are wildly different hues, and the hue of white is unpredictable).
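A quick numeric check of that reasoning (the square colors here are hypothetical, not the actual image): strongly colored pixels of any hue land at the same high value in the saturation plane, while white drops to zero, so a single threshold catches all three squares at once.

```python
def hsv_saturation(r, g, b):
    """HSV saturation on a 0-255 scale: 255 * (max - min) / max."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0 if mx == 0 else round(255 * (mx - mn) / mx)

# Three wildly different hues, one background:
for name, px in [("red", (255, 0, 0)), ("green", (0, 255, 0)),
                 ("blue", (0, 0, 255)), ("white", (255, 255, 255))]:
    print(name, hsv_saturation(*px))
# red/green/blue all print 255; white prints 0 -- one cutoff
# separates every square from the background, regardless of hue.
```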
If I were looking specifically for circles, I would use the circle finder. In this case it’s a very clean image, so analyzing all the particles (without even an erode and dilate) produces excellent results.
If it had been a real image, I probably would have done some more processing to get a clean result.
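For reference, erode and dilate are just neighborhood operations on the boolean image. A minimal cross-shaped (4-neighbor) version is sketched below; NI Vision’s morphology functions offer configurable structuring elements, so this is only the idea, not their implementation.

```python
def erode(mask):
    """3x3 cross erosion: a pixel survives only if it and all four of
    its edge-neighbors are set, which strips single-pixel noise."""
    h, w = len(mask), len(mask[0])
    on = lambda x, y: 0 <= x < w and 0 <= y < h and mask[y][x]
    return [[mask[y][x] and on(x - 1, y) and on(x + 1, y)
             and on(x, y - 1) and on(x, y + 1)
             for x in range(w)] for y in range(h)]

def dilate(mask):
    """3x3 cross dilation: a pixel turns on if it or any edge-neighbor
    is set. Erode-then-dilate ('opening') removes speckle while
    roughly restoring the size of the particles that survive."""
    h, w = len(mask), len(mask[0])
    on = lambda x, y: 0 <= x < w and 0 <= y < h and mask[y][x]
    return [[on(x, y) or on(x - 1, y) or on(x + 1, y)
             or on(x, y - 1) or on(x, y + 1)
             for x in range(w)] for y in range(h)]
```

Running erode then dilate on a noisy mask removes stray single pixels while keeping real particles largely intact, which is why it helps before particle analysis on camera images.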