Vision processing on driver station (LabVIEW)

Our team uses LabVIEW, and I have been able to get the 2016 vision example working, but I cannot get the masked image to appear on our driver station. I’ve been told that the vision processing code needs to be put into a loop in a driver station project, but how exactly would I go about this? I’m sorry if this is a bit vague, but I’d be happy to provide more details if needed. Thank you in advance.

The dashboard template has a vision loop that receives images and displays them. I think it is called Loop 2. In the True case, it has a comment that says you may want to process and annotate the image here.

The example has a loop where it does the processing and a second loop at the upper left that periodically looks for new images that have been added to a folder. What you want to do is combine some code from the example with the dashboard code. There are many ways to do that. Ask questions if you get stuck.

Greg McKaskle

I have to agree, the example looks a little bit like calculus when you have a basic understanding of algebra. I’m trying to figure out how to incorporate the example into our program.

Algebra was fun. Calculus was fun. And they are related, so I think your analogy is reasonable. I’ll explain a bit more about the vision example and see if that helps.

The panel of the example gives you controls and feedback for a number of elements. At the top of the window is the original image, either a file image or a live camera image. Next to it is the masked image, and to the far right are the measurements of the various elements that the processing considers good targets. Within the images, targets are surrounded by a green box and labeled with scores for the different measurements.

The Source tab control lets you decide which file images to use as the original image or which camera to use for live ones. It defaults to file images provided by FIRST.

The LED color section has three input ranges: one for hue (pigment color), one for saturation (how much pigment), and one for value (how much white is present). Those aren’t the official definitions, but my hardware store equivalents. The colorful knob shows the lower and upper colors that correspond to those numbers. Other inputs define the type of camera, the minimum score that will be considered a target, and the number of standard deviations to use when doing calibration.
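
If it helps to see the idea outside of LabVIEW, here is a minimal Python sketch of what the three ranges mean: a pixel only survives the mask if all three of its HSV components fall inside the configured ranges. The numbers below are made up for illustration; your calibrated values will be different.

```
# Illustration only (not the LabVIEW code).  A pixel is kept by the mask
# only if hue, saturation, AND value all fall inside their ranges.
HUE_RANGE = (90, 140)   # hypothetical greenish hues, IMAQ-style 0-255 scale
SAT_RANGE = (60, 255)   # hypothetical: reasonably "colorful" pixels
VAL_RANGE = (80, 255)   # hypothetical: reasonably bright pixels

def pixel_passes(h, s, v):
    """Return True if an HSV pixel is inside all three configured ranges."""
    return (HUE_RANGE[0] <= h <= HUE_RANGE[1] and
            SAT_RANGE[0] <= s <= SAT_RANGE[1] and
            VAL_RANGE[0] <= v <= VAL_RANGE[1])
```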

In the lower right is calculated information about all of the vision elements that pass the target filters with a high enough score.


To learn more about the code on the block diagram, you probably want to click the yellow ? at the right edge of the toolbar to open Context Help. When you hover over a diagram object, the window will give a brief explanation of the icon along with a link to the reference help.

On the block diagram, the upper left loop simply checks the selected folder occasionally and updates the listbox with the images it finds. This portion will not be needed on the robot or in the dashboard, but it is very useful for running the example.

The lower loop is the primary one. The icons to the left of it allocate the images that are used for the color and mask images. This isn’t typical for LabVIEW in general, but IMAQ has explicit image allocation and management. The first item at the left of the loop is where an image is loaded from disk or acquired from a camera. The next icon does a color threshold on the original image using the color ranges from the panel. The result is a 1-bit mask of pixels that are in the color range and pixels that aren’t.
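
For comparison, this is roughly what that threshold step does, sketched with Python and OpenCV rather than IMAQ. The bounds are placeholders, not calibrated values, and note that OpenCV uses a 0-179 hue scale and produces an 8-bit 0/255 mask rather than a 1-bit one.

```
import cv2
import numpy as np

# Placeholder HSV bounds for a green ring light; calibrate on your own images.
lower = np.array([35, 60, 80], dtype=np.uint8)
upper = np.array([90, 255, 255], dtype=np.uint8)

frame = cv2.imread("sample.jpg")              # or a frame grabbed from the camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # convert BGR -> HSV
mask = cv2.inRange(hsv, lower, upper)         # 255 where the pixel is in range, 0 elsewhere
```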

Above this is a small bit of code for calibrating the color based on a line you draw in the original image. If you click and drag a small line across the colored area of an image, the subVI will collect the color of each pixel on the line and calculate the average and standard deviation. It then updates the LED color with these calculated values. You may often want to tweak the ranges a bit afterwards, but it works well if the line is long enough.
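
The calibration math itself is simple enough to sketch in a few lines of Python. Here num_std plays the role of the standard-deviations input on the panel, and hue wraparound is ignored to keep the sketch short.

```
import numpy as np

def calibrate_from_line(hsv_image, line_points, num_std=3.0):
    """Given the (x, y) pixel coordinates along a user-drawn line, return
    (lower, upper) HSV bounds of mean +/- num_std standard deviations per channel."""
    samples = np.array([hsv_image[y, x] for (x, y) in line_points], dtype=float)
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    lower = np.clip(mean - num_std * std, 0, 255).astype(np.uint8)
    upper = np.clip(mean + num_std * std, 0, 255).astype(np.uint8)
    return lower, upper
```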

The next processing step is to build a particle report from the binary image. This calculates the measurements listed in the blue array above the loop. The next step is to use those measurements to score each of the top N particles against analytical measurements of the tape. Aspect ratio is a very simple example, area calculations are another, and there are several more.
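
Here is a rough Python/OpenCV equivalent of that report-and-score step, using only an aspect-ratio score and an area-coverage score. The 20 in by 14 in target dimensions are my assumption for the 2016 tape, so check them against the game manual before relying on the ratio.

```
import cv2

# Assumed outer dimensions of the 2016 tape target: about 20 in wide by 14 in tall.
IDEAL_ASPECT = 20.0 / 14.0

def score_particles(mask, top_n=5):
    """Measure and score the top_n largest particles (OpenCV 4 return signature)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:top_n]
    scored = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # 100 when the bounding box matches the ideal ratio, falling toward 0 as it deviates.
        aspect_score = max(0.0, 100.0 - 100.0 * abs(aspect - IDEAL_ASPECT) / IDEAL_ASPECT)
        # A U-shaped target should fill only part of its bounding box.
        coverage = cv2.contourArea(c) / float(w * h)
        scored.append({"rect": (x, y, w, h),
                       "aspect_score": aspect_score,
                       "coverage": coverage})
    return scored
```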

The raw scores can be combined in any number of ways, depending on how the team wants to use the information. For the particles that are considered targets, one VI annotates the particle while another uses trig to estimate distance and converts the location to a coordinate system that is useful for aiming (0,0 at the center, with each axis ranging from -1 to 1).
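
As one hedged illustration of that last step in Python: the score weighting, camera resolution, field of view, and target width below are all assumptions you would replace with your own camera's numbers and your own judgment.

```
import math

IMAGE_WIDTH, IMAGE_HEIGHT = 320, 240   # assumed camera resolution
HORIZONTAL_FOV_DEG = 47.0              # assumption -- look up your camera's actual field of view
TARGET_WIDTH_FT = 20.0 / 12.0          # assumed physical width of the tape target

def pick_target(scored):
    """One simple way to combine the scores from score_particles: weighted average, keep the best."""
    return max(scored, key=lambda p: 0.5 * p["aspect_score"] + 50.0 * p["coverage"], default=None)

def to_aiming_coords(x_pixel, y_pixel):
    """Convert a pixel location to the aiming frame the example describes:
    (0, 0) at the image center, each axis running from -1 to 1."""
    x = (x_pixel - IMAGE_WIDTH / 2.0) / (IMAGE_WIDTH / 2.0)
    y = (IMAGE_HEIGHT / 2.0 - y_pixel) / (IMAGE_HEIGHT / 2.0)
    return x, y

def estimate_distance(target_width_pixels):
    """Very rough pinhole-camera distance estimate from the target's width in pixels."""
    # Approximate the angle the target subtends as its fraction of the field of view.
    subtended = math.radians(HORIZONTAL_FOV_DEG) * target_width_pixels / IMAGE_WIDTH
    return TARGET_WIDTH_FT / (2.0 * math.tan(subtended / 2.0))
```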

I’m happy to answer more specific questions.
Greg McKaskle