26-01-2016, 21:34
Greg McKaskle
FRC #2468 (Team NI & Appreciate)
 
Re: Vision processing on driver station (Labview)

Algebra was fun. Calculus was fun. And they are related, so I think your analogy is reasonable. I'll explain a bit more about the vision example and see if that helps.

The panel of the example gives you control of, and feedback on, a number of elements. At the top of the window is the original image, either a file image or a live camera image. Next to it is the masked image, and at the far right are the measurements of the various elements that the processing considers to be good targets. Within the images, targets are surrounded by a green box and labeled with scores for the different measurements.

The Source tab control lets you decide which file images to use as the original image or which camera to use for live ones. It defaults to file images provided by FIRST.

The LED color section has three input ranges: one for hue (the pigment color), one for saturation (how much pigment), and one for value (how much white is present). Those aren't the official definitions, but they are my hardware-store equivalents. The colorful knob shows the lower and upper colors that correspond to those numbers. Other inputs define the type of camera, the minimum score that will be considered a target, and the number of standard deviations to use when doing calibration.
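
If it helps to see the same idea in text code, here is a tiny Python/OpenCV illustration (the example itself is LabVIEW/IMAQ, so this is only a sketch) of what the three numbers mean for one color:

[code]
# Rough illustration only (the example is LabVIEW/IMAQ, not Python).
# Convert one color to HSV to see the three components the panel ranges act on.
import cv2
import numpy as np

bgr = np.uint8([[[0, 255, 0]]])                   # a pure green pixel (OpenCV uses BGR order)
h, s, v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[0, 0]
print(h, s, v)   # about 60, 255, 255: green hue, fully saturated, fully bright
[/code]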

In the lower right is calculated information about all vision elements that pass the target filters with a high enough score.

---------------

To learn more about the code on the block diagram, you probably want to click the yellow ? at the right edge of the toolbar to open the Context Help window. When you hover over a diagram object, that window gives a brief explanation of the icon along with a link to the reference help.

On the block diagram, the upper left loop simply checks the selected folder occasionally and updates the listbox with the images it finds. This portion will not be needed on the robot or in the dashboard, but it is very useful for making the example function.
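
In rough text-code terms (Python here only as an illustration, and the names are made up), that upper loop does something like this:

[code]
# Rough illustration only: poll a folder and refresh the image list when it changes.
import os
import time

def poll_image_folder(folder, period_s=1.0):
    known = set()
    while True:
        names = sorted(f for f in os.listdir(folder)
                       if f.lower().endswith((".jpg", ".png", ".bmp")))
        if set(names) != known:
            known = set(names)
            print("update listbox:", names)   # the example refreshes a listbox control
        time.sleep(period_s)
[/code]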

The lower loop is the primary one. The icons to its left allocate the images used for the color and mask images. This isn't typical for LabVIEW in general, but IMAQ has explicit image allocation and management. The first item at the left of the loop is where an image is loaded from disk or acquired from a camera. The next icon does a color threshold on the original image using the color ranges from the panel. The result is a 1-bit mask marking which pixels are in the color range and which aren't.
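
A Python/OpenCV sketch of that threshold step, just to show the idea (the range values below are placeholders, not the example's):

[code]
# Rough illustration only: HSV color threshold producing a binary mask.
import cv2
import numpy as np

def color_threshold(bgr_image, lower_hsv, upper_hsv):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # mask is 255 where the pixel falls inside all three ranges, 0 everywhere else
    return cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

# e.g. mask = color_threshold(frame, (50, 100, 100), (90, 255, 255))
[/code]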

Above this is a small bit of code for calibrating the color based on a line you draw in the original image. If you click and drag a short line across the colored area of an image, the subVI collects the color of each pixel on the line and calculates the average and standard deviation. It then updates the LED color ranges with these calculated values. You may often want to tweak the ranges a bit afterwards, but it works well if the line is long enough.
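
Sketched in Python/numpy (again, just to show the idea; the subVI's details differ), the calibration amounts to:

[code]
# Rough illustration only: sample pixels along the drawn line, then set each
# range to mean +/- N standard deviations.
import cv2
import numpy as np

def calibrate_from_line(bgr_image, p0, p1, n_std=3.0, samples=100):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    xs = np.linspace(p0[0], p1[0], samples).round().astype(int)
    ys = np.linspace(p0[1], p1[1], samples).round().astype(int)
    line_pixels = hsv[ys, xs]                        # samples x 3 array of H, S, V
    mean, std = line_pixels.mean(axis=0), line_pixels.std(axis=0)
    lower = np.clip(mean - n_std * std, 0, 255)
    upper = np.clip(mean + n_std * std, 0, 255)
    return lower, upper        # feed these back into the LED color range controls
[/code]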

The next processing step is to build a particle report from the binary image. This calculates the measurements listed in the blue array above the loop. The step after that uses those measurements to score each of the top N particles against analytical measurements of the tape. Aspect ratio is a very simple example, area calculations are another, and there are several more.
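
Here's a rough OpenCV stand-in for that report-and-score step (the ideal aspect ratio and the score formulas below are placeholders, not the ones in the example):

[code]
# Rough illustration only: measure the top-N particles and score them 0-100,
# where closer to the ideal tape shape scores higher.
import cv2

IDEAL_ASPECT = 1.6       # placeholder width/height for the real tape target

def score_particles(mask, top_n=5):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:top_n]
    reports = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        aspect = w / h if h else 0.0
        aspect_score = max(0.0, 100.0 - 100.0 * abs(aspect - IDEAL_ASPECT) / IDEAL_ASPECT)
        coverage_score = 100.0 * area / (w * h) if w * h else 0.0  # how filled the box is
        reports.append({"bbox": (x, y, w, h), "area": area,
                        "aspect_score": aspect_score, "coverage_score": coverage_score})
    return reports
[/code]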

The raw scores can be combined in any number of ways, depending on how the team wants to use the information. For the particles that are considered targets, one VI annotates the particle while another uses trig to estimate distance and converts to a coordinate system useful for aiming (0,0 at the center, with each axis ranging from -1 to 1).
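
One common way to do that last conversion, sketched in Python (the camera field of view and target width below are placeholder numbers, not the example's):

[code]
# Rough illustration only: normalized aiming coordinates plus a trig distance estimate.
import math

IMAGE_W, IMAGE_H = 640, 480
HORIZ_FOV_DEG = 47.0        # placeholder camera horizontal field of view
TARGET_WIDTH_FT = 1.66      # placeholder real-world width of the tape target

def to_aiming_coords(px, py):
    # (0,0) at the image center, each axis running from -1 to 1
    x = (px - IMAGE_W / 2.0) / (IMAGE_W / 2.0)
    y = (IMAGE_H / 2.0 - py) / (IMAGE_H / 2.0)
    return x, y

def estimate_distance(target_width_px):
    # if the target spans target_width_px of IMAGE_W pixels, the whole view is
    # this many feet wide at the target's distance...
    view_width_ft = TARGET_WIDTH_FT * IMAGE_W / target_width_px
    # ...and half that view width subtends half the field of view
    return (view_width_ft / 2.0) / math.tan(math.radians(HORIZ_FOV_DEG / 2.0))
[/code]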

I'm happy to answer more specific questions.
Greg McKaskle