vision target sample 2012 help!

Hey, we downloaded the vision sample provided in WPILib, and the thing is we don't know exactly what it does. We understood its general function, which is to identify the rectangle (the one covered with reflective tape). So how do we know that it actually managed to identify the target? Should it show any cues or something? (Our Axis camera is set up properly and shows a live feed.)
and here is the sample …

#include "WPILib.h"
#include "Vision/RGBImage.h"
#include "Vision/BinaryImage.h"
 
/**
 * Sample program to use NIVision to find rectangles in the scene that are illuminated
 * by a red ring light (similar to the model from FIRSTChoice). The camera sensitivity
 * is set very low so as to only show light sources and remove any distracting parts
 * of the image.
 * 
 * The CriteriaCollection is the set of criteria that is used to filter the set of
 * rectangles that are detected. In this example we're looking for rectangles with
 * a minimum width of 30 pixels and a maximum of 400 pixels. Similarly for height
 * (see the ParticleFilterCriteria2 declarations below).
 * 
 * The algorithm first does a color threshold operation that only takes objects in the
 * scene that have a significant red color component. Then removes small objects that
 * might be caused by red reflection scattered from other parts of the scene. Then
 * a convex hull operation fills all the rectangle outlines (even the partially occluded
 * ones). Finally a particle filter looks for all the shapes that meet the requirements
 * specified in the criteria collection.
 *
 * Look in the VisionImages directory inside the project that is created for the sample
 * images as well as the NI Vision Assistant file that contains the vision command
 * chain (open it with the Vision Assistant)
 */
class VisionSample2012 : public SimpleRobot
{
	RobotDrive myRobot; // robot drive system
	Joystick stick; // only joystick
	AxisCamera *camera;

public:
	VisionSample2012(void):
		myRobot(1, 2),	// these must be initialized in the same order
		stick(1)		// as they are declared above.
	{
		myRobot.SetExpiration(0.1);
		myRobot.SetSafetyEnabled(false);
	}

	/**
	 * Run the vision processing loop while the robot is in autonomous mode.
	 */
	void Autonomous(void)
	{
		
		Threshold threshold(25, 255, 0, 45, 0, 47);
		ParticleFilterCriteria2 criteria[] = {
											{IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false, false},
											{IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false, false}
		};
		while (IsAutonomous() && IsEnabled()) {
            /**
             * Do the image capture with the camera and apply the algorithm described above. This
             * sample will either get images from the camera or from an image file stored in the top
             * level directory in the flash memory on the cRIO. The file name in this case is "10ft2.jpg"
             * 
             */
			ColorImage *image;
			image = new RGBImage("/10ft2.jpg");		// get the sample image from the cRIO flash
			BinaryImage *thresholdImage = image->ThresholdRGB(threshold);	// get just the red target pixels
			BinaryImage *bigObjectsImage = thresholdImage->RemoveSmallObjects(false, 2);  // remove small objects (noise)
			BinaryImage *convexHullImage = bigObjectsImage->ConvexHull(false);  // fill in partial and full rectangles
			BinaryImage *filteredImage = convexHullImage->ParticleFilter(criteria, 2);  // find the rectangles
			vector<ParticleAnalysisReport> *reports = filteredImage->GetOrderedParticleAnalysisReports();  // get the results
			
			for (unsigned i = 0; i < reports->size(); i++) {
				ParticleAnalysisReport *r = &(reports->at(i));
				printf("particle: %d  center_mass_x: %d\n", i, r->center_mass_x);
			}
			printf("\n");
			
			// be sure to delete images after using them
			delete reports;
			delete filteredImage;
			delete convexHullImage;
			delete bigObjectsImage;
			delete thresholdImage;
			delete image;
		}
	}

	/**
	 * Runs the motors with arcade steering.
	 */
	void OperatorControl(void)
	{
		myRobot.SetSafetyEnabled(true);
		while (IsOperatorControl())
		{
			myRobot.ArcadeDrive(stick);	// drive with arcade style
			Wait(0.005);				// wait for a motor update time
		}
	}
};

START_ROBOT_CLASS(VisionSample2012);


The sample code shows you how to process an image from a file (you need to capture the image from the camera instead). To easily identify the target, the first step is to filter the image by color: if you illuminate the retro-reflective rectangle with red light, you filter out the other colors, leaving mainly red objects in the image (ThresholdRGB). The next step is to filter out small objects. The previous step may still leave a lot of red objects in the image, so removing the small ones reduces the number of objects the subsequent code has to analyze (RemoveSmallObjects). The ConvexHull step "rubber-bands" each object and solid-fills it; this fixes minor imperfections in the rectangle, such as the hoop blocking part of its lower edge. The next step filters the objects with a set of criteria. The sample code uses two: the width of the object must be within the range of 30 to 400 pixels, and the height must be within the range of 40 to 400 pixels. The last step sorts the objects so that the highest-scoring objects are at the beginning of the list, so any remaining false-positive targets end up at the bottom. In theory, since there are only four targets, you can ignore everything past the first four in the list.

Hello! My team wants to use green LEDs on our robot, and I wanted to know if anyone knew the RGB color code for green. The program has the code for red, but I don't know how to find the green one. I thought it was 0, 128, 0, but there are 6 numbers in the threshold area in the program. I'm a rookie programmer and haven't worked with vision before, so I would really appreciate the help.

We use green LEDs too. But the values depend on your camera brightness setting and your environment lighting. We set the brightness to 20 and our values are (0, 50, 50, 200, 25, 125). It may be more useful to tell you how our lead programmer determined these values. He captured the camera image of the target to a jpg file and loaded the file into mspaint.exe (a handy tool that comes with Windows 7). On the top ribbon you will find an eyedropper icon. Click that icon and use it to click on the color you are interested in. Then click the "Edit colors" icon on the right end of the ribbon. This opens a dialog that shows you the RGB values of that color. He found the color actually has some blue in it, so he widened the green range and also the blue (not as wide as green, though) and minimized red. We found the values work very well for us.

BTW, you are supposed to be able to use the National Instruments Vision Assistant tool that came on the credit-card flash drive to do similar things, but we lost the card, so mspaint does just fine for us.