Vision Failing

Our vision system is terribly unreliable. It works on and off, but never consistently. The objective is to aim at the target and then shoot. When it can find the target, this works quite well; unfortunately, it fails to find the target most of the time. It is extremely picky. We use a green ring of LEDs from superbrightleds.com to illuminate the targets. We look for the highest target that is also roughly rectangular: to check that it is rectangular (or at least a parallelogram), we divide the particle area by the bounding box's width × height and make sure the ratio is close to 1. Here is our function (called aimNFire). Please note that we drive back until the target is around 33 pixels wide, because that works out to around 12 feet, which is the only distance at which we have data on how the shooter needs to be oriented.


	team1257LCD->Printf(DriverStationLCD::kUser_Line2, 1, "Aim n' Fire!");
	team1257LCD->UpdateLCD();
	
	//Threshold threshold(0, 120,  100, 255, 125, 255);
	//Threshold threshold(19, 87, 42, 84, 03, 255);
	//Threshold threshold(0, 75,  100, 255, 0, 60);
	//Threshold threshold(0, 80,  0, 255, 55, 220); // tolerant threshold
	Threshold threshold(0, 80,  0, 220, 55, 220); // competition threshold
	//Threshold threshold(0, 255, 0, 255, 0, 255);
	ParticleFilterCriteria2 criteria[] = {
										{IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false, false},
										{IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false, false}
	};
	
	AxisCamera& camera = AxisCamera::GetInstance("10.12.57.11");
	camera.WriteResolution(AxisCamera::kResolution_320x240);
	camera.WriteCompression(20);
	camera.WriteBrightness(0);
	//camera.WriteExposureControl(AxisCamera::kExposure_FlickerFree50Hz);
	//camera.WriteExposurePriority(0);
	camera.WriteWhiteBalance(AxisCamera::kWhiteBalance_FixedIndoor);
	
	bool driveBack = false;
	Timer robotTimer2;
	robotTimer2.Start();
	while(true/*IsAutonomous() || IsOperatorControl()*/)
	{
		if(robotTimer2.Get() >= 10)
			break;
		
		ColorImage* image = new RGBImage;
		camera.GetImage(image);            // grab the current frame from the camera
		BinaryImage *thresholdImage = image->ThresholdRGB(threshold);   // keep only pixels that pass the green-target threshold
		BinaryImage *bigObjectsImage = thresholdImage->RemoveSmallObjects(false, 2);  // remove small objects (noise)
		BinaryImage *convexHullImage = bigObjectsImage->ConvexHull(false);  // fill in partial and full rectangles
		BinaryImage *filteredImage = convexHullImage->ParticleFilter(criteria, 2);  // find the rectangles
		vector<ParticleAnalysisReport>* reports = filteredImage->GetOrderedParticleAnalysisReports();  // get the results
	   
		// Identify the highest rectangle
		ParticleAnalysisReport basket;
		basket.boundingRect.top = 10000;
		basket.center_mass_x = 0;
		for(unsigned int i = 0; i < reports->size(); i++)
		{
			ParticleAnalysisReport& report = reports->at(i);
			// Rectangularity: particle area over bounding-box area; close to 1 for a filled rectangle
			double rectangularity = report.particleArea / ((double)report.boundingRect.width * (double)report.boundingRect.height);
			if(report.boundingRect.top < basket.boundingRect.top && rectangularity > 0.7917011257314159)
			{
				basket = report;
			}
		}
		
		// Now, angle robot towards the target
		if(!driveBack)
		{
			if(basket.center_mass_x != 0 && basket.center_mass_x <= 320) // We have valid data!
			{
				// Use fabs(), not abs(): abs() truncates to int and breaks the math
				double angle = (fabs((double)basket.center_mass_x - ((double)image->GetWidth() / 2)) / ((double)image->GetWidth() / 2)) * 22.5;

				if(basket.center_mass_x < image->GetWidth() / 2) // Target is left of image center
					angle *= -1; // Negate angle
				if(angle < 0)
				{
					team1257Robot.SetLeftRightMotorOutputs(0.35, -0.35);
					Wait(0.001);
				}
				else
				{
					team1257Robot.SetLeftRightMotorOutputs(-0.35, 0.35);
					Wait(0.001);
				}
				if(fabs(angle) < 1.75)
				{
					//break;
					driveBack = true;
				}
					
				team1257LCD->Printf(DriverStationLCD::kUser_Line4, 1, "Angle: %f", angle);
				team1257LCD->UpdateLCD();
			}
			
			/*else
			{
				team1257Robot.SetLeftRightMotorOutputs(0, 0);
			}*/
		}
		if(driveBack)
		{
			if(basket.center_mass_x != 0 && basket.center_mass_x <= 320) // We have valid data!
			{
				team1257LCD->Printf(DriverStationLCD::kUser_Line6, 1, "Driving back!");
				team1257LCD->UpdateLCD();
				if(basket.boundingRect.height < 32) // We are too far
				{
					team1257Robot.SetLeftRightMotorOutputs(-0.3, -0.3);
				}
				else if(basket.boundingRect.height > 33) // We are too close
				{
					team1257Robot.SetLeftRightMotorOutputs(0.3, 0.3);
				}
				else
				{
					break;
				}
			}
		}
		
		team1257LCD->Printf(DriverStationLCD::kUser_Line5, 1, "Height: %i", basket.boundingRect.height);
		team1257LCD->UpdateLCD();
	   
		// Clean up memory
		delete reports;
		delete filteredImage;
		delete convexHullImage;
		delete bigObjectsImage;
		delete thresholdImage;
		delete image;
		
		if(drivepad.GetRawButton(3) || controlpad.GetRawButton(3))
		{
			team1257LCD->Printf(DriverStationLCD::kUser_Line2, 1, "Aim n' Fire terminated");
			team1257LCD->UpdateLCD();
			break;
		}
	}
	
	team1257LCD->Printf(DriverStationLCD::kUser_Line2, 1, "Aim n' Fire terminated");
	team1257LCD->UpdateLCD();

I can’t comment on the specifics of how to make your algorithm work (2702’s is in a similar state of “nearly works”), but one way to test would be to save a bunch of JPGs from the camera; then you can use them as test images and debug against them intensively. We always found that the hardest part of testing the camera was reproducing what went wrong, and only this year, at our last competition, did we figure out a good debugging workflow.

Basically, you do something like this:
if(stick1.GetRawButton(1))
{
	image->Write("savedimage.jpg");
}
to save your images. When the robot starts acting up, save the images to the cRIO's internal storage with the button and retrieve them by FTP. Then you can take a look in image-editing software to figure out the thresholds you need.
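If you want to keep more than one frame per run, a small variation (the counter, debounce, and file-name pattern here are just illustrative, not something from our code) writes numbered snapshots so each press saves a fresh image instead of overwriting the last one:

static int snapshotCount = 0; // counter so each save gets a new file name
if(stick1.GetRawButton(1))
{
	char fileName[32];
	sprintf(fileName, "savedimage%03d.jpg", snapshotCount++);
	image->Write(fileName);
	Wait(0.5); // crude debounce so one press doesn't dump dozens of frames
}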

Then, modify the code so that instead of doing


ColorImage* image = new RGBImage;
camera.GetImage(image); 

You do


ColorImage* image = new RGBImage("savedimage.jpg");

This way, your code will run on the saved test image and you’ll be able to deduce precisely what went wrong.

You may want to try using the HSL color space rather than RGB. We’ve had relatively good luck with it.
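For example, WPILib's ColorImage has a ThresholdHSL counterpart to ThresholdRGB. The ranges below are placeholders for a green LED ring that you would tune against your own saved images, not values from a working robot:

Threshold hslThreshold(100, 160, 100, 255, 60, 255); // hue, saturation, luminance ranges (placeholders)
BinaryImage *thresholdImage = image->ThresholdHSL(hslThreshold);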

Also loosening the particle filter a little bit (on the low end) may help.
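For instance, by dropping the lower bounds in your criteria array; these looser numbers are just a starting point, not tuned values:

ParticleFilterCriteria2 criteria[] = {
									{IMAQ_MT_BOUNDING_RECT_WIDTH, 20, 400, false, false},  // was 30
									{IMAQ_MT_BOUNDING_RECT_HEIGHT, 25, 400, false, false}  // was 40
};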

I would also suggest adding null checks for each of the image pointers you get while thresholding, filtering, etc., especially if your vision task is crashing or throwing errors.
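Something like this after each stage of the pipeline would do it (shown here for the threshold step; the other stages would get the same treatment):

BinaryImage *thresholdImage = image->ThresholdRGB(threshold);
if(thresholdImage == NULL)
{
	delete image;
	continue; // skip this frame instead of dereferencing a null pointer
}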

In addition to the null checking, I would recommend checking that the height and width of the original camera image are non-zero; zero-sized frames do occasionally come through for some reason, and they will cause the vision to stop working.
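That check is cheap to add right after the grab:

camera.GetImage(image);
if(image->GetWidth() == 0 || image->GetHeight() == 0)
{
	delete image;
	continue; // bad frame from the camera; try again on the next loop
}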

Lastly, you say you look for a width of 33 px to determine when you're 12 feet away? Have you considered using height instead? By using width, you're dependent on your horizontal viewing angle; if you use height instead, your distance equation should work from almost any angle (as long as your camera isn't moving up and down).
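If it helps, here is a rough sketch of the pinhole-camera math for turning target pixel height into distance. The field-of-view and target-height constants are placeholders you would have to measure for your own camera and target:

#include <math.h>

// All constants below are assumptions to measure yourself, not known values.
double EstimateDistanceFt(int targetPxHeight, int imagePxHeight)
{
	const double kTargetHeightFt = 1.5;  // real-world target height in feet (placeholder)
	const double kVerticalFovRad = 0.65; // camera's vertical field of view in radians (placeholder)
	// Pinhole model: distance = realHeight * focalLengthPx / targetPxHeight,
	// where focalLengthPx = imagePxHeight / (2 * tan(FOV / 2))
	return (kTargetHeightFt * imagePxHeight) / (2.0 * targetPxHeight * tan(kVerticalFovRad / 2.0));
}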

I'll second DJ's advice. Great advice, DJScribbles! All of these items are necessary. In a real-time system, the world is never kind: odd things do happen, and null pointers and failures do occur occasionally. As programmers, we like to program for perfect systems because it makes our jobs easier, but alas, this isn't reality. By carefully taking each step, you'll likely find that things work much more reliably.

bob