Camera not working

I have been trying to get the camera to work with the cRIO. We set the camera up as per the directions and hooked it into the cRIO via the crossover cable. I compiled their default camera code and uploaded it to see if it was working, but the terminal just prints “Unable to find color” (or something along those lines). No matter how close or far away I hold the green cloth it won’t see it.

I heard that there are still bugs in the camera color recognition code…is that true? Or are we having a different problem?

I’ve been having the same problem here. I managed to get the LabVIEW code working with the camera (including tracking), but the vision APIs written in C/C++ don’t seem to be working correctly. I’m going to dig into the libs further and see if I can generate my own wrapper class for the vision API. I really don’t want to use LabVIEW. Give me gcc and vi and I’m happy. There’s just something about GUIs that gets me.

Did you set the color in the code to red? If I remember right, the default color it is searching for is red.

Also, did you build the camera pan/tilt mount so that you can use it with that?

What does line 114 say? This sets the mode for the demo. By default it is STOPLIGHT, which does:
"
/* this simple test move forward for green
 * and move away for red
 */
"

You might want to try GOFORWARD:
"
/* this simple test will drive forward if COLOR is detected
 * drive will last until autonomous terminates
 */
"
The color is GREEN in that example.

Also check SNAPSHOT to see what the image looks like. If it is too dim, the image goes all grey and the color will not be picked up.

I set up the camera using the LabVIEW vision application and determined the hue, saturation, and luminance with fixed color correction and exposure for the red piece of material supplied with the kit of parts. Using those values, I created the min/max structures for each property of the color I wanted to track as follows:
(The color values are not what I was using; this is just an example of what was in the code.)

Range hue, sat, lum;
hue.minValue = 140; // Hue
hue.maxValue = 155;
sat.minValue = 100; // Saturation
sat.maxValue = 255;
lum.minValue = 40; // Luminance
lum.maxValue = 255;

ParticleAnalysisReport par;

if (StartCameraTask() == -1)
{
printf("Failed to spawn camera task; Error code %s", GetErrorText(GetLastError()));
}

// this code is in the operator control loop…
if (FindColor(IMAQ_HSL, &hue, &sat, &lum, &par))
{
// note: the normalized center-of-mass and percent fields are floating point
printf("color found at x = %f, y = %f\n", par.center_mass_x_normalized, par.center_mass_y_normalized);
printf("color as percent of image: %f\n", par.particleToImagePercent);
}

Runs without errors, but FindColor returns a value of 128. I find this odd.

I also tried setting min/max values for all to 1 and 250 and it still couldn’t find anything. I think this should have found everything in the room, no?

I heard that there are still bugs in the camera color recognition code…is that true?

Yes. See my thread here for temporary bugfixes for WPILib. These are completely unofficial and have only been tested on our robot.

I also tried setting min/max values for all to 1 and 250 and it still couldn’t find anything. I think this should have found everything in the room, no?

This is because the values from the camera run from 0 to 255, so you wouldn’t have selected everything. The nature of the bugs means that FindColor is likely finding a single pixel or single small group of pixels before it finds the main ‘big’ particle, and returning that.

It does the same thing when I select realistic values grabbed from VisionAssist.

We spent the last 3 days trying to figure out why our realistic values didn’t work, just like you. The bugs we found are why.

FindColor goes like this:

  1. Selects all pixels in the latest camera image that match your color [this part is fine]
  2. Treats groups of selected pixels as particles [this part is fine]
  3. Takes the very first particle, regardless of size, and analyzes it [this is a bug caused by bugs in GetLargestParticle and InArea]
  4. Returns the analysis of this first particle, rather than an analysis of the biggest one found.

That is why FindColor is broken, even for realistic colors.

And here’s a picture to demonstrate the way it is broken.

The picture is an example of what the particle analysis actually uses: a ‘thresholded’ image where only the pixels that match your criteria are selected. The little dots are things like bright lights or pink/green dots in the background that inadvertently get selected. They are inevitable. The problem is that FindColor, as written, ALWAYS returns analysis on the first dot it finds.

example.PNG



Yes, I realize that, but in a real-world situation you’re not going to get the full range of values. Saying “everything” was an exaggeration to explain the concept. On the side of practical application, though, in a low-contrast setting it would have included practically “everything”.

You’re right, it would very nearly select every pixel, and if FindColor worked properly it should return a particle with size nearly equal to your camera’s pixel count. But as long as there was a way for a single small selected particle to exist (the middle of a lighting array, for example), you’d potentially get bad FindColor results if FindColor decided to return that small particle rather than the big one.