I have been trying to get the camera to work with the cRIO. We set the camera up as per the directions and hooked it into the cRIO via the crossover cable. I compiled their default camera code and uploaded it to see if it was working, but the terminal just prints “Unable to find color” (or something along those lines). No matter how close or far away I hold the green cloth it won’t see it.
I heard that there are still bugs in the camera color recognition code…is that true? Or are we having a different problem?
I’ve been having the same problem here. I managed to get the LabView code working with the camera (including tracking), but the vision APIs written in C/C++ don’t seem to be working correctly. I’m going to dig into the libs further and see if I can generate my own wrapper class for the vision API. I really don’t want to use LabView. Give me gcc and vi and I’m happy. There’s just something about GUIs that gets me.
I set up the camera using the LabView vision application and determined the hue, saturation and luminance with fixed color correction and exposure for the red piece of material supplied with the kit of parts. Using those values, I created the min/max structures for each property of the color I wanted to track as follows:
(The color values are not what I was actually using; this is just an example of what was in the code.)
if (StartCameraTask() == -1)
{
    printf("Failed to spawn camera task; Error code %s\n", GetErrorText(GetLastError()));
}

// this code is in the operator control loop...
if (FindColor(IMAQ_HSL, &hue, &sat, &lum, &par))
{
    // the normalized center-of-mass fields and particleToImagePercent are
    // floating-point, so %f is used rather than %i/%d
    printf("color found at x = %f, y = %f\n", par.center_mass_x_normalized, par.center_mass_y_normalized);
    printf("color as percent of image: %f\n", par.particleToImagePercent);
}
It runs without errors, but FindColor returns a value of 128, which I find odd.
I also tried setting min/max values for all three channels to 1 and 250 and it still couldn’t find anything. I think this should have found everything in the room, no?
I heard that there are still bugs in the camera color recognition code…is that true?
Yes. See my thread here for temporary bugfixes for WPILib. These are completely unofficial and have only been tested on our robot.
I also tried setting min/max values for all three channels to 1 and 250 and it still couldn’t find anything. I think this should have found everything in the room, no?
This is because the values from the camera run from 0 to 255, so you wouldn’t have selected everything. The nature of the bugs means that FindColor is likely finding a single pixel or single small group of pixels before it finds the main ‘big’ particle, and returning that.
And here’s a picture to demonstrate the way it is broken.
The picture is an example of what the particle analysis actually uses: a ‘thresholded’ image where only the pixels that match your criteria are selected. The little dots are things like bright lights or pink/green dots in the background that inadvertently get selected. They are inevitable. The problem is that FindColor, as written, ALWAYS returns analysis on the first dot it finds.
Yes, I realize that, but in a real-world situation you’re not going to get the full range of values. To say “everything” was an exaggeration to explain the concept. But in terms of practical application, in a low-contrast setting it would have included practically “everything”.
You’re right, it would very nearly select every pixel, and if FindColor worked properly it should return a particle with size nearly equal to your camera’s pixel count. But as long as there was any way for a single small selected particle to exist (the middle of a lighting array or something), you’d potentially get bad FindColor results if it decided to return that small particle rather than the big one.