Program Axis Camera in C++

Hi, lately I have been doing a lot of studying of C++. I’ve downloaded the WPI code and looked it over pretty well, but I’m just not exactly sure how to program the camera. We haven’t received our cRIO yet and I want to be ready to program it when we get it. Could someone explain to me how I would use the camera to track a specific color of light and then keep that light in the center of the camera’s view? Do you have well-commented code you could show me? Thanks for all your help.

Camera documentation starts at page 41 of this document:
http://first.wpi.edu/Images/CMS/First/C_Programming_Guide_for_FRC.pdf

Yeah, I’ve read this, but I don’t understand how this will take an image and align it in the center of the camera’s view. Could anyone highlight some code and explain it? Thanks.

Note that I’ve never done this, I’m just giving somewhat-informed advice.

Ok, if you turn to page 53, there’s a bit more detail about the processing step (I’m guessing you’ve read that too).
Here’s the code:

TrackingThreshold tdata = GetTrackingData(BLUE, FLUORESCENT); 

This line declares a TrackingThreshold structure. Based on the comments in the PDF and the way it is used in the code, this appears to be a block of settings that the FindColor function later uses to do its image analysis. In this case, it is being filled with preset defaults for a generic blue color under fluorescent lighting.
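For example, if the default blue settings don’t quite match your target, you could in principle tweak the ranges in the structure before handing it to FindColor. This is only a sketch, and I’m assuming here that the hue/saturation/luminance members are ranges with minValue and maxValue fields, which I haven’t verified against the header:

// Start from the built-in defaults for a fluorescent blue target.
TrackingThreshold tdata = GetTrackingData(BLUE, FLUORESCENT);

// Widen the hue window a little so the tracker is more forgiving
// under gym lighting. (minValue/maxValue are my assumption.)
tdata.hue.minValue -= 5;
tdata.hue.maxValue += 5;

The adjusted structure is what you would then pass to FindColor below.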

ParticleAnalysisReport par; 

This declares a ParticleAnalysisReport structure. This structure is filled with many useful image-analysis results by the FindColor function.

if (FindColor(IMAQ_HSL, &tdata.hue, &tdata.saturation, &tdata.luminance, &par)) 

This is where the magic happens. The FindColor function uses the hue, saturation, and luminance values (HSL is another way of specifying a color, like RGB) from the TrackingThreshold structure to tell it what to look for, then returns its results in the ParticleAnalysisReport structure. Note that they use an if statement here because FindColor returns 1 on success and 0 on failure. I haven’t found a definition of what it considers success or failure, but I’d guess (just a guess) that it returns failure if it can’t find a good enough blob.

{ 
  printf("color found at x = %f, y = %f\n", par.center_mass_x_normalized, par.center_mass_y_normalized); 
  printf("color as percent of image: %f\n", par.particleToImagePercent); 
}

This is where you would act on the results. The center_mass_x_normalized variable tells you where the best match (the biggest blue blob, in this case) is on the x axis: -1.0 means it is at the left edge of the camera’s view, and 1.0 means it is at the right edge.

So if I understand your question, you would want your robot to turn left when it sees a value less than 0.0, turn right when it sees a value greater than 0.0, and hold still when the value is near 0.0. It would be a good idea to use a PID loop to control this, though, because a simple algorithm like that would give you big oscillations.
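To make that concrete, here is a rough, untested sketch of what the aiming logic might look like, using only the calls shown above. AlignToTarget is just a name I made up, the 0.05 dead band and the 0.5 gain are made-up numbers you would have to tune, and this uses a single proportional gain rather than a full PID loop.

// Returns a turn command in the range -1.0 .. 1.0 (negative = turn left).
float AlignToTarget(TrackingThreshold &tdata)
{
  ParticleAnalysisReport par;

  if (FindColor(IMAQ_HSL, &tdata.hue, &tdata.saturation, &tdata.luminance, &par))
  {
    float error = par.center_mass_x_normalized;  // -1.0 = far left, 1.0 = far right

    if (error > -0.05 && error < 0.05)
      return 0.0;           // close enough to centered, stop turning

    return 0.5f * error;    // proportional response: bigger error, harder turn
  }

  return 0.0;               // no blob found, don't turn (you might search instead)
}

You would call something like this every control loop and feed the return value in as the rotation input to your drive code; a real PID loop would just replace that single gain with the usual P, I, and D terms.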

I haven’t examined it yet, but Team 1114 released C++ beta-test code with basic color-tracking autonomous functionality implemented on their 2007 robot. You might want to look at how they did it:

Click here.

From their site:

“The C++ code developed for teleoperated control of the robot. It includes examples of an IterativeRobot with one-joystick drive and simple arm control, as well as some convenience classes we wrote for double solenoids and the Logitech Dual Action Gamepad. Also included is an autonomous mode that tracks the green light and scores a tube on a spider leg, as well as a class for doing PID calculations 1114-style.”

(Yes, I know, I linked to a certain intermediate website instead of providing the direct link. Tough cookies! :slight_smile: )

Thanks, guys, for the help. Bongle, that is exactly what I’m looking for. Thanks a lot.