Camera Tracking Code

This is our fifth year competing in an FRC regional, and we decided to attempt camera tracking again; we haven't been successful with it in the past. The code is supposed to detect the low, medium, or high goal and center the robot according to the particle's center mass x. So far we have been able to get the robot to see the high goal using the light ring, and I was able to get the Jaguar motors to move according to the center mass x.

The two issues:

  • The tracking only works in the dark
  • The center mass x barely changes even if the camera can see the reflective tape

Can anyone help?
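For context, the centering behavior we're aiming for boils down to something like this. This is only a simplified sketch, not our robot code: IMAGE_CENTER_X assumes a 320-pixel-wide image, and DEADBAND and GAIN are made-up placeholder values that would need tuning on the real robot.

```java
// Simplified sketch of centering on a particle's center_mass_x.
public class CenteringSketch {
    static final int IMAGE_CENTER_X = 160; // half of a 320-pixel-wide image (assumption)
    static final int DEADBAND = 10;        // pixels of error we ignore (placeholder)
    static final double GAIN = 0.005;      // motor output per pixel of error (placeholder)

    /** Returns a turn command in [-1, 1] from the particle's center_mass_x. */
    static double turnCommand(int centerMassX) {
        int error = centerMassX - IMAGE_CENTER_X;
        if (Math.abs(error) < DEADBAND) {
            return 0.0; // close enough to centered; stop turning
        }
        double output = error * GAIN;
        // clamp to the Jaguar's [-1, 1] output range
        return Math.max(-1.0, Math.min(1.0, output));
    }

    public static void main(String[] args) {
        System.out.println(turnCommand(160)); // centered   -> 0.0
        System.out.println(turnCommand(300)); // far right  -> 0.7
        System.out.println(turnCommand(20));  // far left   -> -0.7
    }
}
```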

package edu.wpi.first.wpilibj.templates;

import edu.wpi.first.wpilibj.Joystick;
//import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.CriteriaCollection;
import edu.wpi.first.wpilibj.image.NIVision.MeasurementType;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;
import edu.wpi.first.wpilibj.image.RGBImage;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj.Relay;
import edu.wpi.first.wpilibj.DriverStationLCD;
import edu.wpi.first.wpilibj.Jaguar;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;

/**
 * Sample program that uses NIVision to find rectangles in the scene that are
 * illuminated by a red ring light (similar to the model from FIRSTChoice). The
 * camera sensitivity is set very low so as to only show light sources and
 * remove any distracting parts of the image.
 *
 * The CriteriaCollection is the set of criteria used to filter the set of
 * rectangles that are detected. In this example we're looking for rectangles
 * with a minimum width of 30 pixels and a maximum of 400 pixels, and similarly
 * for height (see the addCriteria() calls below).
 *
 * The algorithm first does a color threshold operation that keeps only objects
 * in the scene with a significant red color component. It then removes small
 * objects that might be caused by red reflections scattered from other parts
 * of the scene. A convex hull operation then fills in the rectangle outlines
 * (even the partially occluded ones). Finally, a particle filter looks for all
 * the shapes that meet the requirements specified in the criteria collection.
 *
 * Look in the VisionImages directory inside the created project for the
 * sample images, as well as the NI Vision Assistant file that contains the
 * vision command chain (open it with the Vision Assistant).
 */
public class CameraTest extends IterativeRobot {

    AxisCamera camera;          // the axis camera object (connected to the switch)
    CriteriaCollection cc;      // the criteria for doing the particle filter operation
    //Joystick joystick = new Joystick(1);
    Joystick joystick;
    Joystick gamepad;
    Relay spikeRelay;
    DriverStationLCD b_LCD;
    Drive robotDrive;
    Jaguar jagTL;
    Jaguar jagTR;
    Jaguar jagBL;
    Jaguar jagBR;

    public void robotInit() {
        camera = AxisCamera.getInstance("");  // get an instance ofthe camera
        cc = new CriteriaCollection();      // create the criteria for the particle filter
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
        spikeRelay = new Relay(5);
        b_LCD = DriverStationLCD.getInstance();
        jagTL = new Jaguar(3);
        jagTR = new Jaguar(1);
        jagBL = new Jaguar(4);
        jagBR = new Jaguar(2);
        joystick = new Joystick(1);
        gamepad = new Joystick(2);
        robotDrive = new Drive(jagTR, jagBR, jagTL, jagBL);
    }
    public void autonomousPeriodic() {
    }

    /**
     * This function is called periodically during operator control.
     */
    public void teleopPeriodic() {

        if (joystick.getRawButton(1)) {
            try {
                /*
                 * Do the image capture with the camera and apply the algorithm
                 * described above. This sample will either get images from the
                 * camera or from an image file stored in the top-level
                 * directory of the flash memory on the cRIO. The file name in
                 * this case is "10ft2.jpg".
                 */
                ColorImage image = camera.getImage();         // comment if using stored images
//                ColorImage image;                           // next 2 lines read image from flash on cRIO
//                image =  new RGBImage("/10ft2.jpg");
                //BinaryImage thresholdImage = image.thresholdRGB(25, 255, 0, 45, 0, 47);   // keep only red objects
                BinaryImage thresholdImage = image.thresholdRGB(0, 45, 25, 225, 0, 45); //Michelle changed for green
                BinaryImage bigObjectsImage = thresholdImage.removeSmallObjects(false, 2);  // remove small artifacts
                BinaryImage convexHullImage = bigObjectsImage.convexHull(false);          // fill in occluded rectangles
                BinaryImage filteredImage = convexHullImage.particleFilter(cc);           // find filled in rectangles

                ParticleAnalysisReport[] reports = filteredImage.getOrderedParticleAnalysisReports();  // get list of results
                for (int i = 0; i < reports.length; i++) {                                // print results
                    ParticleAnalysisReport r = reports[i];
                    //System.out.println("Particle: " + i + ":  Center of mass x: " + r.center_mass_x);
                    //int centerMass = r.center_mass_x;
                    b_LCD.println(DriverStationLCD.Line.kMain6, 1, "Particle: " + i);
                    b_LCD.println(DriverStationLCD.Line.kUser5, 1, "Centermass x: " + r.center_mass_x);

                    if (r.center_mass_x < 200) {
                        // target is left of center: turn one way
                    } else {
                        // target is right of center: turn the other way
                    }
                }

                System.out.println(filteredImage.getNumberParticles() + "  " + Timer.getFPGATimestamp());

                SmartDashboard.putInt("Number of Particles: ", filteredImage.getNumberParticles());
                SmartDashboard.putInt("Height:  ", filteredImage.getHeight());
                SmartDashboard.putInt("Width:  ", filteredImage.getWidth());

                /*
                 * All images in Java must be freed after they are used, since
                 * they are allocated out of C data structures. Not calling
                 * free() will cause the memory to accumulate over each pass of
                 * this loop.
                 */
                filteredImage.free();
                convexHullImage.free();
                bigObjectsImage.free();
                thresholdImage.free();
                image.free();
            } catch (AxisCameraException ex) {        // needed because camera.getImage() can throw
                ex.printStackTrace();
            } catch (NIVisionException ex) {
                ex.printStackTrace();
            }
        } else {
            // button not pressed: do nothing
        }
    }

    public void testPeriodic() {
    }
}

We are also having trouble with the camera. It keeps giving us an error telling us that the cRIO has run out of memory. Is there a way to free the memory, and how?

If you are seeing an out of memory error, this usually means that you are creating more images than you are freeing. At the end of the image processing loop, after you’ve done all the filters and math that you think that you will need, you MUST call the free() method on ALL of the images you’ve created.

Keep in mind that the images are stored in C data structures, so you must explicitly call free() on them in order for them to be discarded. As the previous poster said, make sure that you do that or you WILL run out of memory on the cRIO.
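To make the pattern concrete, here's a minimal sketch using a stand-in Image class, since the real WPILib ColorImage/BinaryImage types only run on the cRIO. The key point is the finally block: every image created in the try block gets freed even when an exception cuts the frame short, so nothing leaks across loop iterations.

```java
// Sketch of the free-every-image pattern with a stand-in class.
// The liveImages counter plays the role of the C heap: it must be
// back to zero after every frame, or memory accumulates each loop.
public class FreePatternSketch {
    static int liveImages = 0; // count of images that have not been freed

    static class Image {
        boolean freed = false;
        Image() { liveImages++; }
        void free() { if (!freed) { freed = true; liveImages--; } }
    }

    static void processOneFrame() {
        Image image = null, thresholdImage = null, filteredImage = null;
        try {
            image = new Image();          // stands in for camera.getImage()
            thresholdImage = new Image(); // stands in for image.thresholdRGB(...)
            filteredImage = new Image();  // stands in for convexHull + particleFilter
            // ... read center_mass_x from the reports here ...
        } finally {
            // free in all cases, even if an exception was thrown above
            if (filteredImage != null) filteredImage.free();
            if (thresholdImage != null) thresholdImage.free();
            if (image != null) image.free();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) processOneFrame();
        System.out.println(liveImages); // 0 -> nothing leaked
    }
}
```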