Issue With Vision Tracking

Hi all,

In preparation for off-season events, our team is trying to implement vision tracking on our bot using GRIP. The pipeline, which as a proof of concept is designed to track something green, works fine in GRIP: the final block, a filterContours module, only sees the object we want. Unfortunately, when we generate a class file and plug it into a VisionThread on the RIO, it doesn't work. Thanks to some print statements, I'm pretty sure the thread is starting normally, but the filterContoursOutput().isEmpty() method returns true even when the object is in frame, so it doesn't store the location of the green item. I'll include our code below, and if anyone could help figure out what's wrong, I'd very much appreciate it. If there are any clarifying questions I can answer, feel free to ask!

Robot.Java:

/*----------------------------------------------------------------------------*/
/* Copyright (c) 2017-2018 FIRST. All Rights Reserved.                        */
/* Open Source Software - may be modified and shared by FRC teams. The code   */
/* must be accompanied by the FIRST BSD license file in the root directory of */
/* the project.                                                               */
/*----------------------------------------------------------------------------*/

package org.usfirst.frc.team2523.robot;

import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

import com.ctre.phoenix.motorcontrol.can.WPI_TalonSRX;

import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.Servo;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj.vision.VisionRunner;
import edu.wpi.first.wpilibj.vision.VisionThread;

/**
 * The VM is configured to automatically run this class, and to call the
 * functions corresponding to each mode, as described in the IterativeRobot
 * documentation. If you change the name of this class or the package after
 * creating this project, you must also update the build.properties file in the
 * project.
 */
public class Robot extends IterativeRobot {
    private static final String kDefaultAuto = "Default";
    private static final String kCustomAuto = "My Auto";
    private String m_autoSelected;
    private SendableChooser<String> m_chooser = new SendableChooser<>();
    private VisionThread vt;
    private double centerX = -1;
    private final Object imgLock = new Object();
    WPI_TalonSRX leftF = new WPI_TalonSRX(8);
    WPI_TalonSRX rightF = new WPI_TalonSRX(5);
    public WPI_TalonSRX leftR = new WPI_TalonSRX(3);
    public WPI_TalonSRX rightR = new WPI_TalonSRX(9);
    Servo s = new Servo(8);

    /**
     * This function is run when the robot is first started up and should be
     * used for any initialization code.
     */
    @Override
    public void robotInit() {
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(640, 480);

        vt = new VisionThread(camera, new MyVisionPipeline(), new VisionRunner.Listener<MyVisionPipeline>() {
            public void copyPipelineOutputs(MyVisionPipeline pipeline) {
                //System.out.println("Got into pipeline Method");
                if (!pipeline.filterContoursOutput().isEmpty()) {
                    System.out.println("Got into if");
                    Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
                    synchronized (imgLock) {
                        centerX = r.x + (r.width / 2);
                    }
                } else {
                    centerX = -1;
                }
            }
        });

        vt.start();
        m_chooser.addDefault("Default Auto", kDefaultAuto);
        m_chooser.addObject("My Auto", kCustomAuto);
        SmartDashboard.putData("Auto choices", m_chooser);
    }

    /**
     * This autonomous (along with the chooser code above) shows how to select
     * between different autonomous modes using the dashboard. The sendable
     * chooser code works with the Java SmartDashboard. If you prefer the
     * LabVIEW Dashboard, remove all of the chooser code and uncomment the
     * getString line to get the auto name from the text box below the Gyro.
     *
     * <p>You can add additional auto modes by adding additional comparisons to
     * the switch structure below with additional strings. If using the
     * SendableChooser make sure to add them to the chooser code above as well.
     */
    @Override
    public void autonomousInit() {
        m_autoSelected = m_chooser.getSelected();
        // autoSelected = SmartDashboard.getString("Auto Selector", defaultAuto);
        System.out.println("Auto selected: " + m_autoSelected);
        s.set(0);
    }

    /**
     * This function is called periodically during autonomous.
     */
    @Override
    public void autonomousPeriodic() {
        // Put default auto code here
        System.out.println("centerX: " + centerX);
        if (centerX < 0) {
            leftF.set(0);
            leftR.set(0);
            rightF.set(0);
            rightR.set(0);
            System.out.println("No Blob Detected");
        } else if (centerX < 300) {
            leftF.set(-.2);
            leftR.set(-.2);
            rightF.set(.2);
            rightR.set(.2);
        } else if (centerX > 340) {
            leftF.set(.2);
            leftR.set(.2);
            rightF.set(-.2);
            rightR.set(-.2);
        } else {
            // FIRE!
            s.set(1);
            Timer.delay(.5);
            s.set(0);
        }
    }

    /**
     * This function is called periodically during operator control.
     */
    @Override
    public void teleopPeriodic() {
    }

    /**
     * This function is called periodically during test mode.
     */
    @Override
    public void testPeriodic() {
    }
}

GRIP Pipeline:

package org.usfirst.frc.team2523.robot;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.HashMap;

import edu.wpi.first.wpilibj.vision.VisionPipeline;

import org.opencv.core.*;
import org.opencv.core.Core.*;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;
import org.opencv.objdetect.*;

/**
 * MyVisionPipeline class.
 *
 * <p>An OpenCV pipeline generated by GRIP.
 *
 * @author GRIP
 */
public class MyVisionPipeline implements VisionPipeline {

    //Outputs
    private Mat resizeImageOutput = new Mat();
    private Mat hslThresholdOutput = new Mat();
    private Mat cvErodeOutput = new Mat();
    private ArrayList<MatOfPoint> findContoursOutput = new ArrayList<MatOfPoint>();
    private ArrayList<MatOfPoint> filterContoursOutput = new ArrayList<MatOfPoint>();

    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    /**
     * This is the primary method that runs the entire pipeline and updates the outputs.
     */
    @Override
    public void process(Mat source0) {
        // Step Resize_Image0:
        Mat resizeImageInput = source0;
        double resizeImageWidth = 640.0;
        double resizeImageHeight = 480.0;
        int resizeImageInterpolation = Imgproc.INTER_CUBIC;
        resizeImage(resizeImageInput, resizeImageWidth, resizeImageHeight, resizeImageInterpolation, resizeImageOutput);

        // Step HSL_Threshold0:
        Mat hslThresholdInput = resizeImageOutput;
        double[] hslThresholdHue = {79.31654676258992, 93.63636363636363};
        double[] hslThresholdSaturation = {107.77877697841726, 255.0};
        double[] hslThresholdLuminance = {0.0, 98.3080808080808};
        hslThreshold(hslThresholdInput, hslThresholdHue, hslThresholdSaturation, hslThresholdLuminance, hslThresholdOutput);

        // Step CV_erode0:
        Mat cvErodeSrc = hslThresholdOutput;
        Mat cvErodeKernel = new Mat();
        Point cvErodeAnchor = new Point(-1, -1);
        double cvErodeIterations = 7.0;
        int cvErodeBordertype = Core.BORDER_CONSTANT;
        Scalar cvErodeBordervalue = new Scalar(-1);
        cvErode(cvErodeSrc, cvErodeKernel, cvErodeAnchor, cvErodeIterations, cvErodeBordertype, cvErodeBordervalue, cvErodeOutput);

        // Step Find_Contours0:
        Mat findContoursInput = cvErodeOutput;
        boolean findContoursExternalOnly = false;
        findContours(findContoursInput, findContoursExternalOnly, findContoursOutput);

        // Step Filter_Contours0:
        ArrayList<MatOfPoint> filterContoursContours = findContoursOutput;
        double filterContoursMinArea = 15.0;
        double filterContoursMinPerimeter = 0.0;
        double filterContoursMinWidth = 0.0;
        double filterContoursMaxWidth = 1000.0;
        double filterContoursMinHeight = 0.0;
        double filterContoursMaxHeight = 1000.0;
        double[] filterContoursSolidity = {0, 100};
        double filterContoursMaxVertices = 1000000.0;
        double filterContoursMinVertices = 0.0;
        double filterContoursMinRatio = 0.0;
        double filterContoursMaxRatio = 1000.0;
        filterContours(filterContoursContours, filterContoursMinArea, filterContoursMinPerimeter, filterContoursMinWidth, filterContoursMaxWidth, filterContoursMinHeight, filterContoursMaxHeight, filterContoursSolidity, filterContoursMaxVertices, filterContoursMinVertices, filterContoursMinRatio, filterContoursMaxRatio, filterContoursOutput);
    }

    /**
     * This method is a generated getter for the output of a Resize_Image.
     * @return Mat output from Resize_Image.
     */
    public Mat resizeImageOutput() {
        return resizeImageOutput;
    }

    /**
     * This method is a generated getter for the output of a HSL_Threshold.
     * @return Mat output from HSL_Threshold.
     */
    public Mat hslThresholdOutput() {
        return hslThresholdOutput;
    }

    /**
     * This method is a generated getter for the output of a CV_erode.
     * @return Mat output from CV_erode.
     */
    public Mat cvErodeOutput() {
        return cvErodeOutput;
    }

    /**
     * This method is a generated getter for the output of a Find_Contours.
     * @return ArrayList<MatOfPoint> output from Find_Contours.
     */
    public ArrayList<MatOfPoint> findContoursOutput() {
        return findContoursOutput;
    }

    /**
     * This method is a generated getter for the output of a Filter_Contours.
     * @return ArrayList<MatOfPoint> output from Filter_Contours.
     */
    public ArrayList<MatOfPoint> filterContoursOutput() {
        return filterContoursOutput;
    }

    /**
     * Scales an image to an exact size.
     * @param input The image on which to perform the Resize.
     * @param width The width of the output in pixels.
     * @param height The height of the output in pixels.
     * @param interpolation The type of interpolation.
     * @param output The image in which to store the output.
     */
    private void resizeImage(Mat input, double width, double height,
            int interpolation, Mat output) {
        Imgproc.resize(input, output, new Size(width, height), 0.0, 0.0, interpolation);
    }

    /**
     * Segment an image based on hue, saturation, and luminance ranges.
     * @param input The image on which to perform the HSL threshold.
     * @param hue The min and max hue.
     * @param sat The min and max saturation.
     * @param lum The min and max luminance.
     * @param out The image in which to store the output.
     */
    private void hslThreshold(Mat input, double[] hue, double[] sat, double[] lum,
            Mat out) {
        Imgproc.cvtColor(input, out, Imgproc.COLOR_BGR2HLS);
        Core.inRange(out, new Scalar(hue[0], lum[0], sat[0]),
                new Scalar(hue[1], lum[1], sat[1]), out);
    }

    /**
     * Expands area of lower value in an image.
     * @param src the Image to erode.
     * @param kernel the kernel for erosion.
     * @param anchor the center of the kernel.
     * @param iterations the number of times to perform the erosion.
     * @param borderType pixel extrapolation method.
     * @param borderValue value to be used for a constant border.
     * @param dst Output Image.
     */
    private void cvErode(Mat src, Mat kernel, Point anchor, double iterations,
            int borderType, Scalar borderValue, Mat dst) {
        if (kernel == null) {
            kernel = new Mat();
        }
        if (anchor == null) {
            anchor = new Point(-1, -1);
        }
        if (borderValue == null) {
            borderValue = new Scalar(-1);
        }
        Imgproc.erode(src, dst, kernel, anchor, (int) iterations, borderType, borderValue);
    }

    /**
     * Finds contours in a binary image.
     * @param input The binary image in which to find contours.
     * @param externalOnly Whether to find only external contours.
     * @param contours The list in which to store the found contours.
     */
    private void findContours(Mat input, boolean externalOnly,
            List<MatOfPoint> contours) {
        Mat hierarchy = new Mat();
        contours.clear();
        int mode;
        if (externalOnly) {
            mode = Imgproc.RETR_EXTERNAL;
        } else {
            mode = Imgproc.RETR_LIST;
        }
        int method = Imgproc.CHAIN_APPROX_SIMPLE;
        Imgproc.findContours(input, contours, hierarchy, mode, method);
    }

    /**
     * Filters out contours that do not meet certain criteria.
     * @param inputContours is the input list of contours
     * @param output is the output list of contours
     * @param minArea is the minimum area of a contour that will be kept
     * @param minPerimeter is the minimum perimeter of a contour that will be kept
     * @param minWidth minimum width of a contour
     * @param maxWidth maximum width
     * @param minHeight minimum height
     * @param maxHeight maximum height
     * @param solidity the minimum and maximum solidity of a contour
     * @param minVertexCount minimum vertex count of the contours
     * @param maxVertexCount maximum vertex count
     * @param minRatio minimum ratio of width to height
     * @param maxRatio maximum ratio of width to height
     */
    private void filterContours(List<MatOfPoint> inputContours, double minArea,
            double minPerimeter, double minWidth, double maxWidth, double minHeight, double
            maxHeight, double[] solidity, double maxVertexCount, double minVertexCount, double
            minRatio, double maxRatio, List<MatOfPoint> output) {
        final MatOfInt hull = new MatOfInt();
        output.clear();
        // operation
        for (int i = 0; i < inputContours.size(); i++) {
            final MatOfPoint contour = inputContours.get(i);
            final Rect bb = Imgproc.boundingRect(contour);
            if (bb.width < minWidth || bb.width > maxWidth) continue;
            if (bb.height < minHeight || bb.height > maxHeight) continue;
            final double area = Imgproc.contourArea(contour);
            if (area < minArea) continue;
            if (Imgproc.arcLength(new MatOfPoint2f(contour.toArray()), true) < minPerimeter) continue;
            Imgproc.convexHull(contour, hull);
            MatOfPoint mopHull = new MatOfPoint();
            mopHull.create((int) hull.size().height, 1, CvType.CV_32SC2);
            for (int j = 0; j < hull.size().height; j++) {
                int index = (int) hull.get(j, 0)[0];
                double[] point = new double[] { contour.get(index, 0)[0], contour.get(index, 0)[1] };
                mopHull.put(j, 0, point);
            }
            final double solid = 100 * area / Imgproc.contourArea(mopHull);
            if (solid < solidity[0] || solid > solidity[1]) continue;
            if (contour.rows() < minVertexCount || contour.rows() > maxVertexCount) continue;
            final double ratio = bb.width / (double) bb.height;
            if (ratio < minRatio || ratio > maxRatio) continue;
            output.add(contour);
        }
    }

}

How does copyPipelineOutputs get called?

I don’t actually know. The way that section is written is from a suggestion in this post:

https://www.chiefdelphi.com/forums/showthread.php?p=1636748

Which I used because writing it the other way caused a bunch of compile-time errors and red underlines. That being said, that method does get called, because a print statement in that method did print to the console periodically.

Any reason why you are processing your images on the RIO? I've never done it directly on the RIO before; I've always gotten data from NetworkTables. If you push your contours to NetworkTables you may be able to grab them from there.
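If you go that route, the NetworkTables publish call itself is robot-side WPILib code, but the part worth sketching is flattening the contour results into a single number array you could publish (e.g. with putNumberArray) and read elsewhere. This helper is hypothetical (the name and the {x, y, width, height} box layout are my assumptions), just to show the packing:

```java
import java.util.List;

public class ContourPacker {
    // Flatten bounding-box centers into a double[] of x,y pairs -- the kind
    // of array you could publish to NetworkTables and read off-board.
    // Each box is {x, y, width, height}, matching OpenCV's Rect fields.
    public static double[] packCenters(List<int[]> boundingBoxes) {
        double[] out = new double[boundingBoxes.size() * 2];
        for (int i = 0; i < boundingBoxes.size(); i++) {
            int[] b = boundingBoxes.get(i);
            out[2 * i]     = b[0] + b[2] / 2.0; // center x
            out[2 * i + 1] = b[1] + b[3] / 2.0; // center y
        }
        return out;
    }
}
```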

I think I probably just figured it out: I missed a bit of code and forgot to transfer centerX across threads with the imgLock object. I'll test when I get to the robot.
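For anyone following along, the pattern being described looks roughly like this (a minimal plain-Java sketch with hypothetical names, mirroring the imgLock/centerX fields in the code above): every write and every read of the shared value goes through the same lock, so the periodic loop never sees a half-updated value.

```java
public class SharedCenter {
    private final Object imgLock = new Object();
    private double centerX = -1; // -1 means "no target seen"

    // Called from the vision thread whenever a contour is (or isn't) found.
    public void update(double x) {
        synchronized (imgLock) {
            centerX = x;
        }
    }

    // Called from autonomousPeriodic(): copy the value out under the lock,
    // then drive off the local copy for the rest of the loop iteration.
    public double read() {
        synchronized (imgLock) {
            return centerX;
        }
    }
}
```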

Also, how did you tune your camera parameters and settings? Did you have GRIP pull the HTTP stream from the RoboRIO? Or did you develop with the camera plugged into the Windows machine? I would recommend the former to ensure that you don’t have different camera settings. Camera settings are not at all consistent between Windows and Linux even with the same USB camera.

Unrelated, but two tips for the future:

1) CD offers [code] tags that make for nicely formatted code:

if (x == 5) {
    buzz();
    fizz();
}

2) Splitting the code into multiple class (.java) files can help with readability. Especially, in this case, moving all vision processing into its own class.


Hope your solution works well! Post again if you're still stuck :D

It looks like the VisionRunner class calls this in its runOnce() function.
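For reference, the shape of that call is roughly this (a simplified sketch with hypothetical interfaces, not the actual WPILib source): each iteration grabs a frame, runs the pipeline's process(), then hands the same pipeline object to the listener, which is why copyPipelineOutputs sees fresh outputs every frame.

```java
public class RunnerSketch {
    // Stand-ins for WPILib's VisionPipeline and VisionRunner.Listener.
    public interface Pipeline { void process(int frame); int output(); }
    public interface Listener { void copyPipelineOutputs(Pipeline p); }

    // One iteration of the runner loop: process the frame, then let the
    // listener copy whatever outputs the pipeline just computed.
    public static void runOnce(int frame, Pipeline pipeline, Listener listener) {
        pipeline.process(frame);
        listener.copyPipelineOutputs(pipeline);
    }
}
```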

We are in year two of vision tracking, so we're far from experts; however, in our experience this is the most likely cause of a pipeline that works in GRIP but doesn't work on the RIO.

The CameraServer will likely be working with the camera stream at different exposure settings, so a pipeline tuned for things like HSV thresholds will not behave the same on the RIO.
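One way to reduce that drift is to lock the camera's auto-exposure and white balance in robotInit(), so the stream the RIO pipeline sees matches what you tuned against. A rough sketch using the cscore UsbCamera API; all the numeric values here are placeholders to tune for your own lighting:

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;

public class CameraSetup {
    // Call from robotInit(). The point is to disable the auto modes so the
    // image the pipeline sees stays consistent; tune the numbers yourself.
    public static UsbCamera startLockedCamera() {
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(640, 480);
        camera.setExposureManual(10);       // fixed exposure (0-100)
        camera.setWhiteBalanceManual(4500); // fixed white balance
        camera.setBrightness(30);           // fixed brightness (0-100)
        return camera;
    }
}
```

This is a configuration fragment and only runs on the robot, but locking these settings is also what makes tuning GRIP against the RIO's HTTP stream (rather than a locally plugged-in camera) pay off.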