Getting output with GRIP generated code and USB camera

This is our third year as an FRC team, and we decided to use vision processing for delivering gears in autonomous mode. Once I succeeded in getting the GRIP UI to recognize reflective tape on a laptop with a USB camera and saw a button that said “Generate Code”, I thought that the implementation of the vision system would be simple. That was two weeks ago. :frowning: :confused: (they don’t seem to have a crying emoji)

My main difficulty has been with the various data types that these libraries use. I need to convert a UsbCamera object to a Mat object so that I can call process() with a USB camera image as the argument.

I added a few lines of code to the GripPipeline that should publish centerX to Network Tables, but when I run the code (and it does compile), my Network Tables do not contain anything from GRIP; they just have a bunch of information about my USB camera. Here’s the GRIP-generated code that I’m using:


package org.usfirst.frc5490.TestCode2017;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.HashMap;

import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.vision.VisionPipeline;

import org.opencv.core.*;
import org.opencv.core.Core.*;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;
import org.opencv.objdetect.*;

/**
* GripPipeline class.
*
* <p>An OpenCV pipeline generated by GRIP.
*
* @author GRIP
*/
public class GripPipeline implements VisionPipeline {

	private double centerX = 0;
	
	//Outputs
	private Mat blurOutput = new Mat();
	private Mat hslThresholdOutput = new Mat();
	private ArrayList<MatOfPoint> findContoursOutput = new ArrayList<MatOfPoint>();
	private ArrayList<MatOfPoint> filterContoursOutput = new ArrayList<MatOfPoint>();

	static {
		System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
	}

	/**
	 * This is the primary method that runs the entire pipeline and updates the outputs.
	 */
	@Override
	public void process(Mat source0) {
		// Step Blur0:
		Mat blurInput = source0;
		BlurType blurType = BlurType.get("Box Blur");
		double blurRadius = 13.513513513513505;
		blur(blurInput, blurType, blurRadius, blurOutput);

		// Step HSL_Threshold0:
		Mat hslThresholdInput = blurOutput;
		double[] hslThresholdHue = {59.89208633093526, 80.67911714770797};
		double[] hslThresholdSaturation = {52.74280575539568, 255.0};
		double[] hslThresholdLuminance = {103.19244604316548, 255.0};
		hslThreshold(hslThresholdInput, hslThresholdHue, hslThresholdSaturation, hslThresholdLuminance, hslThresholdOutput);

		// Step Find_Contours0:
		Mat findContoursInput = hslThresholdOutput;
		boolean findContoursExternalOnly = false;
		findContours(findContoursInput, findContoursExternalOnly, findContoursOutput);

		// Step Filter_Contours0:
		ArrayList<MatOfPoint> filterContoursContours = findContoursOutput;
		double filterContoursMinArea = 50.0;
		double filterContoursMinPerimeter = 25.0;
		double filterContoursMinWidth = 25.0;
		double filterContoursMaxWidth = 1000.0;
		double filterContoursMinHeight = 25.0;
		double filterContoursMaxHeight = 1000.0;
		double[] filterContoursSolidity = {0, 100};
		double filterContoursMaxVertices = 1000000.0;
		double filterContoursMinVertices = 0.0;
		double filterContoursMinRatio = 0.0;
		double filterContoursMaxRatio = 1000.0;
		filterContours(filterContoursContours, filterContoursMinArea, filterContoursMinPerimeter, filterContoursMinWidth, filterContoursMaxWidth, filterContoursMinHeight, filterContoursMaxHeight, filterContoursSolidity, filterContoursMaxVertices, filterContoursMinVertices, filterContoursMinRatio, filterContoursMaxRatio, filterContoursOutput);

		// Publish the center X of the first filtered contour to NetworkTables.
		// Guard against an empty list so process() does not throw when no
		// target is in view.
		if (!filterContoursOutput.isEmpty()) {
			Rect r = Imgproc.boundingRect(filterContoursOutput.get(0));
			centerX = r.x + (r.width / 2);
			NetworkTable.getTable("Grip").putNumber("centerX", centerX);
		}
	}

	/**
	 * This method is a generated getter for the output of a Blur.
	 * @return Mat output from Blur.
	 */
	public Mat blurOutput() {
		return blurOutput;
	}

	/**
	 * This method is a generated getter for the output of a HSL_Threshold.
	 * @return Mat output from HSL_Threshold.
	 */
	public Mat hslThresholdOutput() {
		return hslThresholdOutput;
	}

	/**
	 * This method is a generated getter for the output of a Find_Contours.
	 * @return ArrayList<MatOfPoint> output from Find_Contours.
	 */
	public ArrayList<MatOfPoint> findContoursOutput() {
		return findContoursOutput;
	}

	/**
	 * This method is a generated getter for the output of a Filter_Contours.
	 * @return ArrayList<MatOfPoint> output from Filter_Contours.
	 */
	public ArrayList<MatOfPoint> filterContoursOutput() {
		return filterContoursOutput;
	}


	/**
	 * An indication of which type of filter to use for a blur.
	 * Choices are BOX, GAUSSIAN, MEDIAN, and BILATERAL
	 */
	enum BlurType{
		BOX("Box Blur"), GAUSSIAN("Gaussian Blur"), MEDIAN("Median Filter"),
			BILATERAL("Bilateral Filter");

		private final String label;

		BlurType(String label) {
			this.label = label;
		}

		public static BlurType get(String type) {
			if (BILATERAL.label.equals(type)) {
				return BILATERAL;
			}
			else if (GAUSSIAN.label.equals(type)) {
			return GAUSSIAN;
		}
			else if (MEDIAN.label.equals(type)) {
				return MEDIAN;
			}
			else {
				return BOX;
			}
		}

		@Override
		public String toString() {
			return this.label;
		}
	}

	/**
	 * Softens an image using one of several filters.
	 * @param input The image on which to perform the blur.
	 * @param type The blurType to perform.
	 * @param doubleRadius The radius for the blur.
	 * @param output The image in which to store the output.
	 */
	private void blur(Mat input, BlurType type, double doubleRadius,
		Mat output) {
		int radius = (int)(doubleRadius + 0.5);
		int kernelSize;
		switch(type){
			case BOX:
				kernelSize = 2 * radius + 1;
				Imgproc.blur(input, output, new Size(kernelSize, kernelSize));
				break;
			case GAUSSIAN:
				kernelSize = 6 * radius + 1;
				Imgproc.GaussianBlur(input,output, new Size(kernelSize, kernelSize), radius);
				break;
			case MEDIAN:
				kernelSize = 2 * radius + 1;
				Imgproc.medianBlur(input, output, kernelSize);
				break;
			case BILATERAL:
				Imgproc.bilateralFilter(input, output, -1, radius, radius);
				break;
		}
	}

	/**
	 * Segment an image based on hue, saturation, and luminance ranges.
	 *
	 * @param input The image on which to perform the HSL threshold.
	 * @param hue The min and max hue
	 * @param sat The min and max saturation
	 * @param lum The min and max luminance
	 * @param output The image in which to store the output.
	 */
	private void hslThreshold(Mat input, double[] hue, double[] sat, double[] lum,
		Mat out) {
		Imgproc.cvtColor(input, out, Imgproc.COLOR_BGR2HLS);
		Core.inRange(out, new Scalar(hue[0], lum[0], sat[0]),
			new Scalar(hue[1], lum[1], sat[1]), out);
	}

	/**
	 * Finds contours in a binary image.
	 * @param input The binary image in which to find contours.
	 * @param externalOnly Whether to find only external contours.
	 * @param contours The list in which to store the found contours.
	 */
	private void findContours(Mat input, boolean externalOnly,
		List<MatOfPoint> contours) {
		Mat hierarchy = new Mat();
		contours.clear();
		int mode;
		if (externalOnly) {
			mode = Imgproc.RETR_EXTERNAL;
		}
		else {
			mode = Imgproc.RETR_LIST;
		}
		int method = Imgproc.CHAIN_APPROX_SIMPLE;
		Imgproc.findContours(input, contours, hierarchy, mode, method);
	}


	/**
	 * Filters out contours that do not meet certain criteria.
	 * @param inputContours is the input list of contours
	 * @param output is the output list of contours
	 * @param minArea is the minimum area of a contour that will be kept
	 * @param minPerimeter is the minimum perimeter of a contour that will be kept
	 * @param minWidth minimum width of a contour
	 * @param maxWidth maximum width
	 * @param minHeight minimum height
	 * @param maxHeight maximum height
	 * @param solidity the minimum and maximum solidity of a contour
	 * @param minVertexCount minimum vertex Count of the contours
	 * @param maxVertexCount maximum vertex Count
	 * @param minRatio minimum ratio of width to height
	 * @param maxRatio maximum ratio of width to height
	 */
	private void filterContours(List<MatOfPoint> inputContours, double minArea,
		double minPerimeter, double minWidth, double maxWidth, double minHeight, double
		maxHeight, double[] solidity, double maxVertexCount, double minVertexCount, double
		minRatio, double maxRatio, List<MatOfPoint> output) {
		final MatOfInt hull = new MatOfInt();
		output.clear();
		//operation
		for (int i = 0; i < inputContours.size(); i++) {
			final MatOfPoint contour = inputContours.get(i);
			final Rect bb = Imgproc.boundingRect(contour);
			if (bb.width < minWidth || bb.width > maxWidth) continue;
			if (bb.height < minHeight || bb.height > maxHeight) continue;
			final double area = Imgproc.contourArea(contour);
			if (area < minArea) continue;
			if (Imgproc.arcLength(new MatOfPoint2f(contour.toArray()), true) < minPerimeter) continue;
			Imgproc.convexHull(contour, hull);
			MatOfPoint mopHull = new MatOfPoint();
			mopHull.create((int) hull.size().height, 1, CvType.CV_32SC2);
			for (int j = 0; j < hull.size().height; j++) {
				int index = (int)hull.get(j, 0)[0];
				double[] point = new double[] { contour.get(index, 0)[0], contour.get(index, 0)[1]};
				mopHull.put(j, 0, point);
			}
			final double solid = 100 * area / Imgproc.contourArea(mopHull);
			if (solid < solidity[0] || solid > solidity[1]) continue;
			if (contour.rows() < minVertexCount || contour.rows() > maxVertexCount)	continue;
			final double ratio = bb.width / (double)bb.height;
			if (ratio < minRatio || ratio > maxRatio) continue;
			output.add(contour);
		}
	}
}

Note that I have tried this code straight from GRIP (without changing anything first) and I still get the same result, which is no result at all. Since nothing is published to Network Tables, I believe that process() is not being called correctly. I don’t know how to convert the UsbCamera object “camera” to the Mat source type that is accepted by process(). Here is the code in my main Robot class that I am using to attempt to call process() and output the number of contours in the frame to the console (ideally 1, since I perform each trial while holding a single piece of reflective tape in front of the robot). I am not getting any values in the console either, just some messages about my camera running at 30 fps. Anyway, here it is:


public void autonomous() {
    	UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(IMG_WIDTH, IMG_HEIGHT);
        CvSink cvSink = CameraServer.getInstance().getVideo();
    	Mat mat = new Mat();
    	if (cvSink.grabFrame(mat) == 0)
    	{
    		System.out.println(cvSink.getError());
    	}
    	else
    	{
        	GripPipeline grip = new GripPipeline();
        	while (true)
        	{
        		grip.process(mat);
        		System.out.println(grip.filterContoursOutput().size());
        	}
    	}
}
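
For what it’s worth, I realize the code above only grabs a single frame before entering the loop, so it just reprocesses the same image forever. One of the variations I tried grabs a fresh frame on every iteration, with the same lack of results:


    public void autonomous() {
    	UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    	camera.setResolution(IMG_WIDTH, IMG_HEIGHT);
    	CvSink cvSink = CameraServer.getInstance().getVideo();
    	GripPipeline grip = new GripPipeline();
    	Mat mat = new Mat();
    	while (true)
    	{
    		// grab a new frame from the camera every pass through the loop
    		if (cvSink.grabFrame(mat) == 0)
    		{
    			System.out.println(cvSink.getError());
    			continue;
    		}
    		grip.process(mat);
    		System.out.println(grip.filterContoursOutput().size());
    	}
    }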

I have tried hundreds of variations of this code that I have found on various threads and tutorials, and I have had no success. I am new to these libraries and do not know anyone who knows the first thing about how to do this. I have read all the screensteps tutorials a dozen times and used the code they provide, but I do not get any results. I have tried code that uses VisionThread pipelines, though I cannot, with all my years of coding experience, make any sense of what they actually do, and I have been met with the same lack of results.

Any help with this, especially any code that would convert the data from my USB camera to a Mat source, would be greatly appreciated.


Have you read the screensteps page?

I have read all the screensteps pages and I have tried the example code that they provide. When I use the code from that page and read out centerX to the console, it just prints 0.0. I do not know what the problem is. Here is my main class with the code from screensteps (teleop removed):


package org.usfirst.frc5490.TestCode2017;

import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.vision.VisionThread;
import edu.wpi.first.wpilibj.vision.VisionPipeline;
import edu.wpi.first.wpilibj.vision.VisionRunner;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.MjpegServer;

import org.opencv.core.Mat;

import java.util.ArrayList;

import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.SpeedController;
import edu.wpi.first.wpilibj.Talon;
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import com.analog.adis16448.frc.ADIS16448_IMU;

import org.usfirst.frc5490.TestCode2017.GripPipeline;


public class Robot extends SampleRobot {
	
    private SpeedController motor1Right;	
    private SpeedController motor1Left;  
    private SpeedController motor2Left;
    private SpeedController motor2Right;
    
    private Joystick stick1;
    private Joystick stick2;
    
    private static final int IMG_WIDTH = 320;
    private static final int IMG_HEIGHT = 240;
    private VisionThread visionThread;
    private double centerX = 0;
    private final Object imgLock = new Object();
	
    public Robot() {
        
    	motor1Right = new Talon(0);	// RIGHT 
        motor1Left = new Talon(2); // LEFT
        motor2Right = new Talon(1);
        motor2Left = new Talon(3);
              
        stick1 = new Joystick(0);	// initialize the joystick on port 0 RIGHT
        stick2 = new Joystick(1);	// initialize the joystick on port 1 LEFT
             
    }
    
    @Override
    public void robotInit()	{
    	
    	UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    	camera.setResolution(IMG_WIDTH, IMG_HEIGHT);
    	    	
    	visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
    		if(!pipeline.filterContoursOutput().isEmpty())	{
    			Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
    			synchronized (imgLock)	{
    				centerX = r.x + (r.width / 2);
    			}
    		}
    	});
    	visionThread.start();
    }

    public void autonomous() {    	
    	while (true)	{
    		double centerX;
    		synchronized (imgLock)	{
    			centerX = this.centerX;
    		}

    		System.out.println(centerX);
    	}

    }

}

I do not know whether the problem lies in this code or in my GRIP-generated code. The parameters in my GRIP pipeline work consistently in the GRIP UI, but I have not been able to get any dynamic output values when I run the code on the roboRIO (if I get any values at all).

I have been trying this for weeks and I do not know what the exact problem is. I would really appreciate some help with the code.

First try changing putNumber to putNumberArray; I think CenterX is a number array. If that does not work, try this:


// declare it
private NetworkTable table;

// Init NetworkTable
NetworkTable.setClientMode();
NetworkTable.setTeam(6325);
NetworkTable.setIPAddress("roborio-6325-frc.local"); // roboRIO IP (or try "localhost" to test)
NetworkTable.initialize();
table = NetworkTable.getTable("GRIP"); // what table data is put in
table.putNumber("CenterX",CenterX) // try .putNumberArray


I don’t know if this could be causing your issues or not, but the main loop in your autonomous code has no delay in it. This is probably hammering the CPU and/or I/O of your roboRIO.

In addition, your autonomous code does not appear to exit cleanly. I’m actually a bit curious what happens if you don’t exit autonomous gracefully (do you need to restart the code each time after starting autonomous?).


    public void autonomous() {    	
    	while (true)	{
    		double centerX;
    		synchronized (imgLock)	{
    			centerX = this.centerX;
    		}

    		System.out.println(centerX);
    	}

    }

Try inserting a slight delay inside the loop to reduce the load and let your while condition verify that you are still in autonomous mode:


    public void autonomous() {    	
    	while (isAutonomous() && isEnabled())	{
    		double centerX;
    		synchronized (imgLock)	{
    			centerX = this.centerX;
    		}

    		System.out.println(centerX);
                SmartDashboard.putNumber("centerX", centerX);

                // Limit output to ~20 lines a second
                Timer.delay(1.0 / 20.0);
    	}

    }

Thanks for your help. I made both those changes, but I am still getting zeros in the console and nothing in Network Tables. I have been using a good Logitech webcam and I switched it out for the Microsoft Lifecam, but it does not make any difference. Any other ideas?

This is a long shot, but if the problem is CenterX being an array, you would have to change double CenterX to double[] CenterX and keep putNumberArray. I am also working on some vision code, and I am using arrays for CenterX.
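
In other words, something like this (a sketch, using the table from my earlier post and your bounding rect r):


// declare it as a one-element array instead of a plain double
double[] CenterX = new double[1];

// fill it in from your bounding rect, then publish it as a number array
CenterX[0] = r.x + (r.width / 2.0);
table.putNumberArray("CenterX", CenterX);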

As far as diagnostics go, try adding a couple more SmartDashboard outputs. Declare a frameCnt at the top of your class:


// How many frames we received in the pipe line
int frameCnt = 0;

Then bump this count in your vision processing thread.


    	visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
                frameCnt++;
    		if(!pipeline.filterContoursOutput().isEmpty())	{
    			Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
    			synchronized (imgLock)	{
    				centerX = r.x + (r.width / 2);
    			}
    		}
    	});

Finally, add another count to your autonomous loop and dump both counts to the SmartDashboard:


    public void autonomous() {
        int autonCnt = 0;

    	while (isAutonomous() && isEnabled())	{
    		autonCnt++;
    		double centerX;
    		synchronized (imgLock)	{
    			centerX = this.centerX;
    		}

    		System.out.println(centerX);

                SmartDashboard.putNumber("centerX", centerX);
                SmartDashboard.putNumber("frameCnt", frameCnt);
                SmartDashboard.putNumber("autonCnt", autonCnt);

                // Limit output to ~20 lines a second
                Timer.delay(1.0 / 20.0);
    	}

    }

This will at least tell you the following:

  • That NetworkTables is running (there should be 3 fields on the smart dashboard)
  • The autonCnt field should be incrementing by roughly 20 counts per second.
  • If you are receiving vision information in your pipeline, then your frameCnt should also be incrementing.
  • If you see the frameCnt incrementing, but your centerX does not change, it sounds like your imaging filters are not locating the target.

This won’t help explain why you are not finding any contours, but it will at least provide a check to verify that you are getting to that point in the code.

That was a really good idea. When I put in the frame counter, it output zero to the SmartDashboard. I also put a SmartDashboard output in the GripPipeline’s process() to see whether that method was running, but the output never appeared in SmartDashboard:


SmartDashboard.putNumber("test", 12345);

I also noticed that the SmartDashboard camera stream only works with the Microsoft Lifecam and not my Logitech camera, which is a shame, because the Lifecam is terribly overexposed and that makes processing difficult.
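
If I am stuck with the Lifecam, I may try forcing manual exposure through cscore to darken the image (untested on my setup, and the values would need tuning):


// force a low manual exposure so the bright reflective tape stands out
camera.setExposureManual(10);
camera.setBrightness(30);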

I also noticed some errors in the console once I stopped outputting centerX to it:


NT: server: client CONNECTED: 172.22.11.1 port 58812 
 ********** Robot program starting ********** 
 CS: USB Camera 0: Connecting to USB camera on /dev/video0 
 CS: USB Camera 0: set format 1 res 160x120 
 CS: USB Camera 0: Connecting to USB camera on /dev/video0 
 CS: USB Camera 0: set format 1 res 320x240 
 CS: USB Camera 0: set FPS to 7 
 CS: ERROR: ioctl VIDIOC_S_EXT_CTRLS failed at UsbCameraProperty.cpp:66: Input/output error (UsbUtil.cpp:122) 
 Exception in thread "NTListener" VideoException [edu.wpi.cscore.VideoException: property write failed] 
 	at edu.wpi.cscore.CameraServerJNI.setProperty(Native Method) 
 	at edu.wpi.cscore.VideoProperty.set(VideoProperty.java:68) 
 	at edu.wpi.first.wpilibj.CameraServer.lambda$new$1(CameraServer.java:498) 
 	at edu.wpi.first.wpilibj.CameraServer$$Lambda$2/30452001.apply(Unknown Source) 
Warning  44003  FRC:  No robot code is currently running.  Driver Station 
 Default disabled() method running, consider providing your own 
 CS: USB Camera 0: set FPS to 0 
 CS: ERROR: ioctl VIDIOC_S_EXT_CTRLS failed at UsbCameraProperty.cpp:66: Input/output error (UsbUtil.cpp:122) 
 Exception in thread "NTListener" VideoException [edu.wpi.cscore.VideoException: property write failed] 
 	at edu.wpi.cscore.CameraServerJNI.setProperty(Native Method) 
 	at edu.wpi.cscore.VideoProperty.set(VideoProperty.java:68) 
 	at edu.wpi.first.wpilibj.CameraServer.lambda$new$1(CameraServer.java:498) 
 	at edu.wpi.first.wpilibj.CameraServer$$Lambda$2/30452001.apply(Unknown Source) 
 CS: ERROR: ioctl VIDIOC_S_CTRL failed at UsbCameraProperty.cpp:72: Input/output error (UsbUtil.cpp:122) 
 Exception in thread "NTListener" VideoException [edu.wpi.cscore.VideoException: property write failed] 
 	at edu.wpi.cscore.CameraServerJNI.setProperty(Native Method) 
 	at edu.wpi.cscore.VideoProperty.set(VideoProperty.java:68) 
 	at edu.wpi.first.wpilibj.CameraServer.lambda$new$1(CameraServer.java:498) 
 	at edu.wpi.first.wpilibj.CameraServer$$Lambda$2/30452001.apply(Unknown Source) 
 Exception in thread "WPILib Vision Thread" java.lang.IllegalStateException: Network tables has already been initialized 
 	at edu.wpi.first.wpilibj.networktables.NetworkTable.checkInit(NetworkTable.java:31) 
 	at edu.wpi.first.wpilibj.networktables.NetworkTable.setClientMode(NetworkTable.java:82) 
 	at org.usfirst.frc5490.TestCode2017.GripPipeline.process(GripPipeline.java:83) 
 	at edu.wpi.first.wpilibj.vision.VisionRunner.runOnce(VisionRunner.java:90) 
 	at edu.wpi.first.wpilibj.vision.VisionRunner.runForever(VisionRunner.java:111) 
 	at edu.wpi.first.wpilibj.vision.VisionThread$$Lambda$4/9727497.run(Unknown Source) 
 	at java.lang.Thread.run(Thread.java:745)

There seems to be some error that is interfering with the vision pipeline. In particular, the last exception suggests that the NetworkTable.setClientMode() call I added to process() (per the suggestion above) is failing because NetworkTables has already been initialized by the time the vision thread runs.
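
If that setClientMode() call is the problem, maybe the fix is to skip the client setup entirely and just grab the table inside process(), since NetworkTables is already initialized on the roboRIO (just a guess on my part):


// On the robot, NetworkTables is already running as the server,
// so no client-mode setup should be needed before getting a table.
NetworkTable table = NetworkTable.getTable("GRIP");
table.putNumber("centerX", centerX);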

Thanks to the programmers on Team 708 for sharing the solution to this issue with me. The entire problem was with a single line of code: the camera constructor must contain the port of the USB camera, which can be found in the roboRIO’s web dashboard interface. This port must be listed in the code as a string.

Here is the constructor in our program, which allowed the rest of our vision code to work properly:

camera = new UsbCamera("camera", "cam2");

(“cam2” is the port that was listed on our web dashboard)
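
For context, here is roughly how the constructor fits into our robotInit() (the names are from our code, and your device string may differ):


// "cam2" came from the roboRIO web dashboard; check yours there
UsbCamera camera = new UsbCamera("camera", "cam2");
CameraServer.getInstance().startAutomaticCapture(camera);
camera.setResolution(IMG_WIDTH, IMG_HEIGHT);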

I hope this helps anyone with a similar problem in the future!