This is our third year as an FRC team and we decided to use vision processing for delivering gears in autonomous mode. Once I succeeded in getting the GRIP UI to recognize reflective tape on a laptop with a USB camera and saw a button that said “Generate Code”, I thought that the implementation of the vision system would be simple. That was two weeks ago.
(they don’t seem to have a crying emoji)
My main difficulty has been with the various data types that these libraries use. I need to convert a UsbCamera object to a Mat object so that I can call process() with a USB camera image as the argument.
I added a few lines of code to the GripPipeline that should publish the centerX to Network Tables, but when I run the code (and it does compile), my Network Tables do not contain anything from GRIP; they just have a bunch of information about my USB camera. Here’s the GRIP generated code that I’m using:
package org.usfirst.frc5490.TestCode2017;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.HashMap;
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.vision.VisionPipeline;
import org.opencv.core.*;
import org.opencv.core.Core.*;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;
import org.opencv.objdetect.*;
/**
* GripPipeline class.
*
* <p>An OpenCV pipeline generated by GRIP.
*
* @author GRIP
*/
public class GripPipeline implements VisionPipeline {
private double centerX = 0;
//Outputs
private Mat blurOutput = new Mat();
private Mat hslThresholdOutput = new Mat();
private ArrayList<MatOfPoint> findContoursOutput = new ArrayList<MatOfPoint>();
private ArrayList<MatOfPoint> filterContoursOutput = new ArrayList<MatOfPoint>();
static {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
/**
* This is the primary method that runs the entire pipeline and updates the outputs.
*/
@Override public void process(Mat source0) {
// Step Blur0:
Mat blurInput = source0;
BlurType blurType = BlurType.get("Box Blur");
double blurRadius = 13.513513513513505;
blur(blurInput, blurType, blurRadius, blurOutput);
// Step HSL_Threshold0:
Mat hslThresholdInput = blurOutput;
double] hslThresholdHue = {59.89208633093526, 80.67911714770797};
double] hslThresholdSaturation = {52.74280575539568, 255.0};
double] hslThresholdLuminance = {103.19244604316548, 255.0};
hslThreshold(hslThresholdInput, hslThresholdHue, hslThresholdSaturation, hslThresholdLuminance, hslThresholdOutput);
// Step Find_Contours0:
Mat findContoursInput = hslThresholdOutput;
boolean findContoursExternalOnly = false;
findContours(findContoursInput, findContoursExternalOnly, findContoursOutput);
// Step Filter_Contours0:
ArrayList<MatOfPoint> filterContoursContours = findContoursOutput;
double filterContoursMinArea = 50.0;
double filterContoursMinPerimeter = 25.0;
double filterContoursMinWidth = 25.0;
double filterContoursMaxWidth = 1000.0;
double filterContoursMinHeight = 25.0;
double filterContoursMaxHeight = 1000.0;
double] filterContoursSolidity = {0, 100};
double filterContoursMaxVertices = 1000000.0;
double filterContoursMinVertices = 0.0;
double filterContoursMinRatio = 0.0;
double filterContoursMaxRatio = 1000.0;
filterContours(filterContoursContours, filterContoursMinArea, filterContoursMinPerimeter, filterContoursMinWidth, filterContoursMaxWidth, filterContoursMinHeight, filterContoursMaxHeight, filterContoursSolidity, filterContoursMaxVertices, filterContoursMinVertices, filterContoursMinRatio, filterContoursMaxRatio, filterContoursOutput);
Rect r = Imgproc.boundingRect(filterContoursOutput().get(0));
centerX = r.x + (r.width / 2);
NetworkTable.getTable("Grip").putNumber("centerX", centerX);
}
/**
* This method is a generated getter for the output of a Blur.
* @return Mat output from Blur.
*/
public Mat blurOutput() {
return blurOutput;
}
/**
* This method is a generated getter for the output of a HSL_Threshold.
* @return Mat output from HSL_Threshold.
*/
public Mat hslThresholdOutput() {
return hslThresholdOutput;
}
/**
* This method is a generated getter for the output of a Find_Contours.
* @return ArrayList<MatOfPoint> output from Find_Contours.
*/
public ArrayList<MatOfPoint> findContoursOutput() {
return findContoursOutput;
}
/**
* This method is a generated getter for the output of a Filter_Contours.
* @return ArrayList<MatOfPoint> output from Filter_Contours.
*/
public ArrayList<MatOfPoint> filterContoursOutput() {
return filterContoursOutput;
}
/**
* An indication of which type of filter to use for a blur.
* Choices are BOX, GAUSSIAN, MEDIAN, and BILATERAL
*/
enum BlurType{
BOX("Box Blur"), GAUSSIAN("Gaussian Blur"), MEDIAN("Median Filter"),
BILATERAL("Bilateral Filter");
private final String label;
BlurType(String label) {
this.label = label;
}
public static BlurType get(String type) {
if (BILATERAL.label.equals(type)) {
return BILATERAL;
}
else if (GAUSSIAN.label.equals(type)) {
return GAUSSIAN;
}
else if (MEDIAN.label.equals(type)) {
return MEDIAN;
}
else {
return BOX;
}
}
@Override
public String toString() {
return this.label;
}
}
/**
* Softens an image using one of several filters.
* @param input The image on which to perform the blur.
* @param type The blurType to perform.
* @param doubleRadius The radius for the blur.
* @param output The image in which to store the output.
*/
private void blur(Mat input, BlurType type, double doubleRadius,
Mat output) {
int radius = (int)(doubleRadius + 0.5);
int kernelSize;
switch(type){
case BOX:
kernelSize = 2 * radius + 1;
Imgproc.blur(input, output, new Size(kernelSize, kernelSize));
break;
case GAUSSIAN:
kernelSize = 6 * radius + 1;
Imgproc.GaussianBlur(input,output, new Size(kernelSize, kernelSize), radius);
break;
case MEDIAN:
kernelSize = 2 * radius + 1;
Imgproc.medianBlur(input, output, kernelSize);
break;
case BILATERAL:
Imgproc.bilateralFilter(input, output, -1, radius, radius);
break;
}
}
/**
* Segment an image based on hue, saturation, and luminance ranges.
*
* @param input The image on which to perform the HSL threshold.
* @param hue The min and max hue
* @param sat The min and max saturation
* @param lum The min and max luminance
* @param output The image in which to store the output.
*/
private void hslThreshold(Mat input, double] hue, double] sat, double] lum,
Mat out) {
Imgproc.cvtColor(input, out, Imgproc.COLOR_BGR2HLS);
Core.inRange(out, new Scalar(hue[0], lum[0], sat[0]),
new Scalar(hue[1], lum[1], sat[1]), out);
}
/**
* Sets the values of pixels in a binary image to their distance to the nearest black pixel.
* @param input The image on which to perform the Distance Transform.
* @param type The Transform.
* @param maskSize the size of the mask.
* @param output The image in which to store the output.
*/
private void findContours(Mat input, boolean externalOnly,
List<MatOfPoint> contours) {
Mat hierarchy = new Mat();
contours.clear();
int mode;
if (externalOnly) {
mode = Imgproc.RETR_EXTERNAL;
}
else {
mode = Imgproc.RETR_LIST;
}
int method = Imgproc.CHAIN_APPROX_SIMPLE;
Imgproc.findContours(input, contours, hierarchy, mode, method);
}
/**
* Filters out contours that do not meet certain criteria.
* @param inputContours is the input list of contours
* @param output is the the output list of contours
* @param minArea is the minimum area of a contour that will be kept
* @param minPerimeter is the minimum perimeter of a contour that will be kept
* @param minWidth minimum width of a contour
* @param maxWidth maximum width
* @param minHeight minimum height
* @param maxHeight maximimum height
* @param Solidity the minimum and maximum solidity of a contour
* @param minVertexCount minimum vertex Count of the contours
* @param maxVertexCount maximum vertex Count
* @param minRatio minimum ratio of width to height
* @param maxRatio maximum ratio of width to height
*/
private void filterContours(List<MatOfPoint> inputContours, double minArea,
double minPerimeter, double minWidth, double maxWidth, double minHeight, double
maxHeight, double] solidity, double maxVertexCount, double minVertexCount, double
minRatio, double maxRatio, List<MatOfPoint> output) {
final MatOfInt hull = new MatOfInt();
output.clear();
//operation
for (int i = 0; i < inputContours.size(); i++) {
final MatOfPoint contour = inputContours.get(i);
final Rect bb = Imgproc.boundingRect(contour);
if (bb.width < minWidth || bb.width > maxWidth) continue;
if (bb.height < minHeight || bb.height > maxHeight) continue;
final double area = Imgproc.contourArea(contour);
if (area < minArea) continue;
if (Imgproc.arcLength(new MatOfPoint2f(contour.toArray()), true) < minPerimeter) continue;
Imgproc.convexHull(contour, hull);
MatOfPoint mopHull = new MatOfPoint();
mopHull.create((int) hull.size().height, 1, CvType.CV_32SC2);
for (int j = 0; j < hull.size().height; j++) {
int index = (int)hull.get(j, 0)[0];
double] point = new double] { contour.get(index, 0)[0], contour.get(index, 0)[1]};
mopHull.put(j, 0, point);
}
final double solid = 100 * area / Imgproc.contourArea(mopHull);
if (solid < solidity[0] || solid > solidity[1]) continue;
if (contour.rows() < minVertexCount || contour.rows() > maxVertexCount) continue;
final double ratio = bb.width / (double)bb.height;
if (ratio < minRatio || ratio > maxRatio) continue;
output.add(contour);
}
}
}
Note that I have tried this code straight from GRIP (without changing anything first) and I still get the same result, which is no result at all. Since nothing is published to Network Tables, I believe that that process() is not being called correctly. I don’t know how to convert the UsbCamera object “camera”, to the Mat source type that is accepted by process(). Here is the code in my main Robot class that I am using to attempt to call process() and output the number of contours in the frame (ideally 1, as I perform each trial while holding up a single piece of reflective tape in front of the robot) to the console. I am not getting any values in the console either, just some stuff about my camera running at 30fps. Anyway, here it is:
public void autonomous() {
UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(IMG_WIDTH, IMG_HEIGHT);
CvSink cvSink = CameraServer.getInstance().getVideo();
Mat mat = new Mat();
if (cvSink.grabFrame(mat) == 0)
{
System.out.println(cvSink.getError());
}
else
{
GripPipeline grip = new GripPipeline();
while (true)
{
grip.process(mat);
System.out.println(grip.filterContoursOutput().size());
}
}
}
I have tried hundreds of variations for this code that I have found on various threads/tutorials and I have had no success. I am new to these libraries and I do not know anyone who knows the first thing about how to do this. I have read all the screensteps tutorials a dozen times and used the code they provide, but I do not get any results. I have tried code that uses VisionThread pipelines, though I cannot, with all my years of coding experience, make any sense of what they actually do, and I have been met with the same lack of results.
Any help with this, and especially, any code that would convert the data from my USB camera to a Mat source, would be greatly appreciated.