Hello, this year my team decided we want to use a camera for image processing. I am experienced in Java but have none in anything vision related, and I was wondering where a good place to start is. I would like to use only Java to do all my processing. So far I was able to open and deploy the vision sample projects and get them working with the Axis camera we have. Any tips on where to start with processing the reflective tape?
GRIP is kind of handy for getting started with vision processing. Slap a USB camera into a laptop running GRIP and you can start working through some basic transformations and watch what they do visually. Grab an image, apply an HSV threshold, find contours, and then check out what they look like.
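If you want to see what those GRIP steps translate to in code, here is a minimal sketch of the same sequence in plain OpenCV Java. It assumes the OpenCV native library is already loaded (WPILib handles that on the roboRIO), and the green threshold values are placeholders you would tune in GRIP:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class ThresholdDemo {
	public static List<MatOfPoint> findTargets(Mat frame) {
		// Convert from BGR to HSV so the target color lives mostly in one channel
		Mat hsv = new Mat();
		Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
		// Keep only pixels inside the (placeholder) green range; output is a binary image
		Mat binary = new Mat();
		Core.inRange(hsv, new Scalar(60, 100, 100), new Scalar(90, 255, 255), binary);
		// Find the outlines of the white blobs left after thresholding
		List<MatOfPoint> contours = new ArrayList<>();
		Mat hierarchy = new Mat();
		Imgproc.findContours(binary, contours, hierarchy,
				Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
		return contours;
	}
}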
You’ll want to get a light ring around your camera to hit the retroreflective tape and make it all appear like one easily identifiable color. Green is popular because red and blue lights will be on the field and your robot can get confused by them. This year 4003 is experimenting with IR light to avoid any confusion with visible light sources. Kinda neat.
Edited to add: Once you get a handle on what OpenCV can do through GRIP then you can either have it generate code for you (never tried that, it’s new this year) or just figure out how to code it up yourself. Once you start getting the lingo of OpenCV it’s pretty easy.
Take a look at the screensteps articles on vision processing: http://wpilib.screenstepslive.com/s/4485/m/24194
Pay attention to this one as well
Thanks for replying! We have already read that document. If it helps, we are not new to Java; we just don't really understand the FRC sample code and how to take that sample to the next level in image processing. If you have any tips for advancing on the sample code, that would be great!
What I've done is create a simple vision pipeline in GRIP, use the generate code feature, and study the generated code: how it works and what each function does. Using that, I started programming an image processing function myself. That worked great for me, hope it will help you too.
So where would you suggest starting with this sample code? More specifically, which object in this sample could I use to retrieve a live image and begin processing? I am having trouble determining how to retrieve the live image.
Also, when you generated code from GRIP, did it look anything like this sample?
Thank you for your suggestions!
import edu.wpi.cscore.AxisCamera;
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
/**
 * This is a demo program showing the use of OpenCV to do vision processing. The
 * image is acquired from the Axis camera, then a rectangle is put on the image and
 * sent to the dashboard. OpenCV has many methods for different types of
 * processing.
 */
public class Robot extends IterativeRobot {
	Thread visionThread;

	@Override
	public void robotInit() {
		visionThread = new Thread(() -> {
			// Get the Axis camera from CameraServer
			AxisCamera camera = CameraServer.getInstance().addAxisCamera("axis-camera.local");
			// Set the resolution
			camera.setResolution(640, 480);

			// Get a CvSink. This will capture Mats from the camera
			CvSink cvSink = CameraServer.getInstance().getVideo();
			// Setup a CvSource. This will send images back to the Dashboard
			CvSource outputStream = CameraServer.getInstance().putVideo("Rectangle", 640, 480);

			// Mats are very memory expensive. Let's reuse this Mat.
			Mat mat = new Mat();

			// This cannot be 'true'. The program will never exit if it is.
			// This lets the robot stop this thread when restarting robot
			// code or deploying.
			while (!Thread.interrupted()) {
				// Tell the CvSink to grab a frame from the camera and put it
				// in the source mat. If there is an error, notify the output.
				if (cvSink.grabFrame(mat) == 0) {
					// Send the error to the output.
					outputStream.notifyError(cvSink.getError());
					// Skip the rest of the current iteration.
					continue;
				}
				// Put a rectangle on the image
				Imgproc.rectangle(mat, new Point(100, 100), new Point(400, 400),
						new Scalar(255, 255, 255), 5);
				// Give the output stream a new image to display
				outputStream.putFrame(mat);
			}
		});
		visionThread.setDaemon(true);
		visionThread.start();
	}
}
That sample is for streaming some image back to the driver station. Take a look at the documentation for the edu.wpi.first.wpilibj.vision package and this screensteps page for a full example
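Here is roughly what the glue code looks like (a minimal sketch based on that screensteps example, assuming a GRIP-generated class named GripPipeline and a USB camera; with your Axis camera you would use addAxisCamera instead of startAutomaticCapture):
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.vision.VisionThread;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class Robot extends IterativeRobot {
	private VisionThread visionThread;
	private double centerX = 0.0;
	private final Object imgLock = new Object();

	@Override
	public void robotInit() {
		UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
		camera.setResolution(320, 240);
		// The listener lambda runs every time the pipeline finishes a frame
		visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
			if (!pipeline.filterContoursOutput().isEmpty()) {
				// Bounding box of the first contour that survived filtering
				Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
				synchronized (imgLock) {
					// Track the target's horizontal center, e.g. for steering
					centerX = r.x + (r.width / 2);
				}
			}
		});
		visionThread.start();
	}
}
The VisionThread grabs frames for you, so you never have to touch the CvSink directly; your pipeline just gets handed each Mat.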
It looks like this:
package org.frc.team;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.HashMap;
import edu.wpi.first.wpilibj.vision.VisionPipeline;
import org.opencv.core.*;
import org.opencv.core.Core.*;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;
import org.opencv.objdetect.*;
/**
* GripPipeline class.
*
* <p>An OpenCV pipeline generated by GRIP.
*
* @author GRIP
*/
public class GripPipeline implements VisionPipeline {
//Outputs
private Mat hslThresholdOutput = new Mat();
private ArrayList<MatOfPoint> findContoursOutput = new ArrayList<MatOfPoint>();
private ArrayList<MatOfPoint> filterContoursOutput = new ArrayList<MatOfPoint>();
static {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
/**
* This is the primary method that runs the entire pipeline and updates the outputs.
*/
@Override public void process(Mat source0) {
// Step HSL_Threshold0:
Mat hslThresholdInput = source0;
double[] hslThresholdHue = {77.6978417266187, 92.45733788395904};
double[] hslThresholdSaturation = {171.98741007194243, 255.0};
double[] hslThresholdLuminance = {43.57014388489208, 255.0};
hslThreshold(hslThresholdInput, hslThresholdHue, hslThresholdSaturation, hslThresholdLuminance, hslThresholdOutput);
// Step Find_Contours0:
Mat findContoursInput = hslThresholdOutput;
boolean findContoursExternalOnly = false;
findContours(findContoursInput, findContoursExternalOnly, findContoursOutput);
// Step Filter_Contours0:
ArrayList<MatOfPoint> filterContoursContours = findContoursOutput;
double filterContoursMinArea = 125.0;
double filterContoursMinPerimeter = 0.0;
double filterContoursMinWidth = 0.0;
double filterContoursMaxWidth = 1000.0;
double filterContoursMinHeight = 0.0;
double filterContoursMaxHeight = 1000.0;
double[] filterContoursSolidity = {0, 100};
double filterContoursMaxVertices = 1000000.0;
double filterContoursMinVertices = 0.0;
double filterContoursMinRatio = 0.0;
double filterContoursMaxRatio = 1000.0;
filterContours(filterContoursContours, filterContoursMinArea, filterContoursMinPerimeter, filterContoursMinWidth, filterContoursMaxWidth, filterContoursMinHeight, filterContoursMaxHeight, filterContoursSolidity, filterContoursMaxVertices, filterContoursMinVertices, filterContoursMinRatio, filterContoursMaxRatio, filterContoursOutput);
}
/**
* This method is a generated getter for the output of a HSL_Threshold.
* @return Mat output from HSL_Threshold.
*/
public Mat hslThresholdOutput() {
return hslThresholdOutput;
}
/**
* This method is a generated getter for the output of a Find_Contours.
* @return ArrayList<MatOfPoint> output from Find_Contours.
*/
public ArrayList<MatOfPoint> findContoursOutput() {
return findContoursOutput;
}
/**
* This method is a generated getter for the output of a Filter_Contours.
* @return ArrayList<MatOfPoint> output from Filter_Contours.
*/
public ArrayList<MatOfPoint> filterContoursOutput() {
return filterContoursOutput;
}
/**
* Segment an image based on hue, saturation, and luminance ranges.
*
* @param input The image on which to perform the HSL threshold.
* @param hue The min and max hue
* @param sat The min and max saturation
* @param lum The min and max luminance
* @param output The image in which to store the output.
*/
private void hslThreshold(Mat input, double[] hue, double[] sat, double[] lum,
Mat out) {
Imgproc.cvtColor(input, out, Imgproc.COLOR_BGR2HLS);
Core.inRange(out, new Scalar(hue[0], lum[0], sat[0]),
new Scalar(hue[1], lum[1], sat[1]), out);
}
/**
 * Finds contours in a binary image.
 * @param input The binary image in which to find contours.
 * @param externalOnly Whether to find only the outermost contours.
 * @param contours The list in which to store the found contours.
 */
private void findContours(Mat input, boolean externalOnly,
List<MatOfPoint> contours) {
Mat hierarchy = new Mat();
contours.clear();
int mode;
if (externalOnly) {
mode = Imgproc.RETR_EXTERNAL;
}
else {
mode = Imgproc.RETR_LIST;
}
int method = Imgproc.CHAIN_APPROX_SIMPLE;
Imgproc.findContours(input, contours, hierarchy, mode, method);
}
/**
* Filters out contours that do not meet certain criteria.
* @param inputContours is the input list of contours
* @param output is the output list of contours
* @param minArea is the minimum area of a contour that will be kept
* @param minPerimeter is the minimum perimeter of a contour that will be kept
* @param minWidth minimum width of a contour
* @param maxWidth maximum width
* @param minHeight minimum height
* @param maxHeight maximum height
* @param solidity the minimum and maximum solidity of a contour
* @param minVertexCount minimum vertex Count of the contours
* @param maxVertexCount maximum vertex Count
* @param minRatio minimum ratio of width to height
* @param maxRatio maximum ratio of width to height
*/
private void filterContours(List<MatOfPoint> inputContours, double minArea,
double minPerimeter, double minWidth, double maxWidth, double minHeight, double
maxHeight, double[] solidity, double maxVertexCount, double minVertexCount, double
minRatio, double maxRatio, List<MatOfPoint> output) {
final MatOfInt hull = new MatOfInt();
output.clear();
//operation
for (int i = 0; i < inputContours.size(); i++) {
final MatOfPoint contour = inputContours.get(i);
final Rect bb = Imgproc.boundingRect(contour);
if (bb.width < minWidth || bb.width > maxWidth) continue;
if (bb.height < minHeight || bb.height > maxHeight) continue;
final double area = Imgproc.contourArea(contour);
if (area < minArea) continue;
if (Imgproc.arcLength(new MatOfPoint2f(contour.toArray()), true) < minPerimeter) continue;
Imgproc.convexHull(contour, hull);
MatOfPoint mopHull = new MatOfPoint();
mopHull.create((int) hull.size().height, 1, CvType.CV_32SC2);
for (int j = 0; j < hull.size().height; j++) {
int index = (int)hull.get(j, 0)[0];
double[] point = new double[] { contour.get(index, 0)[0], contour.get(index, 0)[1]};
mopHull.put(j, 0, point);
}
final double solid = 100 * area / Imgproc.contourArea(mopHull);
if (solid < solidity[0] || solid > solidity[1]) continue;
if (contour.rows() < minVertexCount || contour.rows() > maxVertexCount) continue;
final double ratio = bb.width / (double)bb.height;
if (ratio < minRatio || ratio > maxRatio) continue;
output.add(contour);
}
}
}
And the commenting on it is pretty nice and easy to understand.
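If you would rather drive it by hand instead of through a VisionThread, using the generated class is just construct-and-call. A sketch (PipelineRunner is a hypothetical wrapper; the frame would come from a CvSink like in the earlier sample):
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;

public class PipelineRunner {
	private final GripPipeline pipeline = new GripPipeline();

	/** Runs the generated pipeline on one frame and returns the surviving contours. */
	public List<MatOfPoint> run(Mat frame) {
		pipeline.process(frame); // runs every step: HSL threshold, find contours, filter contours
		return pipeline.filterContoursOutput();
	}
}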
Thank you so much, the comments help a lot!