#1
15-01-2017, 21:43
NullException33
Registered User
FRC #3944
 
Join Date: Dec 2015
Location: Arizona
Posts: 5
Beginning Vision Processing

Hello, this year my team decided we want to use a camera for image processing. I am experienced in Java but have no experience with vision, and I was wondering where a good place to start is. I would like to use only Java to do all my processing. So far I was able to open and deploy the vision sample projects and get them working with the Axis camera we have. Any tips on where to start with processing the reflective tape?
#2
15-01-2017, 22:15
Justin Buist
Registered User
FRC #4003 (TriSonics)
Team Role: Mentor
 
Join Date: Feb 2015
Rookie Year: 2015
Location: Allendale, MI
Posts: 27
Re: Beginning Vision Processing

GRIP is kind of handy for getting started with vision processing. Slap a USB camera into a laptop running GRIP and you can start working through some basic transformations and watch what they do visually. Grab an image, apply an HSV threshold, find contours, and then check out what they look like.

You'll want to get a light ring around your camera to hit the retroreflective tape and make it all appear like one easily identifiable color. Green is popular because red and blue lights will be on the field and your robot can get confused by them. This year 4003 is experimenting with IR light to avoid any confusion with visible light sources. Kinda neat.

Edited to add: Once you get a handle on what OpenCV can do through GRIP then you can either have it generate code for you (never tried that, it's new this year) or just figure out how to code it up yourself. Once you start getting the lingo of OpenCV it's pretty easy.
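For reference, here is a bare-bones sketch of that threshold-and-contours flow in plain OpenCV Java. The class name and HSV bounds are made up for illustration; you would tune the numbers against your own camera and light ring.

Code:
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class SimpleTapeFinder {
	/**
	 * Thresholds a BGR frame in HSV space and returns the contours of the
	 * blobs that survive. The placeholder bounds roughly target the green
	 * glow of retroreflective tape lit by a green LED ring.
	 */
	public static List<MatOfPoint> findTape(Mat frame) {
		// Convert to HSV so the threshold is mostly about color, not brightness.
		Mat hsv = new Mat();
		Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

		// Keep only pixels inside the (hue, saturation, value) range.
		Mat mask = new Mat();
		Core.inRange(hsv, new Scalar(60, 100, 100), new Scalar(90, 255, 255), mask);

		// Trace the outlines of whatever is left in the binary mask.
		List<MatOfPoint> contours = new ArrayList<>();
		Mat hierarchy = new Mat();
		Imgproc.findContours(mask, contours, hierarchy,
				Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
		return contours;
	}
}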

Last edited by Justin Buist : 15-01-2017 at 22:24.
#3
15-01-2017, 22:33
SamCarlberg
GRIP, WPILib. 2084 alum
FRC #2084
Team Role: Mentor
 
Join Date: Nov 2015
Rookie Year: 2009
Location: MA
Posts: 136
Re: Beginning Vision Processing

Take a look at the screensteps articles on vision processing: http://wpilib.screenstepslive.com/s/4485/m/24194

Pay attention to this one as well
__________________
WPILib
GRIP, RobotBuilder
#4
16-01-2017, 01:29
NullException33
Registered User
FRC #3944
 
Join Date: Dec 2015
Location: Arizona
Posts: 5
Re: Beginning Vision Processing

Thanks for replying! We have already read that document. In case it helps: we are not new to Java, we just don't really understand the FRC sample code or how to take it to the next level in image processing. If you have any tips for building on the sample code, that would be great!
#5
16-01-2017, 08:00
YairZiv
Registered User
FRC #5951 (Makers Assemble)
Team Role: Programmer
 
Join Date: Oct 2016
Rookie Year: 2016
Location: Tel Aviv, Israel
Posts: 34
Re: Beginning Vision Processing

Quote:
Originally Posted by NullException33 View Post
Hello, this year my team decided we want to use a camera for image processing. I am experienced in Java but have no experience with vision, and I was wondering where a good place to start is. I would like to use only Java to do all my processing. So far I was able to open and deploy the vision sample projects and get them working with the Axis camera we have. Any tips on where to start with processing the reflective tape?
What I did was create a simple vision pipeline in GRIP, use the generate-code feature, and then read and study the generated code to see how it works and what each function does. Using that, I started programming an image-processing function myself. That worked great for me; I hope it helps you too.
#6
16-01-2017, 11:42
NullException33
Registered User
FRC #3944
 
Join Date: Dec 2015
Location: Arizona
Posts: 5
Re: Beginning Vision Processing

So where would you suggest starting with this sample code? More specifically, which object in this sample could I use to retrieve a live image from and begin processing? I am having trouble determining how to retrieve the live image.

Also, when you generated code from GRIP, did it look anything like this sample?

Thank you for your suggestions!


Code:
import edu.wpi.cscore.AxisCamera;
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

/**
 * This is a demo program showing the use of OpenCV to do vision processing. The
 * image is acquired from the Axis camera, then a rectangle is put on the image and
 * sent to the dashboard. OpenCV has many methods for different types of
 * processing.
 */
public class Robot extends IterativeRobot {
	Thread visionThread;

	@Override
	public void robotInit() {
		visionThread = new Thread(() -> {
			// Get the Axis camera from CameraServer
			AxisCamera camera = CameraServer.getInstance().addAxisCamera("axis-camera.local");
			// Set the resolution
			camera.setResolution(640, 480);

			// Get a CvSink. This will capture Mats from the camera
			CvSink cvSink = CameraServer.getInstance().getVideo();
			// Setup a CvSource. This will send images back to the Dashboard
			CvSource outputStream = CameraServer.getInstance().putVideo("Rectangle", 640, 480);

			// Mats are very memory expensive. Lets reuse this Mat.
			Mat mat = new Mat();

			// This cannot be 'true'. The program will never exit if it is. This
			// lets the robot stop this thread when restarting robot code or
			// deploying.
			while (!Thread.interrupted()) {
				// Tell the CvSink to grab a frame from the camera and put it
				// in the source mat. If there is an error notify the output.
				if (cvSink.grabFrame(mat) == 0) {
					// Send the output the error.
					outputStream.notifyError(cvSink.getError());
					// skip the rest of the current iteration
					continue;
				}
				// Put a rectangle on the image
				Imgproc.rectangle(mat, new Point(100, 100), new Point(400, 400),
						new Scalar(255, 255, 255), 5);
				// Give the output stream a new image to display
				outputStream.putFrame(mat);
			}
		});
		visionThread.setDaemon(true);
		visionThread.start();
	}
}
#7
16-01-2017, 12:41
SamCarlberg
GRIP, WPILib. 2084 alum
FRC #2084
Team Role: Mentor
 
Join Date: Nov 2015
Rookie Year: 2009
Location: MA
Posts: 136
Re: Beginning Vision Processing

That sample is just for streaming an image back to the driver station, not for doing any real processing. Take a look at the documentation for the edu.wpi.first.wpilibj.vision package and this screensteps page for a full example
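If it helps, that example boils down to roughly this pattern: wrap a GRIP-generated pipeline in a VisionThread and read the results from the listener. This is only a sketch; it assumes you have dropped the generated GripPipeline class into your project and that your Axis camera is at the same hostname as in your sample.

Code:
import edu.wpi.cscore.AxisCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.vision.VisionThread;

import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class Robot extends IterativeRobot {
	private VisionThread visionThread;
	private final Object imgLock = new Object();
	private double centerX = 0.0;

	@Override
	public void robotInit() {
		AxisCamera camera = CameraServer.getInstance().addAxisCamera("axis-camera.local");
		camera.setResolution(640, 480);

		// The listener runs every time the pipeline finishes processing a frame.
		visionThread = new VisionThread(camera, new GripPipeline(), pipeline -> {
			if (!pipeline.filterContoursOutput().isEmpty()) {
				// Use the first surviving contour's bounding box as the target.
				Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
				synchronized (imgLock) {
					centerX = r.x + (r.width / 2);
				}
			}
		});
		visionThread.start();
	}

	@Override
	public void autonomousPeriodic() {
		double target;
		synchronized (imgLock) {
			target = centerX;
		}
		// Compare target to half the image width (320 for a 640-wide image)
		// to decide which way to turn toward the tape.
	}
}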
__________________
WPILib
GRIP, RobotBuilder
#8
16-01-2017, 13:44
YairZiv
Registered User
FRC #5951 (Makers Assemble)
Team Role: Programmer
 
Join Date: Oct 2016
Rookie Year: 2016
Location: Tel Aviv, Israel
Posts: 34
Re: Beginning Vision Processing

Quote:
Originally Posted by NullException33 View Post
So where would you suggest starting with this sample code? More specifically, which object in this sample could I use to retrieve a live image from and begin processing? I am having trouble determining how to retrieve the live image.

Also, when you generated code from GRIP, did it look anything like this sample?

Thank you for your suggestions!


It looks like this:
Code:
package org.frc.team;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.HashMap;

import edu.wpi.first.wpilibj.vision.VisionPipeline;

import org.opencv.core.*;
import org.opencv.core.Core.*;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;
import org.opencv.objdetect.*;

/**
* GripPipeline class.
*
* <p>An OpenCV pipeline generated by GRIP.
*
* @author GRIP
*/
public class GripPipeline implements VisionPipeline {

	//Outputs
	private Mat hslThresholdOutput = new Mat();
	private ArrayList<MatOfPoint> findContoursOutput = new ArrayList<MatOfPoint>();
	private ArrayList<MatOfPoint> filterContoursOutput = new ArrayList<MatOfPoint>();

	static {
		System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
	}

	/**
	 * This is the primary method that runs the entire pipeline and updates the outputs.
	 */
	@Override
	public void process(Mat source0) {
		// Step HSL_Threshold0:
		Mat hslThresholdInput = source0;
		double[] hslThresholdHue = {77.6978417266187, 92.45733788395904};
		double[] hslThresholdSaturation = {171.98741007194243, 255.0};
		double[] hslThresholdLuminance = {43.57014388489208, 255.0};
		hslThreshold(hslThresholdInput, hslThresholdHue, hslThresholdSaturation, hslThresholdLuminance, hslThresholdOutput);

		// Step Find_Contours0:
		Mat findContoursInput = hslThresholdOutput;
		boolean findContoursExternalOnly = false;
		findContours(findContoursInput, findContoursExternalOnly, findContoursOutput);

		// Step Filter_Contours0:
		ArrayList<MatOfPoint> filterContoursContours = findContoursOutput;
		double filterContoursMinArea = 125.0;
		double filterContoursMinPerimeter = 0.0;
		double filterContoursMinWidth = 0.0;
		double filterContoursMaxWidth = 1000.0;
		double filterContoursMinHeight = 0.0;
		double filterContoursMaxHeight = 1000.0;
		double[] filterContoursSolidity = {0, 100};
		double filterContoursMaxVertices = 1000000.0;
		double filterContoursMinVertices = 0.0;
		double filterContoursMinRatio = 0.0;
		double filterContoursMaxRatio = 1000.0;
		filterContours(filterContoursContours, filterContoursMinArea, filterContoursMinPerimeter, filterContoursMinWidth, filterContoursMaxWidth, filterContoursMinHeight, filterContoursMaxHeight, filterContoursSolidity, filterContoursMaxVertices, filterContoursMinVertices, filterContoursMinRatio, filterContoursMaxRatio, filterContoursOutput);

	}

	/**
	 * This method is a generated getter for the output of a HSL_Threshold.
	 * @return Mat output from HSL_Threshold.
	 */
	public Mat hslThresholdOutput() {
		return hslThresholdOutput;
	}

	/**
	 * This method is a generated getter for the output of a Find_Contours.
	 * @return ArrayList<MatOfPoint> output from Find_Contours.
	 */
	public ArrayList<MatOfPoint> findContoursOutput() {
		return findContoursOutput;
	}

	/**
	 * This method is a generated getter for the output of a Filter_Contours.
	 * @return ArrayList<MatOfPoint> output from Filter_Contours.
	 */
	public ArrayList<MatOfPoint> filterContoursOutput() {
		return filterContoursOutput;
	}


	/**
	 * Segment an image based on hue, saturation, and luminance ranges.
	 *
	 * @param input The image on which to perform the HSL threshold.
	 * @param hue The min and max hue
	 * @param sat The min and max saturation
	 * @param lum The min and max luminance
	 * @param output The image in which to store the output.
	 */
	private void hslThreshold(Mat input, double[] hue, double[] sat, double[] lum,
		Mat out) {
		Imgproc.cvtColor(input, out, Imgproc.COLOR_BGR2HLS);
		Core.inRange(out, new Scalar(hue[0], lum[0], sat[0]),
			new Scalar(hue[1], lum[1], sat[1]), out);
	}

	/**
	 * Finds contours in a binary image.
	 * @param input The binary image in which to find contours.
	 * @param externalOnly Whether to return only the outermost contours.
	 * @param contours The list in which to store the found contours.
	 */
	private void findContours(Mat input, boolean externalOnly,
		List<MatOfPoint> contours) {
		Mat hierarchy = new Mat();
		contours.clear();
		int mode;
		if (externalOnly) {
			mode = Imgproc.RETR_EXTERNAL;
		}
		else {
			mode = Imgproc.RETR_LIST;
		}
		int method = Imgproc.CHAIN_APPROX_SIMPLE;
		Imgproc.findContours(input, contours, hierarchy, mode, method);
	}


	/**
	 * Filters out contours that do not meet certain criteria.
	 * @param inputContours is the input list of contours
	 * @param output is the output list of contours
	 * @param minArea is the minimum area of a contour that will be kept
	 * @param minPerimeter is the minimum perimeter of a contour that will be kept
	 * @param minWidth minimum width of a contour
	 * @param maxWidth maximum width
	 * @param minHeight minimum height
	 * @param maxHeight maximum height
	 * @param Solidity the minimum and maximum solidity of a contour
	 * @param minVertexCount minimum vertex Count of the contours
	 * @param maxVertexCount maximum vertex Count
	 * @param minRatio minimum ratio of width to height
	 * @param maxRatio maximum ratio of width to height
	 */
	private void filterContours(List<MatOfPoint> inputContours, double minArea,
		double minPerimeter, double minWidth, double maxWidth, double minHeight, double
		maxHeight, double[] solidity, double maxVertexCount, double minVertexCount, double
		minRatio, double maxRatio, List<MatOfPoint> output) {
		final MatOfInt hull = new MatOfInt();
		output.clear();
		//operation
		for (int i = 0; i < inputContours.size(); i++) {
			final MatOfPoint contour = inputContours.get(i);
			final Rect bb = Imgproc.boundingRect(contour);
			if (bb.width < minWidth || bb.width > maxWidth) continue;
			if (bb.height < minHeight || bb.height > maxHeight) continue;
			final double area = Imgproc.contourArea(contour);
			if (area < minArea) continue;
			if (Imgproc.arcLength(new MatOfPoint2f(contour.toArray()), true) < minPerimeter) continue;
			Imgproc.convexHull(contour, hull);
			MatOfPoint mopHull = new MatOfPoint();
			mopHull.create((int) hull.size().height, 1, CvType.CV_32SC2);
			for (int j = 0; j < hull.size().height; j++) {
				int index = (int)hull.get(j, 0)[0];
				double[] point = new double[] { contour.get(index, 0)[0], contour.get(index, 0)[1]};
				mopHull.put(j, 0, point);
			}
			final double solid = 100 * area / Imgproc.contourArea(mopHull);
			if (solid < solidity[0] || solid > solidity[1]) continue;
			if (contour.rows() < minVertexCount || contour.rows() > maxVertexCount)	continue;
			final double ratio = bb.width / (double)bb.height;
			if (ratio < minRatio || ratio > maxRatio) continue;
			output.add(contour);
		}
	}
}
And the commenting on it is pretty nice and easy to understand.
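To tie this back to the sample in post #6: once this generated class is in your project, you can feed it the same Mat that cvSink.grabFrame() fills in, then look at whatever contours survive the filters. A small illustrative helper (the class name, method name, and "center of the first contour" choice are just examples):

Code:
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class TargetFinder {
	private final GripPipeline pipeline = new GripPipeline();

	/**
	 * Runs one camera frame through the generated pipeline and returns the
	 * horizontal center (in pixels) of the first contour that passed the
	 * filters, or -1.0 if nothing was found. Call this from the while loop
	 * in the earlier sample, right after grabFrame() succeeds.
	 */
	public double centerXOf(Mat frame) {
		pipeline.process(frame);
		List<MatOfPoint> contours = pipeline.filterContoursOutput();
		if (contours.isEmpty()) {
			return -1.0;
		}
		Rect box = Imgproc.boundingRect(contours.get(0));
		return box.x + box.width / 2.0;
	}
}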
#9
16-01-2017, 15:37
NullException33
Registered User
FRC #3944
 
Join Date: Dec 2015
Location: Arizona
Posts: 5
Re: Beginning Vision Processing

Thank you so much, the comments help a lot!