The code for anyone who asks:
Robot.java:
Code:
package org.usfirst.frc.team5951.robot;
import org.opencv.core.Mat;
import org.usfirst.frc.team5951.vision.VisionImplementation;
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj.vision.VisionThread;
/**
* This is a demo program that streams camera video and runs a vision pipeline
* on it. The SampleRobot class is the base of a robot application that will
* automatically call your Autonomous and OperatorControl methods at the right
* time as controlled by the switches on the driver station or the field
* controls.
*
* The VM is configured to automatically run this class, and to call the
* functions corresponding to each mode, as described in the SampleRobot
* documentation. If you change the name of this class or the package after
* creating this project, you must also update the manifest file in the resource
* directory.
*
* WARNING: While it may look like a good choice to use for your code if you're
* inexperienced, don't. Unless you know what you are doing, complex code will
* be much more difficult under this system. Use IterativeRobot or Command-Based
* instead if you're new.
*/
public class Robot extends SampleRobot {
    VisionThread visionThread;
    UsbCamera source;
    CvSink sink;
    CvSource output;
    Mat mat;

    /*
     * UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
     * camera.setResolution(640, 480);
     *
     * CvSink cvSink = CameraServer.getInstance().getVideo();
     * CvSource outputStream = CameraServer.getInstance().putVideo("Blur", 640, 480);
     *
     * Mat source = new Mat();
     * Mat output = new Mat();
     *
     * while (!Thread.interrupted()) {
     *     cvSink.grabFrame(source);
     *     Imgproc.cvtColor(source, output, Imgproc.COLOR_BGR2GRAY);
     *     outputStream.putFrame(output);
     * }
     */
    public Robot() {
        source = CameraServer.getInstance().startAutomaticCapture();
        source.setExposureManual(20);
        source.setResolution(640, 480);

        sink = CameraServer.getInstance().getVideo();
        output = CameraServer.getInstance().putVideo("Processed: ", 640, 480);

        // The Mat has to be allocated before it is handed to grabFrame(),
        // otherwise grabFrame() throws a NullPointerException.
        mat = new Mat();

        // VisionThread already grabs frames from the camera and runs the
        // pipeline on them before invoking this callback, so the extra
        // grab/process here is optional; the callback publishes the result.
        visionThread = new VisionThread(source, new VisionImplementation(), pipeline -> {
            sink.grabFrame(mat);
            pipeline.process(mat);
            output.putFrame(pipeline.imageToShow());
        });
    }
    @Override
    public void robotInit() {
    }
    /**
     * Autonomous mode. Start the vision thread here once the pipeline is
     * ready (currently commented out).
     */
    @Override
    public void autonomous() {
        // visionThread.start();
    }
    /**
     * Operator control (teleop) mode; nothing runs here yet.
     */
    @Override
    public void operatorControl() {
    }

    /**
     * Runs during test mode.
     */
    @Override
    public void test() {
    }
}
VisionImplementation is a class we've created; for this example all it does is normalize the picture, median-blur it, and draw a rectangle over it.
imageToShow() just returns a Mat of the final "product".
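For anyone curious, here's a minimal sketch of what a pipeline like that could look like, assuming it implements WPILib's VisionPipeline interface (which is what VisionThread expects). The kernel size and rectangle coordinates below are just placeholders, not the exact values we use.
Code:
package org.usfirst.frc.team5951.vision;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import edu.wpi.first.wpilibj.vision.VisionPipeline;

/**
 * Minimal example pipeline: normalize, median blur, draw a rectangle.
 * Constants here are illustrative only.
 */
public class VisionImplementation implements VisionPipeline {

    // Holds the last processed frame so the robot code can publish it.
    private final Mat result = new Mat();

    @Override
    public void process(Mat source) {
        // Normalize pixel values to the full 0-255 range.
        Core.normalize(source, result, 0, 255, Core.NORM_MINMAX);

        // Median blur to knock down sensor noise (kernel size must be odd).
        Imgproc.medianBlur(result, result, 5);

        // Draw a rectangle overlay on the processed image.
        Imgproc.rectangle(result, new Point(100, 100), new Point(300, 250),
                new Scalar(0, 255, 0), 2);
    }

    // Returns the final processed frame, as described above.
    public Mat imageToShow() {
        return result;
    }
}
Writing into a separate result Mat (instead of overwriting the source) keeps the frame that VisionThread owns untouched and gives the robot class one stable Mat to push to the "Processed" stream.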