How do I put the CvSource video in the SmartDashboard?


#1

We want to test our vision processing by viewing the final Mat in the SmartDashboard, but we concluded that this Mat is not what we see when we add a CameraServer Stream Viewer (we checked by telling the process() function to draw a rectangle over the image). So how do we put this CvSource in the SmartDashboard?
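For context: a CvSource created with CameraServer.getInstance().putVideo(name, width, height) is published automatically, so a CameraServer Stream Viewer widget on the SmartDashboard can select it by name from its properties; only frames actually pushed with putFrame() will appear there. A minimal sketch of that flow (the "Processed" name and the rectangle are illustrative, not the poster's actual code):

```java
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.first.wpilibj.CameraServer;

public class DashboardStream {
	public static void run() {
		// Start the USB camera and publish its raw stream.
		CameraServer.getInstance().startAutomaticCapture().setResolution(640, 480);

		// CvSink pulls frames from the camera; the CvSource below is
		// published so it shows up in the Stream Viewer's choices as "Processed".
		CvSink sink = CameraServer.getInstance().getVideo();
		CvSource output = CameraServer.getInstance().putVideo("Processed", 640, 480);

		Mat frame = new Mat(); // must be allocated before grabFrame
		while (!Thread.interrupted()) {
			// grabFrame returns 0 on timeout; skip that iteration.
			if (sink.grabFrame(frame) == 0) {
				continue;
			}
			// Draw something obvious so the processed feed is
			// distinguishable from the raw camera feed.
			Imgproc.rectangle(frame, new Point(20, 20), new Point(120, 120),
					new Scalar(0, 255, 0), 2);
			output.putFrame(frame);
		}
	}
}
```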


#2

The code for anyone who asks:

Robot.java:


package org.usfirst.frc.team5951.robot;

import org.opencv.core.Mat;
import org.usfirst.frc.team5951.vision.VisionImplementation;

import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj.vision.VisionThread;

/**
 * This is a demo program showing the use of the RobotDrive class. The
 * SampleRobot class is the base of a robot application that will automatically
 * call your Autonomous and OperatorControl methods at the right time as
 * controlled by the switches on the driver station or the field controls.
 *
 * The VM is configured to automatically run this class, and to call the
 * functions corresponding to each mode, as described in the SampleRobot
 * documentation. If you change the name of this class or the package after
 * creating this project, you must also update the manifest file in the resource
 * directory.
 *
 * WARNING: While it may look like a good choice to use for your code if you're
 * inexperienced, don't. Unless you know what you are doing, complex code will
 * be much more difficult under this system. Use IterativeRobot or Command-Based
 * instead if you're new.
 */
public class Robot extends SampleRobot {

	VisionThread visionThread;
	UsbCamera source;
	CvSink sink;
	CvSource output;
	Mat mat;

	/*
	 * UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
	 * camera.setResolution(640, 480);
	 *
	 * CvSink cvSink = CameraServer.getInstance().getVideo();
	 * CvSource outputStream = CameraServer.getInstance().putVideo("Blur", 640, 480);
	 *
	 * Mat source = new Mat();
	 * Mat output = new Mat();
	 *
	 * while (!Thread.interrupted()) {
	 *     cvSink.grabFrame(source);
	 *     Imgproc.cvtColor(source, output, Imgproc.COLOR_BGR2GRAY);
	 *     outputStream.putFrame(output);
	 * }
	 */
	public Robot() {
		source = CameraServer.getInstance().startAutomaticCapture();
		source.setExposureManual(20);
		source.setResolution(640, 480);
		
		sink = CameraServer.getInstance().getVideo();
		
		output = CameraServer.getInstance().putVideo("Processed: ", 640, 480);
		
		visionThread = new VisionThread(source, new VisionImplementation(), pipeline -> {
			sink.grabFrame(mat);
			pipeline.process(mat);
			
			output.putFrame(pipeline.getImgae());
		});
	}

	@Override
	public void robotInit() {
		
		
	}

	/**
	 * This autonomous (along with the chooser code above) shows how to select
	 * between different autonomous modes using the dashboard. The sendable
	 * chooser code works with the Java SmartDashboard. If you prefer the
	 * LabVIEW Dashboard, remove all of the chooser code and uncomment the
	 * getString line to get the auto name from the text box below the Gyro
	 *
	 * You can add additional auto modes by adding additional comparisons to the
	 * if-else structure below with additional strings. If using the
	 * SendableChooser make sure to add them to the chooser code above as well.
	 */
	@Override
	public void autonomous() {
//		visionThread.start();
	}

	/**
	 * Runs the motors with arcade steering.
	 */
	@Override
	public void operatorControl() {
		
	}

	/**
	 * Runs during test mode
	 */
	@Override
	public void test() {
	}
}


VisionImplementation is a class we’ve created; for this example, all it did was normalize the picture, median-blur it, and add a rectangle over it.

imageToShow() just returns a Mat of the final “product”.
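A pipeline like the one described (normalize, median blur, rectangle overlay) could look roughly like this; the class name, kernel size, and rectangle coordinates are illustrative, and in the real code the class would implement WPILib's VisionPipeline interface:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// Illustrative sketch of a normalize -> median blur -> rectangle pipeline.
public class SketchPipeline {
	private final Mat output = new Mat();

	public void process(Mat source) {
		// Stretch pixel values to the full 0-255 range.
		Core.normalize(source, output, 0, 255, Core.NORM_MINMAX);
		// Median blur with a 5x5 kernel to knock down noise.
		Imgproc.medianBlur(output, output, 5);
		// Obvious marker rectangle so the processed stream is
		// visually distinguishable from the raw feed.
		Imgproc.rectangle(output, new Point(10, 10), new Point(110, 110),
				new Scalar(0, 0, 255), 3);
	}

	// Returns the final processed frame.
	public Mat imageToShow() {
		return output;
	}
}
```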


#3

Have you looked at the intermediate vision example?


#4

Yeah, the code is pretty much the same, except that they’re using a normal thread instead of a VisionThread.


#5

I think this is your issue:

output.putFrame(pipeline.getImgae());

Also, doesn’t the VisionThread get created in robotInit() instead of the constructor?
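For what it’s worth, WPILib’s VisionThread grabs frames from the camera and calls pipeline.process() internally, so the listener only needs to publish the result; there is no need to call sink.grabFrame() yourself (and the posted mat field is never allocated, so that call would fail anyway). A sketch of that pattern moved into robotInit(), with an illustrative getter name:

```java
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.vision.VisionThread;

public class Robot extends SampleRobot {
	private VisionThread visionThread;
	private CvSource output;

	@Override
	public void robotInit() {
		UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
		camera.setResolution(640, 480);

		// Published automatically; select "Processed" in the dashboard widget.
		output = CameraServer.getInstance().putVideo("Processed", 640, 480);

		// VisionThread grabs frames and runs the pipeline itself; the
		// listener just forwards the processed image to the stream.
		visionThread = new VisionThread(camera, new VisionImplementation(),
				pipeline -> output.putFrame(pipeline.imageToShow()));
		visionThread.setDaemon(true);
		visionThread.start();
	}
}
```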


#6

Oh, whoops, that was just a typo, but it’s not the problem, because the typo is the same in both places.

Is there a difference in where it’s initialized? It only runs once Teleop is running anyway.