2019_Vision: How to modify and display camera image CPP


We are using a Raspberry Pi for vision co-processing. We started with the existing C++ example program. We are able to compile, link, and download, and are confident those steps work correctly.

As part of image processing, we would like to modify the image and have the modified image displayed on the Driver Station. The problem is that although we modify the image, we do not “see” the modified image; I believe we only see the original image. The following is a simple example of us changing the image.

class MyPipeline : public frc::VisionPipeline {
  int val = 0;

  void Process(cv::Mat& mat) override {
    // Draw a green rectangle on the image
    cv::rectangle(mat, cv::Point(10, 10), cv::Point(100, 100),
                  cv::Scalar(0, 255, 0), 5);
  }
};

I expected that any modifications to the image (mat) would then be sent out. Am I missing some step?

Thanks in advance for your help!


The mat provided to Process is not sent out. If you want to send a modified image, you will need to send it out on a separate stream, using a CvSource obtained from CameraServer::PutVideo(). One example is here (it’s Java, but the same principle applies): How do I put the CvSource video in the smart dashboard?


This worked for us.

Thank you very much.


Hi PaulP:
I am having precisely the same problem. Did you manage to get this working using Java or C++? I’m using C++, and the example being referred to is in Java. While I am sure it does indeed work the same way in theory, I cannot seem to make sense of it in C++.
If you could advise, I would appreciate it.


Here is another simple Java example, but using the C++ FRC namespace you should be able to adapt it fairly simply (I annotated the code with a few comments so you can see what each step is doing along the way):

  /** Example pipeline. */
  public static class MyPipeline implements VisionPipeline {
    public int val;
    // WPILib output stream for the modified image
    CvSource outputStream = CameraServer.getInstance().putVideo("Circle", 320, 240);
    Mat outMat = new Mat(); // image matrix used in the modifications

    // The method that gets called in the looping pipeline
    public void process(Mat mat) {
      mat.copyTo(outMat); // copy the input image for the output
      // Draw a circle dead center in the image
      Imgproc.circle(outMat, new Point(160, 120), 15, new Scalar(0, 0, 255));
      outputStream.putFrame(outMat); // put it on the new camera stream
    }
  }


Thank you very much! I was able to get it working after a lot of trial and error, mostly futzing around with getting the namespaces right.

If anyone is interested in how Newton’s Java code, provided above, looks in C++:

// Example pipeline
class MyPipeline : public frc::VisionPipeline {
 public:
  int val = 0;
  cs::CvSource outputStream =
      frc::CameraServer::GetInstance()->PutVideo("rPi-R3P2", 320, 240);
  cv::Mat outMat;

  void Process(cv::Mat& mat) override {
    mat.copyTo(outMat);  // copy the input image for the output
    // Draw a circle dead center in the image
    cv::circle(outMat, cv::Point(160, 120), 50, cv::Scalar(255, 0, 255), 10);
    outputStream.PutFrame(outMat);  // put it on the new camera stream
  }
};


I had one other question:

I would now like to start working on some video-processing algorithms, but I will not have access to our team’s drive base and the roboRIO. I will have the Raspberry Pi 3, however. I can communicate with the rPi by connecting it to a local Wi-Fi access point or by connecting it directly to my computer through an Ethernet cable.

In the example that I just got working, I was able to draw a circle on top of the image that was being sent to the FRC Driver Station. However, I found that the image delivered to the FRCVision-rPi website (the one you get when you type “frcvision.local” into your browser’s URL bar) does not display the processed video.

The ideal scenario would be if I could get a processed version of the video delivered to the FRCVision stream that is displayed when you open the video stream under “Vision Settings”.

Is this possible?


It won’t appear on the Vision Settings tab (as that only lists USB cameras), but it is still accessible; just add one to the port number (e.g., 1182 instead of 1181, etc.).
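For anyone else hunting for the extra stream: cscore’s default MJPEG servers appear to take sequential ports starting at 1181, so each additional CvSource output lands one port higher. A quick sketch of the resulting URLs (the frcvision.local hostname and the /stream.mjpg path are assumptions based on the default FRCVision image; adjust for your setup):

```cpp
#include <string>

// Sketch only: assumes cscore assigns MJPEG server ports sequentially
// starting at 1181, so the Nth stream (0-indexed) sits at 1181 + N.
std::string StreamUrl(int streamIndex) {
  return "http://frcvision.local:" + std::to_string(1181 + streamIndex) +
         "/stream.mjpg";
}
// StreamUrl(0) -> the USB camera's stream
// StreamUrl(1) -> the first PutVideo/CvSource stream
```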


Thank you!


Hi Peter:
Things were going fine using the “frcvision.local:1182” port number that you indicated. The original program was built directly on the rPi, based on the cpp-multiCameraServer example. However, one of the students has written and tested a significant chunk of video-processing code in Java on his laptop, so I switched to the java-multiCameraServer example.

I altered the example code in exactly the same way as I had altered the C++ code. In its simplest form, the Java code within “MyPipeline” looks like this:

	public static class MyPipeline implements VisionPipeline {
		CvSource rPiStream = CameraServer.getInstance().putVideo("rPi-R3P2", 320, 240);
		Mat outMat = new Mat();

		public void process(Mat mat) {
			// ... processing elided ...
		}
	}

When I ran the analogous code written in C++, my camera stream did appear at “frcvision.local:1182”. But when I run the program written in Java as above, I get an error:

frcvision.local refused to connect.

Any idea what could be wrong here?


I just tried this locally from your simplified example and it worked. Note there’s a five-second delay when starting up the program after uploading, and you’ll get that lack-of-connection message until the program is actually running. Go to the Vision Status tab, enable the console, and click Terminate to see your program’s console output as it starts up and runs. Are there any crash-type messages showing up there? Does the “up (pid Y) X seconds” message in the upper right keep accumulating?


Hi Peter:
Wow. Thanks for the fast response!
I’m not sure what happened, but I noticed that the timestamp on “runCamera” in the /home/pi/ directory had not been advancing when I changed the “main.java” file and rebuilt. I deleted “runCamera” from the “java-multiCameraServer” directory and tried a rebuild. The build output said it had succeeded, but it was not writing a new “runCamera” in “java-multiCameraServer”.

Knowing that it would not take much to preserve my “main.java” file, re-copy the “java-multiCameraServer” files from the original zip, and start over, I tried that, and now it is working.

Thanks for your help anyway. It’s great to have you guys around.


Typically the runCamera script won’t get updated, as it is just a shell script that runs Java on your jar. The jar should be getting updated when you upload. Are you building on the Pi, or on the desktop and uploading via the web dashboard?


Hi Peter:
I am transferring the source files directly to the rPi using an SSH client on my laptop. This gives me a console without having to haul out a monitor and keyboard to plug into the rPi, so I am building directly on the rPi.
I noticed that the “install.sh” file in the “java-multiCameraServer” directory (which you have to run after each build) contains a line that copies both the jar file and the “runCamera” file. So I was under the impression that “runCamera” contained something that might change from build to build.


Ah, okay. Yeah, I made the on-Pi install script copy both because runCamera differs between languages (as does the default program), but it won’t change from build to build. I do a lot more testing of the off-robot build-and-upload-via-web process, but the idea is for both to work.


Got it.
Well, thanks again.
I don’t know where you are located but I’m in Toronto and it’s almost midnight here. I think I’m going to call it a night.