Raspberry Pi Vision Questions

I have a lot of questions but I will start with building the java sample.
When I run gradlew build I get the following error pictured below.
I have tried this on two different computers. Maybe I am supposed to add something to Main.java?
I haven't made any changes at all, which is probably wrong.


You have Java 8 installed. Java 11 is required to build programs this year.


I guess I thought that since VS Code installed everything it needed for Java this year, I was all set. Do I still need to install Java 11 on the system for this to work?

That did it. I set up OpenJDK 11 and the build worked fine.


So now, when I try ./runInteractive, I get this. It just hangs at the last error.

Yes, by default, a Java 11 system install is still required to build the FRCVision examples, as we don’t have tight integration between these examples and the FRC installed vscode and JDK. However, you should be able to use the FRC installed JDK by running frcvars2019 to set the PATH and JAVA_HOME to the FRC installed JDK.

I did get past the Java 11 issue; I have OpenJDK 11 working. The next issue is on the Pi itself. It seems to be having an issue connecting to the roboRIO, even though everything else is working fine on the robot, including getting the sample camera stream in SmartDashboard from the Pi.

It’s not hung; it’s connected to both NT (“NT: client: CONNECTED to server port 1735”) and the camera, so by default there’s no additional text output unless you add your own print statements.

What are the errors? And why does it never go back to a prompt?

It’s still running (runInteractive doesn’t place the process in the background; you can do a Ctrl-C to terminate the process). The errors are because the NT client tries to connect on a bunch of different addresses, and an “error” is reported for each address it couldn’t connect on.
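
To make that concrete, here's a rough sketch of the kind of address list a client walks for a team number. The exact set of addresses is an assumption for illustration, not taken from the NetworkTables source:

```python
def candidate_addresses(team):
    """Illustrative list of addresses an NT client might probe for a team
    number (assumed set, for illustration only). A connection "error" is
    reported for each one that fails, which is the output you're seeing."""
    upper, lower = divmod(team, 100)
    return [
        f"roborio-{team}-frc.local",  # mDNS hostname
        f"10.{upper}.{lower}.2",      # static team-number IP
        "172.22.11.2",                # roboRIO USB interface
    ]
```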

I guess I was under the impression that this runs once, sets some things up on the Pi that normally aren’t set up, and then exits. Is this something that needs to be run and stay running?

Yes, image processing / camera server needs to stay running (similar to a robot program). The runInteractive command is there so you can run it interactively for debugging. However, if you reboot the Pi, it runs the same command runInteractive does in the background for you automatically. The console output of the program will be visible in the web dashboard “vision status” tab (there’s a switch for enabling console output that you have to turn on to see the output).
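
To illustrate the "stays running" point, here is a minimal sketch of the overall shape of such a program; `get_frame`, `process`, and `running` are hypothetical stand-ins for the camera grab, your pipeline, and a shutdown check (a real program loops until you Ctrl-C it):

```python
def vision_main(get_frame, process, running=lambda: True):
    """Skeleton of a long-running vision program: grab a frame, process it,
    repeat until told to stop. This never returns on its own, which is why
    runInteractive sits at the "last error" instead of returning a prompt."""
    frames = 0
    while running():
        process(get_frame())
        frames += 1
    return frames
```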

Ok, so when you upload the couple of files you run this, but you could actually just reboot the Pi?
Maybe a bit more information on the process in the Readme would be helpful. I just assumed that since the last line showed an error and it never went back to a prompt, it hadn’t worked.

Agreed the Readme needs updating / more detail. I have an issue open on GitHub to update the example readme files and plan on doing that later this week. It’s not yet mentioned in the readme, but if you use the web dashboard to upload your program, it will actually take care of restarting it in the background for you. Documentation for that is here: http://wpilib.screenstepslive.com/s/currentCS/m/85074/l/1027798-the-raspberry-pi-frc-console#application

Thank you for your help so far. I’m sure I or others on our team will have more questions. We are a second-year team that didn’t even look into vision last year, as it wasn’t really needed (at least for us).

Hey Peter,

I suppose I’m having trouble understanding the documentation, but if I wanted to run a completely separate python script as my vision code, how would I go about setting it up?

Another way of wording my question: if I select ‘Custom’ as my vision program, what happens? What script is executed? On the fresh image it seems to point back to the multi-camera script (multiCameraServer.py); is there any way I can change that?

As a side question (granted, I have not dug through the source yet), could you point out the location of the Pi-side webserver code? I’m interested in how commands on the dashboard are interpreted.


To run a custom Python script, on the Application tab of the webdash, select “Uploaded Python file”, then click Browse… and select your .py file. Note the first line of your .py file needs to be something like “#!/usr/bin/env python3” so it can run as a script. Click Save, and your Python code will start running. There are a couple of bugs with it at the moment (see #46 in the GitHub repo): you need to make sure that the Python file you are uploading uses Unix line endings (plain \n) rather than Windows (\r\n), and prints from Python are buffered. Both of these issues will be fixed in the next release.
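
Until that fix ships, one workaround is to normalize the line endings yourself before uploading. A minimal sketch (`to_unix_line_endings` is a hypothetical helper, not part of the image):

```python
from pathlib import Path

def to_unix_line_endings(path):
    """Rewrite a file in place with plain LF line endings, returning True
    if anything changed. Run this on your script before uploading it
    through the webdash."""
    data = Path(path).read_bytes()
    converted = data.replace(b"\r\n", b"\n")
    if converted != data:
        Path(path).write_bytes(converted)
        return True
    return False
```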

I would recommend you start with the Python example program, so your code interoperates with the webdash Vision Settings tab. To download it, on the Application page of the webdash, click the “Download Python Example” button. This will download a .zip file. Extract that zip file. Edit the .py file to your liking.

The “Custom” selection is intended for people who just want to completely do their own thing with the runCamera script.

The code for the webdash is here.


Thank you so much for your help!

When is the next release planned?


“Soon” :slight_smile:

Should be sometime this weekend, or possibly as early as Friday night.


I’m porting some of my C++ OpenCV code from my desktop to the Raspberry Pi image. I can’t find the OpenCV headers.

Do I need to add more to this image to compile OpenCV C++ code?

The header I’m looking for is:
#include "opencv2/core/core.hpp"