Vision on Raspberry Pi - Starting out

I have started testing vision processing on
a Raspberry Pi 3 running Raspbian, using Java.
The camera is a Microsoft LifeCam (USB).

At this time I do not have access to a RoboRio.

Started with the 2017 FRC Control System Vision processing
document on the web site.

I got the code compiled on an Ubuntu system, transferred it to
the Pi, and it ran!! At least I think so: the blue light on the
camera came on.

In the doc it states
“This sample program only serves video as an http mjpg stream to a web browser or your SmartDashboard and serves as a template.”


  1. How can I access the stream from a web browser? Does the browser
    have to be running on the Pi, or can it be one on my Ubuntu system?

Next step is to take the code generated from GRIP and run it on the Pi.
This would include creating the Network Tables.

How can I access the Network Tables from the Ubuntu system to
see the values and whether they change when I move the target?
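For watching the values from Ubuntu, a NetworkTables client along these lines should work (a sketch based on the 2017 GRIP documentation: `GRIP/myContoursReport` and the `centerX`/`area` keys are the defaults for a Publish ContoursReport step in GRIP, and `raspberrypi.local` assumes the Pi is the machine creating the tables; point `setIPAddress` at whichever host actually runs the NetworkTables server). It needs the NetworkTables jar from WPILib on the classpath:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import java.util.Arrays;

public class NtWatcher {
    public static void main(String[] args) throws InterruptedException {
        // Connect as a client to the machine hosting the tables
        // (assumed here to be the Pi).
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("raspberrypi.local");
        NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

        // Poll and print for a while; move the target and watch the
        // arrays change. Key names depend on your GRIP pipeline.
        for (int i = 0; i < 30; i++) {
            double[] centerX = table.getNumberArray("centerX", new double[0]);
            double[] area = table.getNumberArray("area", new double[0]);
            System.out.println("centerX: " + Arrays.toString(centerX)
                    + "  area: " + Arrays.toString(area));
            Thread.sleep(1000);
        }
    }
}
```

Empty arrays mean the client is connected but the pipeline has not published contours yet (or the table/key names don't match your pipeline).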

Finally is there a RoboRio simulator?

Thanks in advance to those who answer.

  1. Visit the Raspberry Pi’s IP address with the appropriate port number (assigned in the code). This can be done on the Pi itself, or any system connected to the same network (such as your Ubuntu system).


  3. None that are functional at the moment, AFAIK.

As long as your Ubuntu machine has an mDNS handler installed (such as avahi-daemon), you should be able to access the stream at raspberrypi.local:<port>/?action=stream. The default port for a camera stream is 1180.
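If you want to sanity-check the stream from code rather than a browser, a minimal sketch in Java (assuming the default port 1180 and the `?action=stream` path above; scanning for the JPEG start/end markers is a simplification of proper multipart parsing, but is good enough to confirm frames are arriving):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class MjpgClient {
    // URL format served by the sample streaming program.
    static String streamUrl(String host, int port) {
        return "http://" + host + ":" + port + "/?action=stream";
    }

    // Pull one JPEG frame out of an MJPG stream by scanning for the JPEG
    // start-of-image (FF D8) and end-of-image (FF D9) markers.
    static byte[] readFrame(InputStream in) throws IOException {
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        boolean inJpeg = false;
        int prev = in.read();
        int cur;
        while ((cur = in.read()) != -1) {
            if (!inJpeg && prev == 0xFF && cur == 0xD8) {
                inJpeg = true;
                frame.write(0xFF); // keep the FF that started the marker
            }
            if (inJpeg) {
                frame.write(cur);
                if (prev == 0xFF && cur == 0xD9) {
                    return frame.toByteArray();
                }
            }
            prev = cur;
        }
        return null; // stream ended before a complete frame
    }

    public static void main(String[] args) throws IOException {
        String url = streamUrl("raspberrypi.local", 1180);
        System.out.println("Stream URL: " + url);
        // On a machine that can reach the Pi, read one frame like this:
        // try (InputStream in = new java.net.URL(url).openStream()) {
        //     byte[] jpeg = readFrame(in);
        //     System.out.println("Frame size: " + jpeg.length + " bytes");
        // }
    }
}
```

If `readFrame` keeps returning frames, the camera and server are working regardless of what the browser shows.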

Thanks all for your help.

There are a lot of people on here who know more about vision than me, but speaking as someone who worked on vision code along a similar path to yours last season, I would highly recommend using a different camera. We switched to a Pi camera this off-season and got our code to work almost instantly. Being able to alter the camera settings on the Pi camera really helped us consistently find the target. Good luck!

We ran the LifeCam on the RPi3 last year (we actually ran two). Yes, it was a challenge to find the correct parameters to lock the brightness at an appropriate setting for vision, but we eventually did that and had reasonable success tracking.