Hi Everyone,
We are trying to get vision processing to work with a Microsoft LifeCam using GRIP, running the pipeline on the roboRIO. However, it either doesn't work at all or doesn't update often enough. Thanks

What does “it doesn’t work” mean? For technical problems, the more details, the better.

It updates very, very rarely. Because of this I can't tell whether the outputs are accurate or not. I'm very new to vision, so please tell me what other information you need.

Unfortunately, the Rio seems inadequate for vision processing. The Rio is older and takes far longer to process a frame than a RasPi does. I'd say use a RasPi with the official image. We tried vision on the Rio and it didn't work for us. On top of that, the robot can't move while a frame is processing, so driving is not possible.

Okay, that makes sense. I followed the instructions in the WPILib docs, so the Pi is now running the FRC Vision distribution. I have three questions that I can't seem to figure out. First, how do I publish values to NetworkTables from the Pi? I use Java on the robot, but I figured it would be easier to use Python on the Pi for vision processing. Second, I generated the OpenCV pipeline with GRIP (same as above, but I regenerated it in Python), but it doesn't seem compatible with vision on the Pi. How do I incorporate the pipeline into the Pi? Third, how do I stream a processed camera feed? For example, if I applied a blur operation to the camera feed, how could I then output the result? Thank you again for your time and patience.

Also, I'm the OP posting from my team's account. Sorry for the confusion.

You may find it easier to just use Java on the Pi as well. The Java multiCameraServer example program for FRCVision has a sample VisionPipeline (that you would replace with your GRIP generated pipeline), and you can use the same NetworkTables API calls you use on the robot to read and write values.
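To make that concrete, here is a minimal sketch of what the wiring might look like. It assumes your GRIP-generated class is named GripPipeline and exposes filterContoursOutput(), and the "vision"/"targetX" table and entry names are placeholders, not names from the example:

```java
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.vision.VisionThread;
import edu.wpi.first.cscore.VideoSource;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class VisionExample {
  public static void startVision(VideoSource camera) {
    // Entry to publish results to; table/entry names are placeholders.
    NetworkTableEntry targetX = NetworkTableInstance.getDefault()
        .getTable("vision").getEntry("targetX");

    // VisionThread runs the pipeline on each frame and calls the listener
    // with the completed pipeline so you can read its outputs.
    VisionThread visionThread = new VisionThread(camera, new GripPipeline(),
        pipeline -> {
          if (!pipeline.filterContoursOutput().isEmpty()) {
            // Publish the center x of the first contour's bounding box.
            Rect r = Imgproc.boundingRect(pipeline.filterContoursOutput().get(0));
            targetX.setDouble(r.x + r.width / 2.0);
          }
        });
    visionThread.start();
  }
}
```

The camera argument would be the VideoSource the example program already creates when it starts your cameras.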

Regarding creating a processed camera feed, you would create another stream using the CameraServer class and feed processed frames to it. See

Thank you so much. That makes a lot of sense. I’ll give it a go at tomorrow’s practice

Back on this account. I'm trying to use the Java example, but I don't know how to build it. How do I do so?

The README.txt file in the example explains how to build and upload. To build: open a command prompt in the folder and run .\gradlew build

I would suggest switching to OpenSight. It's a vision system for the Raspberry Pi that supports the Microsoft LifeCam and uses a visual pipeline format for configuration. You can send data to the Rio using NetworkTables.

Thank you so much. I mostly figured it out. The only thing I'm still confused about is how to display the processed image like you would a camera feed.

See the “Advanced Camera Server” program on the page I linked to. The basic concept is during startup you create a CvSource like this:

CvSource outputStream = CameraServer.getInstance().putVideo("Blur", 640, 480);

This will create a camera stream called “Blur” that you can connect to.

And then during image processing you put a Mat to it like this:

outputStream.putFrame(mat);

Thank you again so much. One last question: how do you publish values to NetworkTables? I can't find any documentation on it. Does it involve using Sendable? Sorry for all the questions.

Here are the docs on NetworkTables:
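As a hedged sketch of what those docs describe, using the ntcore Java API (the "vision" table and "targetX" entry names are just examples):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class PublishExample {
  public static void main(String[] args) {
    NetworkTableInstance inst = NetworkTableInstance.getDefault();
    NetworkTable table = inst.getTable("vision"); // example table name

    // Publish a value...
    table.getEntry("targetX").setDouble(123.0);

    // ...and read one back, with a default if it isn't present yet.
    double x = table.getEntry("targetX").getDouble(0.0);
  }
}
```

No Sendable is needed for simple values; entries handle primitives, strings, and arrays directly.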

Oh okay, thanks. Is there a way to view NetworkTables without access to a roboRIO? The Pi is directly connected to my computer via Ethernet right now. I tried viewing through Shuffleboard, but no data appeared, not even the camera.

Assuming you’re still basing it on the Java example, you can turn off the “Client” switch on the Vision Settings tab to make the Pi act like a NetworkTables server instead of a client. Then you can set up Shuffleboard with the Pi’s IP address instead of the team number.
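If you'd rather connect from desktop-side Java code instead of Shuffleboard, a minimal client sketch might look like this (the IP address below is a placeholder for your Pi's actual address):

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class PiClientExample {
  public static void main(String[] args) {
    NetworkTableInstance inst = NetworkTableInstance.getDefault();
    // Connect to the Pi (acting as the NT server) by IP instead of team number.
    inst.startClient("10.0.0.2"); // placeholder address
    double x = inst.getTable("vision").getEntry("targetX").getDouble(0.0);
  }
}
```

In Shuffleboard itself, the equivalent is entering the Pi's IP address in the server field of the preferences instead of a team number.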

Everything works! Thank you so much for all your help.