Hi Everyone,
We are trying to get vision processing to work with a Microsoft LifeCam using GRIP and running it on the Rio. However, it seems that it either doesn't work or doesn't update often enough. Thanks
It updates very, very rarely. Because of this I can't tell whether the outputs are accurate or not. I'm very new to vision, so please tell me what other information you need.
Unfortunately, the Rio seems inadequate for vision processing. The Rio is older and takes far longer to process a frame than a RasPi does. I'd say use a RasPi with the official image. We tried vision on the Rio and it didn't work for us. The other issue you run into is that you can't move while a frame is processing, so driving is not possible.
Okay, that makes sense. I followed the instructions in the WPILib docs, so the Pi is now running the FRCVision distribution. I have three questions that I can't seem to figure out. First, how do I publish values to NetworkTables from the Pi? I use Java on the robot, but I figured it would be easier to use Python on the Pi for vision processing. Second, I generated the OpenCV pipeline with GRIP (same as above, but I regenerated it in Python), but it doesn't seem compatible with vision on the Pi. How do I incorporate the pipeline into the Pi? Third, how do I send a processed camera feed to NetworkTables? For example, if I performed a blur operation on the camera feed, how could I then output that? Thank you again for your time and patience.
You may find it easier to just use Java on the Pi as well. The Java multiCameraServer example program for FRCVision has a sample VisionPipeline (that you would replace with your GRIP generated pipeline), and you can use the same NetworkTables API calls you use on the robot to read and write values.
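To make that concrete, here's a rough sketch of how a GRIP-generated pipeline plugs into the example's VisionThread. This assumes WPILib's cscore/ntcore libraries are on the classpath and that GRIP generated a class named GripPipeline; the table and entry names are made up, so use whatever keys your robot code expects:

```java
// Sketch only: assumes the FRCVision Java multiCameraServer example,
// with a GRIP-generated class named GripPipeline. Table and entry
// names here are arbitrary examples.
import edu.wpi.cscore.VideoSource;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.vision.VisionThread;

public class VisionMain {
  public static void startVision(VideoSource camera) {
    NetworkTable table = NetworkTableInstance.getDefault().getTable("vision");
    VisionThread visionThread = new VisionThread(camera,
        new GripPipeline(),  // replace with your generated pipeline class
        pipeline -> {
          // Publish whatever your pipeline computes; GRIP pipelines
          // typically expose outputs like filterContoursOutput().
          table.getEntry("contourCount")
               .setDouble(pipeline.filterContoursOutput().size());
        });
    visionThread.start();
  }
}
```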
I would suggest switching to OpenSight (https://opensight-cv.github.io/). It's a vision system for the Raspberry Pi which supports the Microsoft LifeCam and uses a visual pipeline format for configuration. You can send data to the Rio using NetworkTables.
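Whichever system publishes the data, the Rio side reads it back with the same NetworkTables API. A minimal sketch, assuming WPILib is available; the table name "OpenSight" and the entry key are guesses, so check the docs of whatever vision system you use for the actual keys it publishes:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionReader {
  public static double readTargetX() {
    // Table/entry names are assumptions; match them to whatever
    // your vision coprocessor actually publishes.
    NetworkTable table =
        NetworkTableInstance.getDefault().getTable("OpenSight");
    // The argument to getDouble is the default returned when the
    // entry doesn't exist yet.
    return table.getEntry("target_x").getDouble(0.0);
  }
}
```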
Thank you again so much. One last question: how do you publish values to NetworkTables? I can't find any documentation on it. Does it involve using Sendable? Sorry for all the questions.
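You don't need Sendable for plain values; that's for publishing whole objects from robot code. Writing raw values is just getTable/getEntry/set. A minimal sketch, assuming WPILib's ntcore library is on the classpath (the table and key names are arbitrary):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class PublishExample {
  public static void main(String[] args) {
    NetworkTableInstance inst = NetworkTableInstance.getDefault();
    NetworkTable table = inst.getTable("vision");

    // Write values under keys of your choosing.
    table.getEntry("targetX").setDouble(12.5);
    table.getEntry("hasTarget").setBoolean(true);

    // Reading works the same way; the argument is the default
    // returned if the key doesn't exist yet.
    double x = table.getEntry("targetX").getDouble(0.0);
    System.out.println(x);
  }
}
```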
Oh okay, thanks. Is there a way to view NetworkTables without access to a roboRIO? The Pi is directly connected to my computer right now via Ethernet. I tried viewing it in Shuffleboard but no data appeared, not even the camera.
Assuming you’re still basing it on the Java example, you can turn off the “Client” switch on the Vision Settings tab to make the Pi act like a NetworkTables server instead of a client. Then you can set up Shuffleboard with the Pi’s IP address instead of the team number.