Vision Processing with a Raspberry Pi

Hello,
I am a programmer on Team 5549. We want to use vision processing on our robot, and the way we plan to do this is to use GRIP and have a Raspberry Pi handle the vision processing. We've already had GRIP generate code for our purposes, but we don't know where to go next. The only thing we know is that we need NetworkTables. How do we use them to get the robot to respond to what it sees?

I’m not sure what GRIP generates as far as Python code these days, but to acquire images for processing you can use cscore. See the documentation: http://robotpy.readthedocs.io/en/stable/vision/other.html
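As a rough sketch of what the capture loop on the Pi can look like (the camera, resolution, and the commented-out GRIP pipeline calls are assumptions here, not something from your generated code), based on the robotpy-cscore quickstart:

```python
def target_offset(center_x, frame_width):
    """Normalize a target's x-coordinate to [-1.0, 1.0], where 0.0 is centered."""
    return (center_x - frame_width / 2) / (frame_width / 2)

def main():
    # cscore and numpy are only needed on the Pi, so import them here.
    import numpy as np
    from cscore import CameraServer

    cs = CameraServer.getInstance()
    camera = cs.startAutomaticCapture()  # first USB camera found
    camera.setResolution(320, 240)

    sink = cs.getVideo()
    frame = np.zeros(shape=(240, 320, 3), dtype=np.uint8)

    while True:
        # grabFrame returns (timestamp, image); a timestamp of 0 means an error
        time, frame = sink.grabFrame(frame)
        if time == 0:
            print("error:", sink.getError())
            continue
        # Run your GRIP-generated pipeline here, e.g.:
        #   pipeline.process(frame)
        # then pull contour data out of its outputs and compute something
        # like target_offset(...) to send over NetworkTables.

# On the Pi you would call main() at startup.
```

The idea is to boil the image down to a few numbers (here, how far the target is from the center of the frame) before anything goes over the network.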

Once you have some information about the image that you want to transmit to your robot, that’s where NetworkTables comes in. The pynetworktables documentation is helpful there too: http://robotpy.readthedocs.io/en/stable/guide/nt.html#theory-of-operation
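To make that concrete, here is a minimal sketch of the two halves: the Pi publishes a couple of values, and the robot code reads them each loop and turns toward the target. The table name "vision", the keys, the roboRIO address, and the proportional gain are all placeholders you'd pick yourself:

```python
def turn_command(offset, kp=0.5, max_turn=0.4):
    """Simple proportional steering: turn toward the target, capped at max_turn."""
    turn = kp * offset
    return max(-max_turn, min(max_turn, turn))

def pi_side():
    # Runs on the Raspberry Pi: connect as an NT client to the roboRIO.
    from networktables import NetworkTables
    NetworkTables.initialize(server="roborio-5549-frc.local")  # assumed mDNS name
    table = NetworkTables.getTable("vision")  # made-up table name
    # After each processed frame, publish only what the robot needs:
    table.putBoolean("target_found", True)
    table.putNumber("target_offset", 0.25)  # e.g. normalized x offset from center

def robot_side():
    # Runs in your robot program: read the values each loop and react.
    from networktables import NetworkTables
    table = NetworkTables.getTable("vision")
    if table.getBoolean("target_found", False):
        offset = table.getNumber("target_offset", 0.0)
        turn = turn_command(offset)
        # drive.arcadeDrive(0, turn)  # hypothetical drivetrain object
```

Note the defaults passed to getNumber/getBoolean: if the Pi hasn't connected yet, the robot just sees "no target" instead of crashing.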

Additionally, both robotpy-cscore and pynetworktables have example programs available on GitHub that you can browse through.