Sending data from GRIP to robot code

Hello there, we are trying to build a vision processing system with a coprocessor (Raspberry Pi). Our pipeline is already done; it's simple, it just detects blue balls. The problem is that the pipeline doesn't send its values to the robot code over NetworkTables. Does anyone know how to send those values to the robot code using NetworkTables with GRIP?


What language did you export your pipeline in? If it’s Python, follow Using NetworkTables — RobotPy 2021 documentation. If it’s C++ or Java, follow What is NetworkTables — FIRST Robotics Competition documentation.
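For the Java case, the coprocessor side might look like the sketch below. This is a minimal example, not the GRIP-generated code itself: the team number (1234), the table name ("Vision"), and the entry names ("centerX"/"centerY") are placeholders you'd replace with your own, and the call that runs the GRIP-exported pipeline is elided.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionPublisher {
    private final NetworkTable table;

    public VisionPublisher(NetworkTableInstance inst) {
        // "Vision" is an arbitrary table name; the robot code just has to
        // read from the same table and entry keys that you publish to.
        table = inst.getTable("Vision");
    }

    /** Publish one detection; call this once per processed frame. */
    public void publish(double centerX, double centerY) {
        table.getEntry("centerX").setDouble(centerX);
        table.getEntry("centerY").setDouble(centerY);
    }

    public static void main(String[] args) {
        NetworkTableInstance inst = NetworkTableInstance.getDefault();
        // On the Pi, connect as a CLIENT to the robot's NT server.
        // Replace 1234 with your team number.
        inst.startClientTeam(1234);

        VisionPublisher publisher = new VisionPublisher(inst);
        // In your real loop you'd run the GRIP-exported pipeline on each
        // camera frame, then publish the measured blue-ball center:
        publisher.publish(160.0, 120.0);
    }
}
```

On the robot side, reading the value back is just `inst.getTable("Vision").getEntry("centerX").getDouble(0.0)`. The common mistake is forgetting `startClientTeam` on the coprocessor, which leaves the Pi with its own local table that never reaches the robot.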

With that said, GRIP is no longer maintained, and the code it generates isn't guaranteed to work with OpenCV 4.5.2. I suggest using PhotonVision on your Pi instead. Their colored shape support is probably what you want.

I'm using Java.
Does PhotonVision work like GRIP?
Can PhotonVision send the vision information to the robot code?
If I use GRIP, will my vision system not work?

No, it doesn’t offer drag-and-drop pipeline creation. Limelight’s software does, but you can’t run that software on your hardware. In my opinion though, you shouldn’t be making vision pipelines yourself anyway.

Yes. See PhotonLib: Robot Code Interface - PhotonVision Docs.
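To give a sense of what that looks like, here is a hedged sketch of reading PhotonVision results in Java robot code via PhotonLib. The camera name `"photonvision"` is an assumption; it must match whatever name you configured in the PhotonVision UI.

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class VisionReader {
    // Must match the camera name configured in the PhotonVision UI.
    private final PhotonCamera camera = new PhotonCamera("photonvision");

    /** Call this periodically, e.g. from a subsystem's periodic(). */
    public void readTargets() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (result.hasTargets()) {
            PhotonTrackedTarget target = result.getBestTarget();
            double yaw = target.getYaw();     // degrees left/right of center
            double pitch = target.getPitch(); // degrees above/below center
            // ... feed yaw/pitch into your aiming or driving code
        }
    }
}
```

PhotonLib handles the NetworkTables plumbing for you, which is the main reason it's less error-prone than wiring up GRIP output by hand.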

That’s not what I said.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.