Hello,
Our programming team intends to use the GRIP software as the vision engine on our robot for the 2016 competition, with the goal of detecting the retroreflective tape around the goals to help calibrate the targeting of our shooter. We followed the tutorial on the WPI ScreenSteps Live website covering GRIP setup and the creation of a vision pipeline tailored to the 2016 game without any issues.
Since there seems to be a general consensus that the roboRIO's processing capability is insufficient to handle any OpenCV applications, we would like to know whether there is a viable way to use a Raspberry Pi (version 1 Model B) as a vision coprocessor. Our initial assumption was that the Raspberry Pi would communicate with the roboRIO through the roboRIO's RS-232 port; is there a way to configure the two devices so that this is possible? Also, is it possible to deploy a GRIP pipeline to the Raspberry Pi, and if so, how would the procedure differ from the FRC deploy option in the Tools tab of the GRIP application? Finally, could we still write to the roboRIO's NetworkTables from a Java program running on the Raspberry Pi?
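For reference, here is a minimal sketch of the kind of Pi-side Java program we have in mind, assuming the standalone NetworkTables Java client library can run on the Pi's JVM; the table name, key names, and team-number placeholder are just examples of ours, and the real values would come from the vision pipeline:

    import edu.wpi.first.wpilibj.networktables.NetworkTable;

    public class PiVisionClient {
        public static void main(String[] args) throws InterruptedException {
            // Run NetworkTables as a client and point it at the roboRIO.
            // "XXXX" is a placeholder for our team number.
            NetworkTable.setClientMode();
            NetworkTable.setIPAddress("roborio-XXXX-frc.local");
            NetworkTable table = NetworkTable.getTable("vision");

            while (true) {
                // Dummy values for illustration; the real numbers would come
                // from the OpenCV/GRIP processing running on the Pi.
                table.putNumber("targetCenterX", 160.0);
                table.putNumber("targetCenterY", 120.0);
                Thread.sleep(50); // roughly 20 updates per second
            }
        }
    }

On the roboRIO side, our robot code would presumably read the same keys back out of the "vision" table. Does that approach sound workable?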
Thanks in advance.