Raspberry Pi as Vision Coprocessor with GRIP

Hello,
Our programming team intends to use the GRIP software as a vision engine on our robot for the 2016 competition, with the goal of sensing the retroreflective tape around the goals to assist in aiming calibration for a shooter. We have followed the tutorial on the WPI ScreenSteps Live website covering setup of the GRIP application and creation of a vision pipeline tailored for the 2016 game without issues.

Since there seems to be a general consensus that the roboRIO lacks the processing capability to handle OpenCV applications, we would like to know whether there is a viable way to use a Raspberry Pi (version 1 Model B) as a vision coprocessor. Specifically:

1. We were initially assuming the Raspberry Pi would communicate with the roboRIO through the roboRIO's RS232 port. Is there a way to configure the Raspberry Pi and roboRIO so that this is possible?
2. Is it possible to deploy a GRIP pipeline to the Raspberry Pi, and if so, how would the procedure differ from the FRC deploy option in the Tools tab of the GRIP application?
3. Could we still communicate with the roboRIO's NetworkTables by writing a Java program that runs on the Raspberry Pi?
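For what it's worth, if you do go the RS232 route, the roboRIO side needs something parseable coming over the wire. Here is a minimal sketch of one way to frame vision results on the Pi; the message format, field names, and device path are our own assumptions for illustration, not part of GRIP or any official protocol:

```java
import java.util.Locale;

// Hypothetical line-based framing for sending vision results from the Pi
// to the roboRIO over serial. The fields and checksum scheme here are
// assumptions for illustration, not an official GRIP or WPILib protocol.
public class VisionMessage {

    // Packs a target's horizontal offset (degrees) and distance (inches)
    // into one checksummed line, e.g. "$-3.50,84.00*19".
    public static String encode(double offsetDeg, double distanceIn) {
        String body = String.format(Locale.US, "%.2f,%.2f", offsetDeg, distanceIn);
        int checksum = 0;
        for (char c : body.toCharArray()) {
            checksum = (checksum + c) & 0xFF; // simple 8-bit additive checksum
        }
        return "$" + body + "*" + Integer.toHexString(checksum) + "\n";
    }

    public static void main(String[] args) {
        // On the Pi, this string would be written to the UART device
        // (e.g. /dev/ttyAMA0); roboRIO-side code could read and parse
        // the lines with WPILib's SerialPort class.
        System.out.print(encode(-3.5, 84.0));
    }
}
```

That said, since GRIP normally publishes its results to NetworkTables, a NetworkTables client running on the Pi may be a simpler path than rolling your own serial protocol, if a client library can be made to run there.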

Thanks in advance.

There’s been some progress on getting a build of GRIP to run on a Raspberry Pi.

Can you point me to more information? I hadn’t seen that yet in the couple of active GRIP threads.

Where is the popular place to run it (coprocessor, roboRIO, driver laptop)? My first thought was a coprocessor, like a Pi, to avoid any potential resource conflict (CPU or memory). But then I saw some posts about potential problems installing it on a Pi. I was avoiding the laptop due to bandwidth concerns.

Thanks.
Brian