Hi, our team this year has been trying to get GRIP running so we can track the goals in autonomous. We recently hit an error where the roboRIO runs out of memory when deploying GRIP alongside the code already on the robot. We are now looking into getting a Raspberry Pi dedicated to running GRIP. Here are some of my questions:
a. What are the steps involved in getting the software onto the Raspberry Pi?
b. How exactly do you run GRIP once it's on the Raspberry Pi?
c. How exactly do you connect a Raspberry Pi to the roboRIO?
d. Will the camera be connected to the roboRIO or the Raspberry Pi itself?
e. If we run GRIP on the Raspberry Pi, will we still be able to see the raw camera feed in the dashboard?
f. Can we see the camera feed that has been filtered by GRIP on the dashboard?
These are just some of the questions that I can think of right now. I will be working on this in the shop this afternoon so any help would be greatly appreciated.
We’ve been trying to figure this out too, but, as far as I know, no one has gotten this to work yet. If you have, please tell the rest of us. Plenty of teams have these exact questions and no one seems to know how to do it.
Currently, the closest I’ve seen to an answer is this.
I'm new to GRIP, so take the following with a grain of salt. Don't fall for the "I read it on the internet, so it must be true… 'Bonjour'" trap.
OK, we are also trying to get GRIP working on the Pi. It's a non-trivial task due to small differences in the processors. They are making good progress in this GitHub thread: https://github.com/WPIRoboticsProjects/GRIP/issues/366
We are taking a three-pronged approach…
Team 1. GRIP on the Driver Station. This works and allows us to start development.
Team 2. OpenCV on the Raspberry Pi, using pyNetworkTables for communication. This approach seems to be better documented.
Team 3. Compile GRIP for the Pi. This is the goal of the thread above. We'll have some of our mentors with Linux compiling backgrounds work on this.
It's our hope that, since all three methods use NetworkTables, our robot code won't change as we swap the vision processing between the three options.
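To make option 2 concrete, here's a rough sketch of the Pi-side math: turning a contour's center x (as GRIP/OpenCV would report it) into a heading error the robot code can steer on. The function name, camera FOV, and resolution are all made-up example values, not anything from a real setup.

```python
# Sketch of option 2: map a target contour's center x to a heading error.
# The 320-pixel width and 60-degree FOV below are illustrative assumptions;
# substitute your actual camera's values.

def center_x_to_degrees(center_x, image_width=320, horizontal_fov=60.0):
    """Linear approximation: pixel offset from image center -> degrees.

    center_x:       x coordinate of the target contour's center, in pixels
    image_width:    frame width in pixels
    horizontal_fov: camera horizontal field of view, in degrees
    """
    half_width = image_width / 2.0
    return (center_x - half_width) / half_width * (horizontal_fov / 2.0)

# On the Pi you would then publish the result with pyNetworkTables, roughly:
#   from networktables import NetworkTables
#   NetworkTables.initialize(server="roborio-XXXX-frc.local")  # XXXX = team #
#   NetworkTables.getTable("vision").putNumber("heading_error", error)

if __name__ == "__main__":
    print(center_x_to_degrees(160))  # target dead center -> 0.0 degrees
    print(center_x_to_degrees(320))  # target at right edge -> +30.0 degrees
```

The linear mapping is only approximate near the edges of the frame, but it's plenty for a "turn until centered" autonomous routine.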
Can you see the Pi camera feed on DriverStation?
TL;DR: No
GRIP has been upgraded to publish its video stream, but it's my understanding that in the Driver Station you're not able to select where your feed comes from. So it's a feature that needs to be added to the DS; once it is, it would all work.
I think once they get it working (and I think they're close), it might be cool to schedule an IRC chat and build it all together over a couple of nights. A key to this is that we'd all have to use the exact same Linux distro and all use the Pi 2. Mixing co-processors could cause minor differences in the installation.
b. See a
c. You have a few choices. I'm going to push our students to use NetworkTables. Other options are raw UDP or TCP over the network, Serial, I2C, and SPI, listed roughly from easiest to most difficult.
d. Raspberry Pi
e. I think I read somewhere that you can publish the stream from the Pi to the dashboard. I can’t find that now.
f. If e works, I would assume so.
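For answer c above, a minimal sketch of the NetworkTables route might look like this. The table/key names and the example team number are hypothetical; the one real detail is that the roboRIO answers mDNS as roborio-&lt;team&gt;-frc.local, so that's what the Pi points pyNetworkTables at.

```python
# Sketch for answer c: Pi -> roboRIO over NetworkTables.
# The lazy import keeps the hostname helper usable even where
# pynetworktables isn't installed.

def roborio_mdns(team_number):
    """Build the roboRIO's mDNS hostname from the team number."""
    return "roborio-{}-frc.local".format(team_number)

def connect_to_robot(team_number):
    """Initialize NetworkTables as a client of the roboRIO.

    Assumes pynetworktables is installed on the Pi
    (pip install pynetworktables).
    """
    from networktables import NetworkTables
    NetworkTables.initialize(server=roborio_mdns(team_number))
    return NetworkTables.getTable("vision")  # "vision" is an example name

# Usage on the Pi (team 254 is just an illustration):
#   table = connect_to_robot(254)
#   table.putNumber("heading_error", 3.5)
# The robot code then reads the same table and key on its side.
```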