View Full Version : Teams that have used a Raspberry Pi
Hello, we would like to know if any other teams have used the Raspberry Pi and connected it to the roboRIO. We are trying to use the Raspberry Pi Camera to handle vision processing.
JohnFogarty
28-10-2016, 20:24
I wasn't the lead on our RPi Vision project, I'll have him come in and help you when he gets back from his vacation, but I recommend you look at our GitHub as a place to start. We implemented a vision system on our RPi 2 with a PiCam using Network Tables for variable communication to the RoboRIO.
https://github.com/GarnetSquadron4901
https://github.com/GarnetSquadron4901/rpi-vision-processing
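In case it helps to see the shape of that setup, here's a minimal sketch of publishing vision results to the roboRIO over NetworkTables with pynetworktables. The table name, key names, field of view, and mDNS server name are my own illustrative assumptions, not what 4901 actually used:

```python
def angle_to_target(target_x_px, frame_width_px=320, fov_deg=53.5):
    """Convert a target's pixel x-coordinate to a steering angle in degrees.
    fov_deg is the camera's assumed horizontal field of view."""
    center = frame_width_px / 2.0
    return (target_x_px - center) / frame_width_px * fov_deg

def publish_target(angle_deg, found):
    # Imported here so the helper above stays usable without pynetworktables.
    from networktables import NetworkTables
    NetworkTables.initialize(server="roborio-4901-frc.local")  # team mDNS name
    table = NetworkTables.getTable("vision")  # table/key names are assumptions
    table.putNumber("angle", angle_deg)
    table.putBoolean("found", found)

if __name__ == "__main__":
    publish_target(angle_to_target(200), True)
```

On the roboRIO side, the robot code reads the same keys back out of the "vision" table each loop iteration.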
Billfred
28-10-2016, 20:38
Adding onto this: the stock Pi was not designed to be absolutely beat to hell, as robots did to it in FIRST Stronghold. 4901's Pi's microSD slot wouldn't retain the card by the end of its first event, at which point we abandoned it because of other priorities. Plan for some mechanical protection and shock relief when you mount it; at the minimum, make it hard for the power and microSD connections to fail.
billbo911
28-10-2016, 20:49
We are currently using a Pi on both our bots. We run OpenCV with a Microsoft webcam. We send target data to the Rio via UDP. Works great!
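Sending target data over UDP like this needs nothing beyond the Python standard library. This is a hedged sketch, not billbo911's actual code; the port and the JSON payload format are assumptions (FRC field rules typically leave ports 5800-5810 open for team use, and 10.TE.AM.2 is the conventional placeholder for the roboRIO's static IP):

```python
import json
import socket

# Placeholder address: TE.AM stands for the team number; the port choice
# within the team-use range is an assumption.
RIO_ADDR = ("10.TE.AM.2", 5800)

def encode_target(x_px, y_px, found):
    """Pack target data into a small JSON datagram."""
    return json.dumps({"x": x_px, "y": y_px, "found": found}).encode()

def send_target(sock, x_px, y_px, found, addr=RIO_ADDR):
    """Fire-and-forget: a dropped packet is fine, a fresh frame follows."""
    sock.sendto(encode_target(x_px, y_px, found), addr)
```

UDP suits this use because a stale target measurement isn't worth retransmitting; the Rio just acts on the newest datagram it has received.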
OK, thank you. Also, have you had problems with communication lag between the systems?
beijing_strbow
29-10-2016, 00:23
5968 spent quite a while developing an image processing algorithm to run on a Raspberry Pi (only to abandon it). I did most of the serial communication between the pi and roboRIO, and somebody else did the actual image processing algorithm.
Based on my experience, I wouldn't recommend it. First, as far as I could tell, the Raspberry Pi's built-in camera interface library doesn't support sending the image over serial (which would have been slow anyway - we could have used USB, but still, going from PiCamera to serial is not very nice). This becomes an issue if you decide you want to stream the video to your driver station.
Also, it took upwards of 15 seconds to run the processing algorithm on the Pi. This may have been due to inefficiencies in the algorithm, though the same code ran in about 3 seconds on a laptop.
The serial communication itself worked well though, once we got it working.
Perhaps our issues were a result of us trying it in our rookie year, which was probably a bit ambitious. I also haven't tried using a USB camera directly with the roboRIO, though we are going to experiment with that in the next few weeks.
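For teams heading down the serial route anyway, a little framing on the link saves a lot of debugging. Below is a sketch of one possible framed protocol with pyserial; the frame format (start byte, length, XOR checksum) and the UART device path are my own assumptions, not 5968's actual protocol:

```python
START = 0xA5  # arbitrary start-of-frame marker (assumption)

def checksum(payload: bytes) -> int:
    """XOR all payload bytes into a one-byte checksum."""
    c = 0
    for b in payload:
        c ^= b
    return c

def encode_frame(payload: bytes) -> bytes:
    """Frame: start byte, length, payload, checksum."""
    return bytes([START, len(payload)]) + payload + bytes([checksum(payload)])

def decode_frame(frame: bytes):
    """Return the payload, or None if the frame is malformed or corrupted."""
    if len(frame) < 3 or frame[0] != START:
        return None
    length = frame[1]
    if len(frame) != length + 3:
        return None
    payload = frame[2:2 + length]
    if checksum(payload) != frame[-1]:
        return None
    return payload

def send(payload: bytes, port="/dev/ttyAMA0", baud=115200):
    # Imported here so the framing helpers work without pyserial installed.
    # /dev/ttyAMA0 is the Pi's hardware UART; the baud rate is an assumption.
    import serial
    with serial.Serial(port, baud, timeout=1) as s:
        s.write(encode_frame(payload))
```

The checksum lets the roboRIO side silently drop any frame that got garbled on the wire instead of acting on bad data.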
Ben Wolsieffer
29-10-2016, 00:53
15 seconds to process a single frame? If so, there was definitely a problem with your algorithm. I have run a fairly complicated (non-FRC) vision algorithm on an RPi 2 at ~15 fps.
billbo911
29-10-2016, 11:01
Agreed!
When we run our processing loop as a single thread, we average better than 40 FPS on a 320×240 image. Run multi-threaded, we get better than 60 processed frames per second. The RPi has plenty of power for this approach.
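One way to structure a multi-threaded loop like that is a small capture/process pipeline where a bounded queue decouples the camera thread from the processing thread. This is a generic sketch, not billbo911's code; in practice grab_frame would wrap cv2.VideoCapture.read() and process_frame would hold the contour pipeline:

```python
import queue
import threading

def start_pipeline(grab_frame, process_frame, on_result, depth=2):
    """Run capture and processing in separate threads so a slow grab never
    stalls processing. grab_frame() returns a frame, or None to stop;
    process_frame(frame) returns a result that is passed to on_result."""
    frames = queue.Queue(maxsize=depth)  # bounded: capture can't run away

    def capture():
        while True:
            frame = grab_frame()
            frames.put(frame)  # None is forwarded as the stop sentinel
            if frame is None:
                return

    def worker():
        while True:
            frame = frames.get()
            if frame is None:
                return
            on_result(process_frame(frame))

    threads = [threading.Thread(target=capture), threading.Thread(target=worker)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The bounded queue is what buys the extra frame rate: while one frame is being processed, the next is already being grabbed.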
This may help, and I would suggest contacting the team via the website for further help: www.prhsrobotics.com
https://www.youtube.com/watch?v=ZNIlhVzC-4g
This was a TE Session at Georgia Tech that the programmers held.
Clem1640
01-11-2016, 06:05
We use a Raspberry Pi for vision processing. This was very successful this year. I will ask the project lead to join this dialogue.
One of our students wrote a good generic goal finder back in 2013 and we've used it with only minor tweaks since (except 2015 when the only vision targets were in the wrong place): https://github.com/frc3946/PyGoalFinder.
We've found it useful to use separate cameras for targeting (higher resolution, lower brightness) and driver vision (lower resolution, higher frame rate over the network).
Our entire control board for Stronghold was "shock mounted". Holding the board in place with bungee cord was originally a prototype hack, but after a few dozen times over the rock wall, we enhanced rather than replaced it.
I'm lead on the 4901 vision project this year.
Take a look at the repository that John linked above; it includes installation instructions. I'll have to update the NetworkTables portion due to changes in pynetworktables from this past week.
We used this 'hat' for the Raspberry Pi: https://www.amazon.com/Pi-Screw-Terminal-Breakout-Raspberry/dp/B016757GVW/ref=sr_1_20?ie=UTF8&qid=1478001403&sr=8-20&keywords=raspberry+pi+breakout
Unfortunately, it looks like it's no longer made, but any breakout hat would work. We used the proto area for a logic level converter.
One unique aspect of our vision system is that the LED ring is programmable via the Raspberry Pi. The LED ring's logic works at 5V, whereas the RPi operates at 3.3V. This is where a simple logic level converter comes in handy.
I recommend taking a look at Sparkfun's logic level converter. https://www.sparkfun.com/products/12009
I had some spare BSS138 N-channel MOSFETs from a different project that I made work with the through-hole mounting, but there are many alternatives available. You want one with a gate-source threshold voltage (Vgs) low enough to turn on at 3.3V; the BSS138 turns on at 1.5V, which, as Sparkfun notes, means it will work for 1.8V and 2.5V devices as well.
I'm driving WS2812 LEDs. I recommend getting genuine Adafruit ones. I tried some cheaper ones from eBay and had overheating issues running them at 5V at their highest brightness. To put this in perspective, I had 3 cheap ones fail (usually one LED goes out, which causes the rest of the chain to stop working). I haven't had an Adafruit NeoPixel ring (https://www.adafruit.com/products/1586) fail yet.
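For the ring itself, a minimal sketch with the rpi_ws281x library might look like the following (driven through the level shifter described above). The LED count, pin (GPIO 18 / PWM0), and brightness are assumptions, and the library needs to run as root:

```python
LED_COUNT, LED_PIN = 16, 18  # assumed: 16-LED ring on GPIO 18 (PWM0)

def dim_color(color, brightness):
    """Linearly scale an (r, g, b) tuple by brightness/255."""
    return tuple(c * brightness // 255 for c in color)

def ring_on(color=(0, 255, 0), brightness=64):
    """Light the whole ring one color, dimmed to avoid the overheating
    issues mentioned above."""
    # Imported here: rpi_ws281x needs root and the hardware PWM driver.
    from rpi_ws281x import PixelStrip, Color
    r, g, b = dim_color(color, brightness)
    strip = PixelStrip(LED_COUNT, LED_PIN)
    strip.begin()
    for i in range(LED_COUNT):
        strip.setPixelColor(i, Color(r, g, b))
    strip.show()
```

Because the ring is software-controlled, the vision script can switch it on only while actually hunting for a target, which also keeps the LEDs cooler.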
I bet if you were willing to permanently affix the SD card in its slot somehow (whether with a bracket of some kind or just by epoxying it in place), you could use SSH or the GPIO to update the software when needed.
Sounds like a job for hot glue to me.
Conor Ryan
07-11-2016, 11:38
Running GRIP (Graphically Represented Image Processing engine) on a Raspberry Pi 2: https://github.com/WPIRoboticsProjects/GRIP/wiki/Running-GRIP-on-a-Raspberry-Pi-2
marshall
07-11-2016, 12:47
Thank you!!! I'm going to start referencing this with teams when they ask me about Jetson stuff. I always tell them to start with GRIP but this might give them that "cool factor" of running a second processor along with the ease of GRIP.
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.