We want to start experimenting with vision processing using a Raspberry Pi, OpenCV, and a USB camera.
We use LabVIEW to program our robot, but I know enough C++ to work with some of the OpenCV libraries. I have a couple of questions that I couldn’t find the answers to in my search:
How can I send image data from the Raspberry Pi to the RoboRIO (which is running LabVIEW)?
Is the Raspberry Pi even a good choice?
Is OpenCV a better choice than the LabVIEW vision VIs?
You can use NetworkTables to send image data from the Pi to the RoboRIO. You would have to compile the ntcore library for the Raspberry Pi, which isn’t too hard to do, and then just link against it. You could also build a custom communication interface, which shouldn’t be too hard either, but NetworkTables is the easiest.
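If it helps, here’s a minimal sketch of what the Pi side might look like, assuming the 2016-era ntcore C++ API; the team number, table name, and keys below are placeholders, not anything official:

```cpp
// Minimal sketch: publish vision results from the Pi over NetworkTables.
// Assumes the 2016-era ntcore C++ API; names below are placeholders.
#include "networktables/NetworkTable.h"

int main() {
    NetworkTable::SetClientMode();
    NetworkTable::SetIPAddress("roborio-9999-frc.local");  // your team number here
    NetworkTable::Initialize();
    auto table = NetworkTable::GetTable("vision");

    while (true) {
        // In practice these come out of your OpenCV pipeline each frame.
        double targetX = 0.0, targetY = 0.0;
        table->PutNumber("targetX", targetX);
        table->PutNumber("targetY", targetY);
    }
}
```

On the LabVIEW side you’d read the same keys with the NetworkTables VIs.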
As long as you are using a Pi 2, you should be good. In fall 2013 I did a lot of camera testing and got about 9 fps out of a Pi 1 and 16 out of a BeagleBone Black. A Pi 2 should be much quicker than the BBB, so it should land somewhere in the 25 fps range.
I’ve always found that OpenCV is much easier to work with than NIVision. In addition, since it’s not running on the RoboRIO, the RoboRIO only has to run robot code. Having the processor on the robot is better than on the driver station IMO, because I’ve seen FTAs ask teams to turn off their dashboards to ease up field traffic, and if your processor is on board, you don’t lose tracking even if they ask you to do that.
I’d suggest that you run the existing vision examples and try the vision processing on the roboRIO. The roboRIO is many times faster than the Pi 2 and doesn’t require you to coordinate several processors, different implementations of NetworkTables, etc. Once you have some experience with that, you can run a similar experiment on the Pi 2.
Also, keep in mind that fps is not necessarily a good measure of how well a vision system is working. Most of the time, it is a good idea to use the camera as a slow sensor and close the loop on trajectory or orientation using a different sensor.
True, as long as you can process faster than the camera sends images. We couldn’t get the Axis camera to send fewer than 15 fps, so it would actually start lagging and get further and further behind as more images came in. And we couldn’t figure out a way to make OpenCV flush the buffer up to the last image.
Also, how is the RoboRIO (dual-core ~700 MHz ARMv7) much faster than a Pi 2 (quad-core 1000 MHz ARMv7)?
The camera drivers or IP cameras may indeed send more images than you asked for. The LabVIEW WPILib implementation consumes all of those images and hands the user code the latest one when asked. If you want to process 10 per second, it takes in 15 and hands out 10. This avoids introducing lag. I’m not sure if WPILib does this for other languages.
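For what it’s worth, one common way to get the same “always hand out the newest frame” behavior in plain OpenCV C++ is a capture thread that continuously drains the camera. This is just an illustrative sketch (device index and the processing loop are assumptions), not the WPILib implementation:

```cpp
// Sketch: a grabber thread reads frames as fast as the camera sends them
// and keeps only the most recent one, so slow processing never falls behind.
#include <opencv2/opencv.hpp>
#include <mutex>
#include <thread>

int main() {
    cv::VideoCapture cap(0);  // device index is an assumption
    cv::Mat latest;
    std::mutex mtx;

    // Drain the camera continuously so stale frames never pile up.
    std::thread grabber([&] {
        cv::Mat frame;
        while (cap.read(frame)) {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
    });

    while (true) {
        cv::Mat frame;
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (latest.empty()) continue;  // nothing captured yet
            latest.copyTo(frame);
        }
        // ... run the (slower) processing pipeline on the newest frame ...
    }
}
```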
As for the processor comparison: I used the Wikipedia page for “instructions per second” and looked up the Cortex-A9, then divided by two to adjust for the clock speed. Then I looked up the Pi 2 and adjusted for the number of cores I thought it had. I’ll be honest, I don’t have a Pi and thought it had two cores, and I didn’t verify my assumption. Even with that math, the ratio is less than 2 to 1, so I shouldn’t have said “many times faster”.
I still think that the OP will be well served by looking at the examples, trying things on a single computer in a single language, and doing the control using additional sensors. If they then feel the need to expand the system with two of everything, they’ll be better prepared for the journey.
I recommend never sending the full images to the 'RIO at all. Use the Pi (OpenCV is a good way to go) to process the images down to the key pieces of info that you need and send **those** to the roboRIO. If you want a visual for the driver, you can have OpenCV generate “schematics” of the full images and send these reduced images to the driver station using much less bandwidth (and/or a higher frame rate) than if you send full images. The Canny edge detector is a great way to reduce an image to its key elements.
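As a rough illustration of the “schematic” idea (the function name is made up, and the thresholds are starting points to tune, not recommendations):

```cpp
// Sketch: reduce a color frame to a Canny edge map, a single-channel image
// that carries the key structure and streams with far less bandwidth.
#include <opencv2/opencv.hpp>

cv::Mat edgeSchematic(const cv::Mat& frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // knock down noise first
    cv::Canny(gray, edges, 50, 150);                    // tune thresholds per camera
    return edges;
}
```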
Is there a quick “getting started” guide for setting up NetworkTables in C++ (for OpenCV)? I know where the blocks are in LabVIEW, but I’ve never done it in C++.
Awesome! I’ll look into the Pi 2 more specifically.
Right, I have heard running vision processing code on the RoboRIO will slow it down. That’s why I was wondering about how to transfer the image data from the board to the RoboRIO.
We have run a few examples already. We thought the results were slow and laggy, so we wanted to find a faster and more reliable solution.
For what it’s worth, Greg’s suggestion is a very good one. Try the examples before running off on the secondary system quest…
However, if you are undeterred and wish to forge ahead with sending data between two systems, I would look at NetworkTables (which has been heavily revised for this year) and possibly ROS. I2C and serial are also possibilities. You could also just send raw streams, but there are bandwidth limitations and port restrictions to keep in mind (read the rules).
Our team (900) has been doing a lot with vision processing. Last year we included an Nvidia Jetson TK1 on our robot that used a webcam and OpenCV to process the data. We used NetworkTables to send the data between the Jetson and the RoboRIO. We also proudly program our RoboRIO in LabVIEW.
EDIT: One more point to make. You don’t always NEED to process video. Sometimes a single image or a set of images will work just fine. Single frames are minuscule in comparison to video streams and a lot faster to process. For instance, this auto-aim was done using single-frame captures: https://www.youtube.com/watch?v=QT2OmzrAhPI
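In OpenCV terms the single-frame approach can be as simple as something like this (the camera index and the trigger are assumptions):

```cpp
// Sketch: grab exactly one frame on demand instead of streaming video.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // camera index is an assumption
    cv::Mat frame;
    // Imagine this runs when the driver presses an "aim" button.
    if (cap.read(frame)) {
        // ... locate the target in this single frame and publish the result ...
    }
    return 0;
}
```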
We just played around a bit with the NIVision examples in LabVIEW (Tutorial 8 in the “Tutorials” tab in LabVIEW). I don’t really remember the specifics, but I remember we were having trouble calibrating for the reflections off the plexiglass backs at competitions. I know a second board might not be the way to fix that, but I would also like to just tinker and experiment with a second board anyway.
FRC is a great place to experiment. If you have issues, Pi or roboRIO, please post info and ask questions. Lots of folks will learn from it. And good luck.
We were getting 15 fps from two separate cameras at 640p on the ODROID-U3 in 2014 (they obviously weren’t used much after the little FMS issue was found). I believe the company that makes them has moved on to making cheaper and more powerful alternatives now.
You really can get a lot more out of vision processing when running it on an independent board. Not only can you get better frame rates, but you can also test your code without the roboRIO, and with a proper monitor.
We used a Jetson TK1 instead of a Pi last year since the code running on the Pi was a bit too slow.
While others have been talking about NetworkTables, I would simply recommend serial. We used both serial ports for different purposes last year; remember that there is one on the case of the RIO and one on the MXP port.
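On the Pi side, the serial route can be as simple as opening the port with termios and writing small text lines. A hedged sketch (the device path, baud rate, and message format are all assumptions):

```cpp
// Sketch of the serial alternative: the Pi writes small text lines that
// the roboRIO reads on one of its serial ports.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  // adapter path varies
    if (fd < 0) return 1;

    termios tty{};
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);                  // raw bytes, no line editing
    cfsetispeed(&tty, B115200);       // match whatever the RIO side expects
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD);  // local line, enable receiver
    tcsetattr(fd, TCSANOW, &tty);

    // One newline-terminated line per result keeps LabVIEW parsing simple.
    const char msg[] = "targetX=12.3,targetY=-4.5\n";
    write(fd, msg, sizeof(msg) - 1);
    close(fd);
    return 0;
}
```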