#1
Vision Processing - Questions
Hello everyone,
We want to start experimenting with vision processing using a Raspberry Pi, OpenCV, and a USB camera. We use LabVIEW to program our robot, but I know enough C++ to work with some of the OpenCV libraries. I have a couple of questions that I couldn't find the answers to in my search:

Thanks, Joe Kelly
#2
Re: Vision Processing - Questions
As long as you are using a Pi 2, you should be good. In fall 2013 I did a lot of camera testing and got about 9 fps out of a Pi 1 and 16 out of a BeagleBone Black; a Pi 2 should be much quicker than the BBB, so it should get into the 25 fps range. I've always found that OpenCV is much easier to work with than NIVision. In addition, since it's not running on the roboRIO, the roboRIO can be dedicated to robot code. Having the processor on the robot is better than on the driver station, IMO: I've seen FTAs ask teams to turn off their dashboards to ease up field traffic, and if your processor is on board, you don't lose tracking even if they ask you to do that.
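Frame rates like these are easy to benchmark yourself before committing to a board. A minimal sketch in Python; the capture and processing callables here are placeholders standing in for your camera read and OpenCV pipeline, not any particular library's API:

```python
import time

def measure_fps(process_frame, capture_frame, duration=1.0):
    """Run a capture -> process loop for `duration` seconds and
    return the achieved frames per second."""
    start = time.monotonic()
    frames = 0
    while time.monotonic() - start < duration:
        frame = capture_frame()   # e.g. read from the USB camera
        process_frame(frame)      # e.g. threshold + contour detection
        frames += 1
    return frames / duration
```

Running the same loop on each candidate board (Pi 1, BBB, Pi 2) gives directly comparable numbers for your actual pipeline rather than a synthetic benchmark.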
#3
Re: Vision Processing - Questions
I'll suggest that you run the existing vision examples and try the vision processing on the roboRIO. The roboRIO is many times faster than the Pi 2 and doesn't require you to coordinate several processors, different implementations of NetworkTables, etc. Once you have some experience with that, you can run a similar experiment on the Pi 2.

Also, keep in mind that fps is not necessarily a good measure of how well a vision system is working. Most of the time, it is a good idea to use the camera as a slow sensor and close the loop on trajectory or orientation using a different sensor.

Greg McKaskle
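The "slow sensor" idea can be sketched as follows: vision fixes arrive only a few times per second and merely move the setpoint, while a fast gyro closes the turning loop at the robot rate. The class name, gain, and methods here are hypothetical illustrations, not WPILib API:

```python
class VisionAssistedTurn:
    """Close the turning loop on a fast gyro; slow vision fixes
    only update the setpoint (hypothetical helper, not WPILib)."""

    def __init__(self, kp=0.02):
        self.kp = kp
        self.target_heading = None

    def on_vision_fix(self, gyro_heading, target_offset_deg):
        # Vision arrives at maybe 10 fps: convert the camera's angular
        # offset into an absolute gyro setpoint, then stop relying on
        # the (laggy) image until the next fix.
        self.target_heading = gyro_heading + target_offset_deg

    def turn_command(self, gyro_heading):
        # Called at the fast robot-loop rate (e.g. 50 Hz) between fixes.
        if self.target_heading is None:
            return 0.0
        error = self.target_heading - gyro_heading
        return max(-1.0, min(1.0, self.kp * error))
```

This way a 10 fps camera still yields smooth 50 Hz control, because the high-rate sensor, not the image stream, drives the loop.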
#4
Re: Vision Processing - Questions
Also, how is the roboRIO (dual-core ~700 MHz ARMv7) much faster than a Pi 2 (quad-core 1000 MHz ARMv7)?
#5
Re: Vision Processing - Questions
The camera drivers or IP cameras may indeed send more images than you asked for. The LabVIEW WPILib implementation consumes all of those images and hands the user code the latest one when asked. If you want to process 10 per second, it takes in 15 and hands out 10. This avoids introducing lag. I'm not sure if WPILib does this for other languages.

As for the processor comparison: I used the Wikipedia page for "instructions per second" and looked up the Cortex-A9, then divided by two to adjust for the processor speed. Then I looked up the Pi 2 and adjusted for the number of cores I thought it had. I'll be honest, I don't have a Pi; I thought it had two cores and didn't verify my assumption. Even with that math, the ratio is less than 2 to 1, so I shouldn't have said "many times faster". I still think the OP will be well served by looking at the examples and trying things on a single computer, in a single language, and doing the control using additional sensors. If they then feel the need to elaborate the system with two of everything, they will be better prepared for the journey.

Greg McKaskle
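The frame-consuming behavior described above (always hand the user the newest image, silently dropping the backlog) can be sketched in a few lines. This is an illustration of the pattern, not the WPILib implementation:

```python
import threading

class LatestFrame:
    """Keep only the most recent frame so a slow consumer never
    processes a stale backlog (sketch of the drop-old-frames pattern)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        # Capture thread: overwrite whatever is there.
        # Older frames are simply discarded, which avoids lag.
        with self._lock:
            self._frame = frame

    def get(self):
        # Processing thread: always sees the newest frame.
        with self._lock:
            return self._frame
```

Contrast this with a FIFO queue, where a processor slower than the camera falls progressively further behind real time.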
#6
Re: Vision Processing - Questions
I recommend never sending the full images to the 'RIO at all. Use the Pi (OpenCV is a good way to go) to process the images down to the key pieces of info that you need, and send those to the roboRIO. If you want a visual for the driver, you can have OpenCV generate "schematics" of the full images and send these reduced images to the driver station using much less bandwidth (and/or a higher frame rate) than if you sent full images. The Canny edge detector is a great way to reduce an image to its key elements.
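To make "send only the key pieces of info" concrete, here is a hypothetical 16-byte target report packed with Python's `struct` module (the field layout is an invented example, not a standard format). Compare 16 bytes per frame to roughly 300 KB for one uncompressed 640x480 grayscale image:

```python
import struct

# Compact target report: frame id, x offset (deg), y offset (deg),
# distance (m). Little-endian: uint32 followed by three float32s.
TARGET_FMT = "<Ifff"

def pack_target(frame_id, x_deg, y_deg, dist_m):
    """Serialize one vision result into a 16-byte message."""
    return struct.pack(TARGET_FMT, frame_id, x_deg, y_deg, dist_m)

def unpack_target(payload):
    """Recover (frame_id, x_deg, y_deg, dist_m) on the roboRIO side."""
    return struct.unpack(TARGET_FMT, payload)
```

The same 16-byte message works equally well over NetworkTables (as a raw entry), a serial link, or UDP.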
#7
Re: Vision Processing - Questions
Thanks everyone!
#8
Re: Vision Processing - Questions
However, if you are undeterred and wish to forge ahead, and you want to send data between two systems, then I would look at NetworkTables (which has been heavily revised for this year) and possibly at ROS. I2C and serial are also possibilities. You could also just send raw streams, but there are bandwidth limitations and port restrictions to keep in mind (read the rules).

Our team (900) has been doing a lot with vision processing. Last year we included an NVIDIA Jetson TK1 on our robot that used a webcam and OpenCV to process the data. We used NetworkTables to send the data between the Jetson and the roboRIO. We also proudly program our roboRIO in LabVIEW.

EDIT: One more point to make. You don't always NEED to process video. Sometimes a single image or a set of images will work just fine. Single frames are minuscule in comparison to video streams and a lot faster to process. For instance, this auto aim was done using single frame captures: https://www.youtube.com/watch?v=QT2OmzrAhPI

Last edited by marshall : 12-12-2015 at 12:24.
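A quick back-of-the-envelope helper shows why single frames and reduced data are so much cheaper than continuous streams. This computes the uncompressed upper bound; real MJPEG/H.264 streams compress well below it:

```python
def stream_bandwidth_mbps(width, height, bytes_per_pixel, fps):
    """Uncompressed video bandwidth in megabits per second.
    Treat this as an upper bound; compression reduces it a lot."""
    return width * height * bytes_per_pixel * fps * 8 / 1_000_000

# A 320x240 RGB stream at 15 fps needs ~27.6 Mbps uncompressed,
# while a single on-demand frame is a one-shot ~230 KB transfer.
```

Numbers like these are why single-frame capture plus on-board processing fits comfortably inside FRC field bandwidth limits when a full stream would not.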
#9
Re: Vision Processing - Questions
Do you remember which examples? And what image size and results?
Greg McKaskle
#10
Re: Vision Processing - Questions
We just played around a bit with the NIVision examples in LabVIEW (Tutorial #8 in the "Tutorials" tab in LabVIEW). I don't really remember the specifics, but I remember we were having trouble calibrating for the reflections off the plexiglass backings at competitions. I know a second board might not be the way to fix that, but I would also like to just tinker and experiment with a second board anyway. :-)
#11
Re: Vision Processing - Questions
FRC is a great place to experiment. If you have issues, PI or roboRIO, please post info and ask questions. Lots of folks will learn from it. And good luck.
Greg McKaskle
#12
Re: Vision Processing - Questions
We were getting 15 fps from two separate cameras at 640p on the ODROID-U3 in 2014 (they obviously were not used much after the little FMS issue was found). I believe the company that makes them has since moved on to cheaper and more powerful alternatives.

You really can get a lot more out of vision processing when running it on an independent board. Not only can you get better frame rates, but you can also test your code without the roboRIO, and with a proper monitor.
#13
We used a Jetson TK1 instead of a Pi last year, since the code running on the Pi was a bit too slow.

While others have been talking about NetworkTables, I would simply recommend serial. We used both serial ports for different purposes last year; remember that there is one on the case of the RIO and one on the MXP port.
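If you go the raw serial route, you need some framing so the RIO can tell where a message starts and whether bytes were dropped. A hypothetical minimal scheme (start byte, length, payload, XOR checksum; this is an illustration, not a WPILib or FRC protocol):

```python
import struct

START = 0xA5  # arbitrary marker byte chosen for this sketch

def frame_message(payload: bytes) -> bytes:
    """Wrap a payload for a raw serial link:
    start byte, 1-byte length, payload, XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return struct.pack("<BB", START, len(payload)) + payload + bytes([checksum])

def parse_message(buf: bytes):
    """Return the payload if the frame is intact, else None."""
    if len(buf) < 3 or buf[0] != START:
        return None
    length = buf[1]
    if len(buf) != length + 3:
        return None  # truncated or oversized frame
    payload, checksum = buf[2:2 + length], buf[-1]
    xor = 0
    for b in payload:
        xor ^= b
    return payload if xor == checksum else None
```

On the receiving side you would scan the incoming byte stream for the start byte and resynchronize after any failed checksum, so one corrupted frame doesn't poison the rest of the match.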