11-12-2015, 20:39
Thad House
Volunteer, WPILib Contributor
no team (Waiting for 2021)
Team Role: Mentor
 
Join Date: Feb 2011
Rookie Year: 2010
Location: Thousand Oaks, California
Posts: 1,106
Re: Vision Processing - Questions

Quote:
Originally Posted by jojoguy10
Hello everyone,

We want to start experimenting with Vision Processing with the Raspberry Pi, OpenCV, and a USB camera.

We use LabVIEW to program our robot, but I know enough C++ to work with some OpenCV libraries. I have a couple of questions that I couldn't find the answers to in my search:
  1. How can I send image data from the Raspberry Pi to the RoboRIO (which is running LabVIEW)?
  2. Is the Raspberry Pi even a good choice?
  3. Is OpenCV a better choice than the LabVIEW vision VIs?

Thanks,
Joe Kelly
You can use NetworkTables to send your vision results (target coordinates and the like) from the Pi to the RoboRIO. You would have to compile the ntcore library for the Raspberry Pi, which isn't too hard to do, and then just link against it. You could also build a custom communication interface, which shouldn't be too hard either, but NetworkTables is the easiest.

As long as you are using a Pi 2, you should be good. In fall 2013 I did a lot of camera testing and got about 9 fps out of a Pi 1 and 16 fps out of a BeagleBone Black. A Pi 2 should be much quicker than the BBB, so it should get somewhere around 25 fps.

I've always found OpenCV much easier to work with than NI Vision. In addition, since it's not running on the RoboRIO, the RoboRIO is left to run only robot code. Having the processor on the robot is better than on the driver station, IMO: I've seen FTAs ask teams to turn off their dashboards to ease field traffic, and if your processor is on board, you don't lose tracking even if they ask you to do that.
__________________
All statements made are my own and not the feelings of any of my affiliated teams.
Teams 1510 and 2898 - Student 2010-2012
Team 4488 - Mentor 2013-2016
Co-developer of RobotDotNet, a .NET port of the WPILib.