Quote:
Originally Posted by MrRoboSteve
Our team wants to get serious about vision this year, and I'm curious what people think is the state of the art in vision systems for FRC.
Questions:
1. Is it better to do vision processing onboard or with a coprocessor? What are the tradeoffs? How does the RoboRIO change the answer to this question?
2. Which vision libraries? NI Vision? OpenCV? RoboRealm? Any libraries that run on top of any of these that are useful?
3. Which teams have well developed vision codebases? I'm assuming teams are following R13 and sharing out the code.
4. Are there alternatives to the Axis cameras that should be considered? What USB camera options are viable for 2015 control system use? Is the Kinect a viable vision sensor with the RoboRIO?
I don't think most teams fail at vision processing because of any of the items listed. FIRST provides vision sample programs for the main vision task that generally work well. Here's what I think teams actually need to work on to be successful with vision processing:
- You need a way to adjust constants quickly, both for initial tuning and for re-tuning under the conditions at competition.
- You need a way to view, save, and retrieve images, which is what lets you tune those constants against real data.
- You need a way to act on the vision data, for example to turn accurately to an angle and drive a set distance.
- You need to understand exactly what the vision requirements are for the game. Most of the time, there are one or more assumptions you can make that will greatly simplify the task.
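To make the "act on the vision data" point concrete, here's a minimal stdlib-only Python sketch of two of the pieces above: loading tunable constants from a file (so you can edit them at competition without redeploying code) and converting a target's pixel position into a turn angle using a pinhole-camera model. The function names, the JSON file, and the default threshold values are all hypothetical illustrations, not part of any FRC sample program.

```python
import json
import math

# Hypothetical default thresholds; real values come from tuning on saved images.
DEFAULTS = {"hue_min": 60, "hue_max": 100, "sat_min": 100, "val_min": 80}

def load_constants(path):
    """Merge constants from a JSON file over the defaults.

    Keeping these in a file (rather than hard-coded) means a pit-crew edit
    and a restart is enough to re-tune for venue lighting.
    """
    try:
        with open(path) as f:
            loaded = json.load(f)
        return {**DEFAULTS, **loaded}
    except FileNotFoundError:
        return dict(DEFAULTS)

def pixel_to_angle_deg(target_x, image_width, horizontal_fov_deg):
    """Angle (degrees) from the camera's optical axis to a target column.

    Uses a pinhole model: first recover the focal length in pixels from
    the camera's horizontal field of view, then take the arctangent of
    the pixel offset from image center.
    """
    focal_px = (image_width / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
    return math.degrees(math.atan2(target_x - image_width / 2, focal_px))
```

A target dead-center in a 640-pixel-wide image gives an angle of 0, and a target at the right edge of a 60-degree-FOV camera gives 30 degrees; that angle is exactly what you'd feed a turn-to-angle drive routine.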
As for your third question, Team 341's 2012 vision sample program is probably the most popular:
http://www.chiefdelphi.com/media/papers/2676
As for us, we've used LabVIEW/NI Vision running on the dashboard PC, which makes it much easier to tweak constants and to view and save images.
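Whatever platform you process on, the save-and-retrieve half is mostly bookkeeping. Here's a small, hedged stdlib-only Python sketch of one way to do it: write each captured frame to a timestamped file and prune the oldest ones so a full day of matches doesn't fill the disk. The function name, directory, and `keep` limit are illustrative choices, not any team's actual code.

```python
import os
import time

def save_frame(frame_bytes, directory="vision_logs", keep=200):
    """Save one encoded frame (e.g. JPEG bytes) with a millisecond timestamp name.

    Keeps at most `keep` files in the directory, deleting the oldest first,
    so logging can run unattended for a whole competition day.
    """
    os.makedirs(directory, exist_ok=True)
    name = os.path.join(directory, "frame_%d.jpg" % int(time.time() * 1000))
    with open(name, "wb") as f:
        f.write(frame_bytes)
    # Timestamped names sort chronologically, so the head of the sorted
    # listing is the oldest frames; drop everything beyond the keep limit.
    files = sorted(os.listdir(directory))
    for old in files[:-keep]:
        os.remove(os.path.join(directory, old))
    return name
```

Reviewing these saved frames after a match is how you find out why a detection failed, and they double as a test corpus for tuning your thresholds offline.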