|
Re: Vision Processing
Quote:
Originally Posted by RamZ
This year, my team wants to use vision processing, specifically using the retroreflective material near the goals to help us aim this year.
Two ways that I've seen this done is using OpenCV, and also with RoboRealm.
Which of these, or any other, vision processing utility would you guys recommend, and why?
We program in Java if it helps.
Also: onboard, offboard, or coprocessor?
Thanks.
|
Performing vision processing on the Dashboard works very well, and depending on your laptop you can achieve nearly 30 fps. DaisyCV from 2012 is probably the best dashboard vision system I have witnessed; the code was released by Team 341 and may be useful for teams to keep in their codebase. When my team takes on a new coding challenge, I like the system we implement to be robust enough that it becomes a part of an arsenal we can use year after year. Vision processing on the dashboard is a very smart way to go.
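Whichever pipeline you pick, the output you ultimately want is a steering correction. As a minimal sketch of that last step (the 47-degree horizontal field of view below is an assumed value for illustration, roughly that of the Axis M1011 at 320x240 — calibrate your own camera rather than trusting it):

```java
// Convert a detected target's pixel x-center into a heading error the
// drivetrain can turn through. FOV and resolution are assumed values.
public class AimMath {
    static final double IMAGE_WIDTH_PX = 320.0;    // 320x240 stream
    static final double HORIZONTAL_FOV_DEG = 47.0; // assumed camera FOV

    // Degrees the robot must turn to center the target in the image.
    // Positive means the target is to the right of center.
    public static double headingErrorDeg(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        double degPerPixel = HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX;
        return offsetPx * degPerPixel;
    }

    public static void main(String[] args) {
        System.out.println(headingErrorDeg(160.0)); // centered target -> 0.0
        System.out.println(headingErrorDeg(240.0)); // target right of center
    }
}
```

This linear pixels-to-degrees mapping is only an approximation (it ignores lens distortion), but it is usually good enough to feed a turn-to-angle loop on the drivetrain.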
However, a valid reason to go with an onboard co-processor is that you may end up at an event where the FTAs turn off your dashboard/camera feed to help reduce network issues.
I know many teams had this happen to them last year, and teams relying on vision processing on their dashboard were forced to play handicapped. This can be avoided by processing vision on board and reducing or removing any streams back to the dashboard; the trade-off is that you need to budget space, weight, and power for the onboard processor. If disabled dashboards at your events are a concern for you, then I recommend a co-processor. This is the method my team employs: we use a BeagleBone White on board and open TCP sockets between the Bone and the cRIO. No data is sent to the driver station, so we don't have to worry about bandwidth limitations or FTAs shutting down cameras. We achieve a solid 10 fps in real time with FFmpeg, OpenCV, and a 320x240 image from the Axis M1011 camera.
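To give a feel for the co-processor link, here is a minimal sketch of a robot-side TCP client, assuming a simple line-based request/response protocol (the "GET" request and the "angle,distance" reply format are invented for illustration; our actual protocol differs in the details):

```java
import java.io.BufferedReader;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Robot-side TCP client: asks the vision co-processor for the latest
// target fix. Port and message format are illustrative assumptions.
public class VisionClient {
    private final BufferedReader in;
    private final PrintWriter out;

    public VisionClient(String host, int port) throws IOException {
        Socket socket = new Socket(host, port);
        in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        out = new PrintWriter(socket.getOutputStream(), true);
    }

    // Request the latest target fix, e.g. the line "12.5,180.0",
    // and return it as { headingDegrees, distanceInches }.
    public double[] getTarget() throws IOException {
        out.println("GET");
        String line = in.readLine();
        if (line == null) throw new EOFException("vision link closed");
        String[] parts = line.split(",");
        return new double[] { Double.parseDouble(parts[0]),
                              Double.parseDouble(parts[1]) };
    }
}
```

The vision program on the Bone runs the matching server side, so all camera traffic stays on the robot and only a few bytes of target data cross the socket per loop.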
A third option floating around on Chief Delphi that some teams are using is a Class 1 laser detector aimed at the hot target: it gives a true/false reading in the presence of a hot target once the match starts. No vision camera required. If it works, it would be a very simple, elegant solution.
Hope this helps,
Kevin
__________________
Controls Engineer, Team 2168 - The Aluminum Falcons
[2016 Season] - World Championship Controls Award, District Controls Award, 3rd BlueBanner
-World Championship- #45 seed in Quals, World Championship Innovation in Controls Award - Curie
-NE Championship- #26 seed in Quals, winner(195,125,2168)
[2015 Season] - NE Championship Controls Award, 2nd Blue Banner
-NE Championship- #26 seed in Quals, NE Championship Innovation in Controls Award
-MA District Event- #17 seed in Quals, Winner(2168,3718,3146)
[2014 Season] - NE Championship Controls Award & Semi-finalists, District Controls Award, Creativity Award, & Finalists
-NE Championship- #36 seed in Quals, SemiFinalist(228,2168,3525), NE Championship Innovation in Controls Award
-RI District Event- #7 seed in Quals, Finalist(1519,2168,5163), Innovation in Controls Award
-Groton District Event- #9 seed in Quals, QuarterFinalist(2168, 125, 5112), Creativity Award
[2013 Season] - WPI Regional Winner - 1st Blue Banner
Last edited by NotInControl : 25-02-2014 at 22:56.
|