#8   05-06-2013, 12:14
JamesTerm
Terminator
AKA: James Killian
FRC #3481 (Bronc Botz)
Team Role: Engineer
 
Join Date: May 2011
Rookie Year: 2010
Location: San Antonio, Texas
Posts: 298
Re: Using a Raspberry Pi for camera tracking

Quote:
Originally Posted by Hjelstrom View Post
You can also check out this paper: http://www.chiefdelphi.com/media/papers/2698?
I think if you're going to use the Kinect, you should use its depth sensor rather than just using it as an IR camera. The depth sensing it does is incredibly powerful (though it has some quirks too).
This is a great link, and I'd like to highlight something you said in it:

"
Your team's vision system really inspired us to take another look at vision too though. Using the dashboard to do the processing helps in so many ways. The biggest I think is that you can "see" what the algorithm is doing at all times. When we wanted to see what our Kinect code is doing, we had to drag a monitor, keyboard, mouse, power inverter all onto the field. It was kind of a nightmare.
"

In our experience, seeing the algorithm is hugely important, for example when tuning thresholds dynamically. We also wanted to capture some raw video and do offline testing and tweaking against that footage, both to fix bugs in the algorithm and to improve it by eliminating more false positives.
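
To make the offline-testing idea concrete, here is a minimal sketch of what that workflow can look like: replay a recorded clip and adjust the threshold with sliders so you can "see" what the algorithm does. It assumes an OpenCV (Python) pipeline and a capture file named match_footage.avi; both the library choice and the file name are just for illustration, not a description of our actual dashboard code.

Code:
# Minimal sketch: tune an HSV threshold against recorded footage.
# Assumes OpenCV (Python) and a clip "match_footage.avi" (illustrative names).
import cv2

def nothing(_):
    pass

cv2.namedWindow("tuned")
# Sliders let you watch the effect of the threshold while the clip plays.
cv2.createTrackbar("H min", "tuned", 40, 179, nothing)
cv2.createTrackbar("H max", "tuned", 80, 179, nothing)
cv2.createTrackbar("V min", "tuned", 100, 255, nothing)

cap = cv2.VideoCapture("match_footage.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        # Loop the clip so you can keep tweaking on the same footage.
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h_min = cv2.getTrackbarPos("H min", "tuned")
    h_max = cv2.getTrackbarPos("H max", "tuned")
    v_min = cv2.getTrackbarPos("V min", "tuned")
    mask = cv2.inRange(hsv, (h_min, 0, v_min), (h_max, 255, 255))
    cv2.imshow("raw", frame)
    cv2.imshow("tuned", mask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()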

I think the ability to see the algorithm is one valid answer to the original question, "I was wondering if anyone has tried doing that or if it's a good idea or not."

The only drawback of dashboard processing is the bandwidth MJPEG uses. At 640x480 with default settings it costs about 11-13 Mbps. This season we are capped at 7 Mbps, and anything above 5 Mbps starts to introduce lag (as noted in the FMS white paper). We are looking into an H.264 solution that ranges from about 1.2 Mbps in good lighting to 5 Mbps in poor lighting at full 640x480 quality, with roughly 5 ms of latency, which should be plenty fast for closed-loop processing. If more teams start to use vision next season, we should encourage everyone to use lower bandwidth so that controls stay responsive (i.e. everybody wins).
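
For anyone who wants to sanity-check those numbers, here is a rough back-of-the-envelope calculation. The frame rate and compressed-frame size are my assumptions (roughly what a 640x480 MJPEG stream at default quality tends to produce), not measured values from our robot.

Code:
# Rough MJPEG bandwidth estimate: each frame is a standalone JPEG,
# so bandwidth is just (compressed frame size) x (frames per second).
FPS = 30                  # assumed frame rate
JPEG_KB_PER_FRAME = 50    # assumed size of a 640x480 JPEG at default quality

mjpeg_mbps = JPEG_KB_PER_FRAME * 1024 * 8 * FPS / 1_000_000
print(f"MJPEG estimate: {mjpeg_mbps:.1f} Mbps")  # ~12 Mbps, in line with 11-13 Mbps above

# H.264 mostly sends changes between frames, so a well-lit, mostly static scene
# compresses far better; the 1.2-5 Mbps range above reflects how noisy the image is.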

Last edited by JamesTerm : 06-06-2013 at 10:09.