#18
12-10-2014, 21:02
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
faust1706 is infamous around these parts
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by marshall View Post
For all I know FIRST is going to drop vision entirely... I mean, cameras don't work under water do they?
AUVs (autonomous underwater vehicles) are gradually developing vision systems. A big problem is correcting the color distortion introduced by the water. A good friend of mine is working in a lab at Cornell, detecting and retrieving different colored balls at the bottom of a swimming pool.
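For context, a common first-order fix for that underwater color cast is a gray-world white balance: assume the scene averages to gray and rescale each channel accordingly. This is a minimal numpy sketch of that idea (my illustration, not the Cornell lab's actual method):

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each color channel so its mean
    matches the overall mean intensity, removing a uniform color cast
    (e.g. the blue-green tint water adds underwater)."""
    imgf = img.astype(np.float64)
    channel_means = imgf.reshape(-1, 3).mean(axis=0)  # mean of B, G, R (or R, G, B)
    gray = channel_means.mean()                       # target mean for every channel
    balanced = imgf * (gray / channel_means)          # per-channel gain
    return np.clip(np.rint(balanced), 0, 255).astype(np.uint8)
```

A uniformly blue-tinted frame comes out with equal channel means; real underwater correction also has to handle depth-dependent attenuation, which this simple global gain ignores.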

The task of finding your local position without GPS (so-called GPS-denied localization) becomes dramatically more complex when you do it in three dimensions (think quadcopters or AUVs).

Quote:
Originally Posted by yash101 View Post
The greatest problem with the Kinect was getting it to work. I have never succeeded in opening a Kinect stream from OpenCV!

The depth map of the Kinect is surprisingly accurate and powerful!

Next, perform a GaussianBlur transformation. Play around with the kernel size until the target is crisp and clear!

Hunter mentioned to me last year that at competitions you should ask field staff, as soon as possible, whether there will be time for you to calibrate your vision systems!
Over half the battle is getting everything to work, in my opinion. You have to compile source code and sometimes edit the CMakeLists.txt files (for example, if you want to build OpenCV with OpenNI support).

For those of you interested in what the depth map looks like for the Kinect: depth map

You can do a lot of cool things with a depth map, but that's for another discussion.

Personally, I am not a fan of blurring an image unless I absolutely have to, or unless my calculation requires the center of a contour rather than its corners; blurring rounds off corners, but it barely moves the centroid.
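For what it's worth, the "center, not corners" case is just first-order image moments. This numpy sketch (my illustration, not the poster's code) computes the centroid of a binary target mask, equivalent to OpenCV's m10/m00 and m01/m00:

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (x, y) of the nonzero pixels of a binary mask.
    Same result as cv2.moments: (m10 / m00, m01 / m00).
    Returns None if the mask is empty."""
    ys, xs = np.nonzero(mask)   # row (y) and column (x) indices of target pixels
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()
```

Because every target pixel contributes equally, a little Gaussian blur beforehand shifts this value very little, unlike corner positions, which smear out.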

You should be asking when you can calibrate your vision system, to the point that it is borderline harassment, until you get an answer. A lot of venues are EXTREMELY poor lighting environments due to window locations, but there isn't much you can do about it. As an example: uhhhh
By lunch on Thursday, I had it working like it did in St. Louis: stl

Here is a short "video" a student and I made during calibration in St. Louis: video. We tweaked some parameters and got it to work nearly perfectly. As you can guess, we tracked the tape and not the LEDs for hot goal detection. I somewhat regret that decision, but it's whatever now.

final
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."

Last edited by faust1706 : 12-10-2014 at 21:20.