04-08-2014, 11:45 PM
Jared Russell (FRC #0254, The Cheesy Poofs; FRC #0341, Miss Daisy)

Team 254 Presents: CheesyVision

Like many teams this season, Team 254 was surprised when we got to our first competition and found that the Hot Goal vision targets were not triggering right at the start of autonomous mode. There seem to have been some improvements over the weeks, but there is still anywhere from 0.5 to 1.5 seconds of delay.

We had originally planned on using a sensor on board the robot - an infrared photosensor from Banner - but the problems were that (a) you can't move the robot until the hot goal triggers or you'll miss the target, and (b) our drive team spent a lot of time lining up the sensor to be juuuust right (as Karthik and Paul often pointed out at Waterloo). Onboard cameras may be more tolerant of movement, but they introduce new hardware and wiring onto the robot.

We were intrigued by the Kinect, but thought: Why use the Kinect when our Driver Station already has a built-in webcam?

Introducing CheesyVision, our new laptop-based webcam system for simple gesture control of our robot. 254 ran this software at SVR and drove to the correct goal every single time. In eliminations, we installed it on 971 and it worked perfectly as well. We wanted to share it with all of FRC prior to the Championship: the field timing issue will probably never be perfect this season, but nobody should have to suffer for it.

CheesyVision is a Python program that runs on your Driver Station and uses OpenCV to process a video stream from your webcam.
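
If you're curious what that looks like, here is a minimal sketch of the capture-and-overlay loop (the box positions, colors, and window name below are placeholders, not the tool's actual layout):

```python
import cv2

# Placeholder (x, y, w, h) boxes; the real tool positions these itself.
BOXES = {
    "cal":   (280, 40, 80, 80),    # calibration box, top center
    "left":  (80, 200, 80, 80),    # left hand box
    "right": (480, 200, 80, 80),   # right hand box
}

cap = cv2.VideoCapture(0)  # default Driver Station webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Draw the three boxes on top of the live image.
    for x, y, w, h in BOXES.values():
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("CheesyVision", frame)
    if cv2.waitKey(10) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```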

There are three boxes overlaid on the webcam image:

- A calibration box (top center)
- Two boxes for your hands (left and right)

Basically, if the left and right boxes are similar in color to the calibration box, we assume your hand is not there. Before each match, our operator puts his hands in the left and right boxes, and then drops the one that corresponds to the goal that turns hot. The result is sent over a TCP socket to the cRIO; example Java code for a TCP server and a robot that uses this data in autonomous mode is provided, and doing the same thing in C++ or LabVIEW should be easy (if you implement one of these, please share it with the community!).
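
To make the color test and the wire format concrete, here is a rough sketch of the laptop-side idea. The threshold, port, byte packing, and box coordinates are illustrative stand-ins, not necessarily what our code does:

```python
import socket
import cv2
import numpy as np

CAL_BOX = (280, 40, 80, 80)     # placeholder box positions (x, y, w, h)
LEFT_BOX = (80, 200, 80, 80)
RIGHT_BOX = (480, 200, 80, 80)
THRESHOLD = 50.0                # max color distance still considered "empty"

def mean_color(frame, box):
    """Average BGR color inside an (x, y, w, h) region."""
    x, y, w, h = box
    return frame[y:y+h, x:x+w].mean(axis=(0, 1))

def hand_present(frame, box):
    """A hand is 'present' when the box's average color differs from
    the calibration box's average by more than THRESHOLD."""
    diff = mean_color(frame, box) - mean_color(frame, CAL_BOX)
    return bool(np.linalg.norm(diff) > THRESHOLD)

# Connect to the TCP server on the cRIO (10.TE.AM.2; port is a placeholder).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("10.2.54.2", 1180))

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    left = hand_present(frame, LEFT_BOX)
    right = hand_present(frame, RIGHT_BOX)
    # One byte per update: bit 1 = left hand present, bit 0 = right.
    sock.send(bytes([(left << 1) | right]))
```

On the robot side, the TCP server just accepts the connection and reads the latest value each loop; see the Java example in the repository for the actual protocol.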

There are tuning instructions in the source code, but because the algorithm is self-calibrating, we have found the default settings work pretty reliably under most lighting conditions as long as the color of your shirt and the color of your skin are different enough. Of course, you could use virtually anything besides skin and clothing as long as the colors differ.
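
If you do need to tune, the main knob in a sketch like the one above is the color-distance threshold (the name here is ours; check the source for the real tuning parameters):

```python
# Raise this if an empty box reads as "hand present" (e.g. busy background);
# lower it if your hand goes undetected (e.g. skin tone close to shirt color).
THRESHOLD = 50.0
```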

Here are some screenshots:

[screenshots attached to the original post]
To download and install the software, visit:
https://github.com/Team254/CheesyVision

Good luck!
