Team 254 Presents: CheesyVision

Like many teams this season, Team 254 was surprised when we got to our first competition and found out that the Hot Goal vision targets were not triggering right at the start of autonomous mode. There seem to have been some improvements over the weeks, but there is still anywhere from 0.5 to 1.5 seconds of delay.

We had originally planned on using a sensor on board the robot - an infrared photosensor from Banner - but our problem was that (a) you can’t move the robot until the hot goal triggers or you’ll miss the target, and (b) it meant our drive team spent a lot of time lining up the sensors to be juuuust right (as Karthik and Paul often pointed out at Waterloo). Onboard cameras may be more tolerant of movement, but they introduce new hardware and wiring onto the robot.

We were intrigued by the Kinect, but thought: Why use the Kinect when our Driver Station already has a built-in webcam?

Introducing CheesyVision, our new laptop-based webcam system for simple gesture control of our robot. 254 ran this software at SVR and drove to the correct goal every single time. In eliminations, we installed it on 971 and it worked perfectly as well. We wanted to share it with all of FRC prior to the Championship, because we think that, even if the field timing issue is never perfect this season, nobody should have to suffer for it.

CheesyVision is a Python program that runs on your Driver Station and uses OpenCV to process a video stream from your webcam.

There are three boxes on top of the webcam image:

- A calibration box (top center)
- Two boxes for your hands (left and right)

Basically, if the left and right boxes are similar in color to the calibration box, we assume your hand is not there. Before each match, our operator puts his hands in the left and right boxes, and then drops the one that corresponds to the goal that turns hot. The result is sent over a TCP socket to the cRIO - example Java code for a TCP server and a robot that uses this data in autonomous mode is provided, and doing the same thing in C++ or LabVIEW should be easy (if you implement one of these, please share it with the community!).
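
For reference, here is a minimal sketch of that comparison and the single-byte send. This is not the actual CheesyVision source; the box positions, threshold, IP, port, and byte encoding are all illustrative placeholders.

```python
# Minimal sketch of the box-comparison idea (not the actual CheesyVision source;
# box positions, threshold, IP/port, and the byte encoding are illustrative).
import socket
import cv2
import numpy as np

def mean_color(frame, x, y, w, h):
    """Average BGR color inside a rectangular region of the frame."""
    return np.mean(frame[y:y+h, x:x+w].reshape(-1, 3), axis=0)

def hand_in_box(frame, box, cal_box, threshold=50.0):
    """A hand is 'in' a box if its average color differs enough from the calibration box."""
    diff = mean_color(frame, *box) - mean_color(frame, *cal_box)
    return bool(np.linalg.norm(diff) > threshold)

cap = cv2.VideoCapture(0)
sock = socket.create_connection(("10.2.54.2", 1180))  # cRIO IP and port are placeholders

CAL = (280, 20, 80, 80)      # calibration box, top center
LEFT = (40, 160, 120, 120)   # left hand box
RIGHT = (480, 160, 120, 120) # right hand box

while True:
    ok, frame = cap.read()
    if not ok:
        break
    left_in = hand_in_box(frame, LEFT, CAL)
    right_in = hand_in_box(frame, RIGHT, CAL)
    # Pack the two booleans into one status byte and send it to the robot.
    sock.send(bytes([(left_in << 1) | right_in]))
    cv2.imshow("CheesyVision sketch", frame)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break
```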

There are tuning instructions in the source code, but we have found that the default settings work pretty reliably under most lighting conditions, because the algorithm is self-calibrating, as long as your shirt and your skin color are different enough. Of course, you could use virtually anything besides skin and clothing, as long as the colors are different.

Here are some screenshots:
http://i.imgur.com/ktUE1rt.png
http://i.imgur.com/XARctmm.png
http://i.imgur.com/UWHum6k.png
http://i.imgur.com/9gRMQFv.png

To download and install the software, visit:

Good luck!


I saw this in person at SVR, and it is very cool. Great job 254, and thanks for sharing!

Now if only someone would use this same technology to block their 3 ball auto…

This is absolutely phenomenal, and since 781 had to remove their camera for weight, I needed a new method for hot goal detection. I have not been this happy about programming for a while; whether this works for us or not, I am incredibly grateful.

Too bad I can’t give rep more than once.

This really is cool. I like the method. The only problem is that Wildstang couldn’t use it :smiley:

In all seriousness, I think this is an excellent way of detecting hot goals. Very simple, and most laptops have a camera on them nowadays. I’ll keep it in mind for championships this weekend.

Thank you so much, we were just looking at how to implement our hot goal detection for champs, and this is an amazing solution. We also plan on extending it to tell the robot where to go while blocking during autonomous. Thank you so much for sharing this with the FIRST community!

We currently use the Kinect method, but I might be inclined to implement this instead. I didn’t develop something like this because the Kinect Java classes already existed and were fairly easy to use. I do like how this required some work, though.

Nice work.

I can’t wait to tell the beleaguered crew working on Kinect programming there may be another way!

It is a real shame 254 isn’t using the Kinect after its rousing success with it in 2012.


This weekend at the Windsor-Essex Great Lakes Regional I heard of 1559 using a very similar program for their Hot Goal detection. Instead they used cards that had symbols on them, and I believe they had this all season long, though I cannot confirm. Because of this they won the Innovation in Control Award.

It’s pretty cool seeing that another team came up with a very similar way to detect the Hot Goal.

Good luck at Champs Poofs!

2468 (Team Appreciate) used a system like this at Bayou last week. This never occurred to us - it’s so simple and elegant. This will be pretty cool to show kids at demos.

We (1708) used a similar method at both NYTV (we got it working about halfway through the competition) and Pittsburgh (where we won Innovation in Control as well). We used the Fiducial module built into RoboRealm.

I’ve attached our RoboRealm script file for anyone who’s curious. To use, first double click on the Fiducial line in the script, then click the Train button, then click Start. You may need to change the path to the directory that the fiducials are stored in if you’re not on 64-bit Windows or you installed in a non-default directory. You’ll also have to modify the Network Tables configuration to match your team number.

If we can get a more comprehensive paper written on it, I’ll post it on CD.

Nice work, Poofs and Devil-Tech (and others). Cool to see other teams using this method as well.

fiducial tracker.zip (1.02 KB)


I LOVE IT!!

This year 2073 used a USB webcam on our bot to track the balls. It was implemented to assist the driver with alignment to balls when they were obstructed from his view or just too far away to easily line up.
We won the Innovation in Control Award at both Regionals we attended because of it. If 254 can share their code, we can share the **LabVIEW receiver** we used, to help any team that can take advantage of it.
Set the IP of the receiver to that of your DS, the port number to the one set on line 72 of the code 254 provided, and set the number of bytes to read to whatever you are sending. In the case of 254’s code, that should be 1.
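
For teams not using LabVIEW, the same one-byte read looks roughly like this in Python. The port number and the meaning of the byte are assumptions here; match them to whatever your sender actually writes.

```python
# Rough Python equivalent of the one-byte read described above (port number and
# the bit layout of the status byte are assumptions; match them to your sender).
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 1180))      # same port the laptop-side code connects to
server.listen(1)

conn, addr = server.accept()
while True:
    data = conn.recv(1)      # "number of bytes to read" = 1
    if not data:
        break                # sender disconnected
    status = data[0]
    left_in = bool(status & 0b10)   # assumed encoding: bit 1 = left hand in box
    right_in = bool(status & 0b01)  # assumed encoding: bit 0 = right hand in box
    print(addr, left_in, right_in)
```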

A quick look at the block diagram will make it obvious what to do.

Please ask any questions here so I can publicly answer them.

PCDuino Receiver.vi (14.4 KB)


So something that might be helpful to add would be SmartDashboard compatibility. That might make it a lot more accessible to teams, because it can easily be added as just a variable on the dashboard. You can get Python bindings for SmartDashboard here:
http://firstforge.wpi.edu/sf/frs/do/viewRelease/projects.robotpy/frs.pynetworktables.2014_4

I don’t have a cRIO on me, but I attached a version that uses the exact same method of communicating as we were using earlier in the season, so it should work. It just has two bool variables (right_in and left_in), and it should work with the standard SmartDashboard VIs or functions and be compatible with all versions.
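
For anyone wiring this up from Python directly, a minimal sketch of publishing the same two booleans over NetworkTables might look like the following. The module and method names follow the 2014-era pynetworktables API and may differ in newer releases; the IP is a placeholder for your own cRIO.

```python
# Sketch of publishing left_in/right_in over NetworkTables instead of a raw socket
# (names follow the 2014-era pynetworktables API and may differ by version;
# the IP address is a placeholder).
from pynetworktables import NetworkTable

NetworkTable.SetIPAddress("10.2.54.2")
NetworkTable.SetClientMode()
NetworkTable.Initialize()
sd = NetworkTable.GetTable("SmartDashboard")

def publish(left_in, right_in):
    # Robot code reads these back with SmartDashboard getBoolean calls or the LabVIEW VIs.
    sd.PutBoolean("left_in", left_in)
    sd.PutBoolean("right_in", right_in)
```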

EDIT: Attaching the file wouldn’t work for some reason, so here is a SkyDrive link:
https://onedrive.live.com/redir?resid=D648460250CFE566!3187&authkey=!AJN3X-AJMsh70Hc&ithint=file%2c.zip

Wouldn’t it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: “Is the left goal hot?” If no, don’t hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
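
A rough sketch of that colored-board variant with OpenCV might look like this, assuming an HSV threshold for a bright green board; the hue range and area cutoff are arbitrary examples.

```python
# Sketch of the colored-board idea: threshold the frame in HSV and treat a large
# enough patch of the target color as "indicator raised" (hue range and area
# cutoff here are arbitrary examples for a bright green board).
import cv2
import numpy as np

LOWER = np.array([45, 100, 100])   # lower HSV bound for the board color
UPPER = np.array([75, 255, 255])   # upper HSV bound

def indicator_raised(frame, min_fraction=0.05):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # Raised if at least min_fraction of the pixels match the board color.
    return cv2.countNonZero(mask) > min_fraction * mask.size
```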

Cool – we actually did the same thing, and fed back to the driver station which balls were detected in the field of view, along with their distance and offset angle from our collector. It seemed to work pretty well, but on the field we didn’t use it for autonomous control, just because of the nature of defensive and high-speed gameplay. Unfortunately we didn’t win a Controls award at either of our regionals. Would love to compare code, though!

This is kind of like what we did for our hot goal tracking. We lined our robot up with the middle of the high goal so that it could only see the targets for the side that we were on. The camera looked at the targets for the first second and a half of autonomous. If that side was hot first, the robot quickly drove forward and shot. If the other side was hot first, the robot very slowly moved forward so that by the time it was in position to shoot, the goals had flipped.

Well, the problem right now is that with the high variance in the timing of the hot goals, that kind of methodology can lead to the robot assuming the right goal is hot when neither goal has lit yet.
The way 254 has done it, if both hands are in their respective boxes, the robot knows that neither hot goal has lit up, and therefore won’t start its autonomous routine until it receives data from the laptop saying that there is a hot goal to shoot balls into.
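
In other words, the robot-side decision can be as simple as the following; this is just an illustration of the logic described above, not 254’s released Java example.

```python
# Illustration of the decision the robot-side code can make from the two flags
# (not 254's released Java example, just the logic described above).
def hot_side(left_in, right_in):
    """Return 'left', 'right', or None if neither goal has lit yet."""
    if left_in and right_in:
        return None          # both hands still in their boxes: keep waiting
    if not left_in:
        return "left"        # operator dropped the left hand: left goal is hot
    return "right"           # operator dropped the right hand: right goal is hot
```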

The goal was to have zero external devices that could run out of batteries, get left in the pit/cart/etc, be dropped, or otherwise malfunction at the worst possible moment. Holding your hands in the box on a static background is a pretty darn repeatable action and satisfied that requirement.

This is easily the simplest, most innovative control method this season.
Kudos to whoever came up with the idea; I can see this becoming the standard in subsequent seasons.

Team 3211, The Y Team from Israel, did the same thing at the Israel regional; it worked in 100% of our matches.
We, however, used facial recognition libraries, so when the camera recognizes a face, it knows there’s a hot goal in front of it.
We later tried printing pictures of Dean and Woodie to use, but they turned out not to be 3-D enough for the face recognition…

We weren’t sure if it was ‘legal’, so we asked the head ref, who approved it. The only relevant Q&A states that a Kinect may be used; we didn’t know if a webcam was OK too…

I’ll talk to our programmers and try to post the code here later on; we used LabVIEW on the robot and Python with OpenCV for the image recognition.

Also, we were told by the FTA that he noticed us sending a lot of info through the field’s bandwidth, and that it might cause problems.
We decided to have the drivers shut down the image recognition at the beginning of teleop, to avoid any possible problems or delays (which we didn’t have, but just to be sure).

Thanks Poofs! It’s an honor seeing that our idea is used by you guys too =]

How/why would this use a lot of bandwidth?