Camera following

Hello CD!
In the downtime while we wait for the competition, our programming team is going to learn to program our cameras (we’ll have to for the competition anyway). The most interesting project we could come up with is to use a camera to get our robot to track and follow something, for example a person wearing a red shirt or a specific coloured ball. I’ve been reading through the “Getting Started with the 2012 FRC Control System” PDF and the WPILib documentation, and haven’t found much information on the topic. If anyone has any ideas, we would love to hear them! Thanks!
~Eric K

I’m not a programmer, so I can’t give you specifics, but teams have had to do just this in several recent games. For example, in Aim High we “had to” track a green light, and in Lunacy we “had to” track a trailer with either red-over-green cloth or green-over-red cloth.

I put “had to” in quotes because it wasn’t mandatory, but better teams could gain a lot from making it work.

So first, look back in the archives; maybe something there will help more than the 2012 stuff.

The short version is:
1. Evaluate the image and find the color you are looking for.
2. Find the centroid of that color ‘blob’.
3. If the centroid is not in the center of the image, turn the camera (or the whole robot) until it is. There’s a rough sketch of that loop below.
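Here’s a minimal C++ sketch of that loop, assuming the 2012-era WPILib vision classes (AxisCamera, ColorImage, BinaryImage, ParticleAnalysisReport). The HSL threshold values, the drive channels, and the turn gain are placeholders you’d tune for your own robot and target.

```cpp
#include "WPILib.h"

class CameraFollowRobot : public SimpleRobot
{
    RobotDrive drive;        // drive motors on PWM 1 and 2 (placeholder channels)
    Threshold redThreshold;  // HSL range for the colour to chase (placeholder values)

public:
    CameraFollowRobot() :
        drive(1, 2),
        redThreshold(0, 30, 100, 255, 100, 255)  // hue, saturation, luminance min/max
    {
    }

    void OperatorControl()
    {
        AxisCamera &camera = AxisCamera::GetInstance();

        while (IsOperatorControl() && IsEnabled())
        {
            // 1. Grab a frame and keep only the pixels inside the colour range.
            ColorImage *image = camera.GetImage();
            BinaryImage *thresholded = image->ThresholdHSL(redThreshold);

            // 2. Get the blobs ("particles"), largest first, and read the
            //    centroid of the biggest one.
            std::vector<ParticleAnalysisReport> *reports =
                thresholded->GetOrderedParticleAnalysisReports();

            if (!reports->empty())
            {
                // center_mass_x_normalized runs from -1 (far left) to +1 (far right).
                double offset = reports->at(0).center_mass_x_normalized;

                // 3. Turn toward the blob; 0.5 is an arbitrary proportional gain.
                drive.ArcadeDrive(0.0, 0.5 * offset);
            }
            else
            {
                drive.ArcadeDrive(0.0, 0.0);  // nothing seen, stop turning
            }

            delete reports;
            delete thresholded;
            delete image;

            Wait(0.05);  // give the camera and cRIO a breather
        }
    }
};

START_ROBOT_CLASS(CameraFollowRobot);
```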

Good luck!

I didn’t work in C++; we worked in Java. But this is a very helpful thing to know how to do before the season starts, so good for you!

The last few years they have been using retro-reflective tape as the vision target, which I hope they continue to do, because that stuff is really amazing. You need a light source mounted at the camera (ring lights work well; AndyMark has them via FIRST Choice, and they can also be purchased from superbrightleds.com), and the camera needs to be calibrated (a low exposure was the biggest help for us), but once it is, the target sticks out like a sore thumb.
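If it helps, here’s a rough sketch of the camera-setup side, using the 2012-era AxisCamera writer methods; the numbers are guesses to tune, and the exposure itself is usually easiest to lock down once through the camera’s web configuration page so it doesn’t auto-adjust mid-match.

```cpp
#include "WPILib.h"

// Rough camera setup for retro-reflective tracking (all values are placeholders).
// A low resolution keeps processing fast, and a dark image makes the ring-light
// reflection about the only bright thing left in the frame.
void ConfigureCamera()
{
    AxisCamera &camera = AxisCamera::GetInstance();
    camera.WriteResolution(AxisCamera::kResolution_320x240);
    camera.WriteBrightness(20);  // darker image so only the lit tape stands out
    // Hold/lower the exposure from the Axis camera's web page as well,
    // so it stays fixed instead of auto-adjusting under field lighting.
}
```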

One thing to consider is that image processing takes a lot of time on the cRIO, which can slow down (and even stop) the rest of the robot code. I would recommend looking into using NetworkTables to do the vision processing on your driver station. We did not do this ourselves, since we only used automated tracking in autonomous (where the timing was less important, as the programmer controls when everything runs), but many teams did use it with great success. The basic principle is that you send the image to the driver station, it parses the image for the x and y position of the center of the target, and those values are sent back to the robot, which acts on them (e.g. turn right, turn left). I don’t know how to implement this myself, but I’m sure someone else on these forums does.
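For what it’s worth, the robot side of that setup can stay pretty small. Here’s a rough sketch assuming a dashboard-side program publishes the target centre to a NetworkTables table named "vision" under a made-up key "targetX" (normalized so 0.0 is dead centre); note that the exact NetworkTable method names (GetDouble vs. GetNumber) changed between WPILib releases, so check the header for your season.

```cpp
#include "WPILib.h"

// Robot-side consumer of vision results computed on the driver station.
// Table name, key name, and gain are all made up for illustration.
void SteerTowardTarget(RobotDrive &drive)
{
    NetworkTable *table = NetworkTable::GetTable("vision");

    // Assumes the dashboard has already published "targetX";
    // 2012-era API -- later releases renamed GetDouble to GetNumber.
    double x = table->GetDouble("targetX");

    // Simple proportional turn toward the target; 0.4 is an arbitrary gain.
    drive.ArcadeDrive(0.0, 0.4 * x);
}
```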

Good luck!

Thanks for your replies! I still don’t have any idea how to program the camera, but I’m trying! It is surprisingly difficult to find code from years past, so acquiring reference code isn’t all that easy :stuck_out_tongue: We’ve been sifting through the WPI library, but without any examples (except for the 2012 vision demo program that comes with the updates) we don’t know where to begin. We’re still working on the project, and any more help would be great! Thanks!

@ekapalka,

While you didn’t note which language you’re using, we’re in the C++ forum, so I’ll assume you’re talking C++. Our team (Team 1967 - Janksters) did vision tracking last year with very good success. Our code is open source on GitHub at https://github.com/bobwolff68/FRCTeam1967 and you’ll find the vision code in a few classes, but the main routine, as an example, is at https://github.com/bobwolff68/FRCTeam1967/blob/master/2012/classes/jankyTargeting.cpp#L156

bob

Thank you so much! This is exactly what I was looking for!

@bob.wolff68
By the way, as of 7:07pm (mt), the second link is dead… maybe just me… Thanks!