Re: Camera following
We didn't work in C++; we worked in Java. But this is a very helpful thing to know how to do before the season starts, so good for you!
The last few years they have been using retro-reflective tape as the vision target, which I hope they continue to do, as that stuff is really amazing. You need a light source mounted at the camera (ring lights work well; AndyMark has them via FIRSTChoice, and they can also be purchased from superbrightleds.com), and the camera needs to be calibrated (low exposure was the biggest help for us), but once it is, the target will stick out like a sore thumb.
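As a rough illustration of what "sticking out like a sore thumb" buys you, here's a minimal sketch of thresholding a frame for the lit-up tape using OpenCV's Java bindings. This is not what we ran on our robot; the HSV range, the file name, and the class name are all made-up placeholders you'd tune for your own ring light color and camera exposure.

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class TargetThreshold {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Frame captured with a low exposure so only the lit tape is bright.
        Mat frame = Imgcodecs.imread("frame.png");  // placeholder file name

        // Convert to HSV and keep only pixels in the (assumed) green ring-light range.
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

        // Find the blobs that survive the threshold; the biggest one is the target.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        Rect best = null;
        for (MatOfPoint contour : contours) {
            Rect r = Imgproc.boundingRect(contour);
            if (best == null || r.area() > best.area()) {
                best = r;
            }
        }

        if (best != null) {
            double centerX = best.x + best.width / 2.0;
            double centerY = best.y + best.height / 2.0;
            System.out.println("Target center: " + centerX + ", " + centerY);
        }
    }
}
```

With a low exposure and a bright ring light, almost nothing but the tape survives the threshold, which is why the calibration step matters so much.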
One thing to consider is that image processing takes a lot of time on the cRIO, which can slow down (and even stop) the rest of the robot. I would recommend looking into using Network Tables to do the vision processing on your driver station. We did not do this ourselves, since we only used automated tracking in autonomous (where the timing was less important, as the programmer controls when everything runs), but many teams used this approach with great success. The basic principle is that the robot sends the image to the driver station, the driver station parses it for the x and y position of the center of the target, and those values are sent back to the robot, which acts on them (e.g. turn right, turn left); a sketch of the robot side is below. I do not know the details of implementing the driver station side, but I'm sure someone else on these forums does.
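To make the last part concrete, here's a rough sketch of the robot-side half, assuming the driver station program publishes the target's center x into a Network Tables table called "vision" under the key "centerX" (both names are made up), and using the current WPILib NetworkTables API rather than the cRIO-era one. The image width, deadband, and turn speed are placeholder values.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionTurnHelper {
    // Placeholder values -- adjust for your camera and drivetrain.
    private static final double IMAGE_WIDTH = 320.0;
    private static final double DEADBAND_PIXELS = 10.0;
    private static final double TURN_SPEED = 0.3;

    private final NetworkTable visionTable =
            NetworkTableInstance.getDefault().getTable("vision");

    /**
     * Returns a turn command based on where the driver station says the
     * target's center is. Positive means turn right, negative means turn left.
     */
    public double getTurnCommand() {
        // -1.0 is a sentinel meaning "no target published yet".
        double centerX = visionTable.getEntry("centerX").getDouble(-1.0);
        if (centerX < 0) {
            return 0.0; // no target: don't turn
        }

        double error = centerX - IMAGE_WIDTH / 2.0;
        if (Math.abs(error) < DEADBAND_PIXELS) {
            return 0.0; // close enough to centered
        }
        return error > 0 ? TURN_SPEED : -TURN_SPEED;
    }
}
```

You'd call getTurnCommand() each loop in autonomous (or teleop) and feed the result into your drive code.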
Good luck!