For this year's game, it seems necessary to locate the backboards programmatically in both the "hybrid" and tele-op periods. I found a paper from Team 1511 that helps determine a robot's position relative to a vision target (see:
http://www.chiefdelphi.com/media/papers/2324).
I feel like most of the math and reasoning behind that paper would translate well to this scenario too. The only problem is accurately finding the rectangles in an image using the camera or Kinect. I was wondering if any teams had tips on how to do this with the camera, since we haven't really tried camera tracking since we played with CircleTrackerDemo two years ago. Unfortunately, most of the CircleTrackerDemo code seems to be specific to ellipses. Any ideas on how to do rectangles with a camera? Perhaps some code we can use?
If that isn't possible, an alternative would be using the Kinect, though I'm fairly clueless when it comes to tracking shapes (other than human shapes) with the Kinect.
Thank you for your help, and I appreciate any input you may have.