I’m sorry, this is a REALLY stupid question, but I’m going to ask it anyway because I want to understand. What’s the purpose of the reflective tape on the backboards? Is the assumption that we can get a camera on the robot to track our distance and orientation to the backboard using the tape? That’s my assumption, but in practice that’s a very hard problem to solve, and my tiny bit of experience with computer vision makes me cringe at trying it. The limited processing power available, combined with the low resolution of the cameras and the “swamping” overhead lights, makes this seem very nasty.
In theory, you can determine your angle relative to the backboard, as well as your distance, by knowing the size of the rectangles in advance and seeing how much they “deform” and shrink: your distance is determined by how large they appear, and your angle by how much they have skewed. That seems feasible, but my computer vision experience is with fiducial markers at very short distances. Even a sheet-of-paper-sized marker only works out to about 8 feet on my webcam, and that’s in pretty good lighting.
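If my mental model is right, the geometry is just the pinhole-camera relationship. Here’s a rough Python sketch of what I mean; the target dimensions, image width, and field of view are placeholder numbers, not anything official:

```python
import math

# Placeholder numbers -- swap in the real target size and your camera's
# actual resolution and field of view.
TARGET_WIDTH_IN  = 24.0    # assumed width of the taped rectangle
TARGET_HEIGHT_IN = 18.0    # assumed height of the taped rectangle
IMAGE_WIDTH_PX   = 640.0   # camera image width
HORIZ_FOV_DEG    = 47.0    # camera horizontal field of view

# Focal length in pixels, derived from the field of view (pinhole model).
FOCAL_PX = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(HORIZ_FOV_DEG / 2.0))

def estimate_distance_in(pixel_height):
    """Distance from apparent height; height shrinks with distance but
    barely changes when you move off to the side, so it's the safer axis."""
    return TARGET_HEIGHT_IN * FOCAL_PX / pixel_height

def estimate_skew_deg(pixel_width, pixel_height):
    """Approach angle from how much the rectangle has squished sideways:
    observed aspect ratio ~= true aspect ratio * cos(angle)."""
    true_aspect = TARGET_WIDTH_IN / TARGET_HEIGHT_IN
    ratio = (pixel_width / pixel_height) / true_aspect
    ratio = max(-1.0, min(1.0, ratio))   # clamp against measurement noise
    return math.degrees(math.acos(ratio))
```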
Is anyone planning on really trying to do computer vision for those targets? Is there some trick to making them show up better? I’d love anyone’s thoughts on this. Thanks!
Remember that this is supposed to be retro-reflective tape, meaning that light is reflected straight back toward the source. So shining some sort of light from near the camera may let you pick up the rectangles much more easily and distinguish the shapes from the rest of the image. In any case, good luck!
HA! I had something in my post about adding a big IR flood to the robot and using an IR-pass filter on the camera to fix the lighting problem. Sounds like I might have been headed in the right direction. Thanks!
It was allowed last year; in fact, it was specifically suggested at kickoff. I would be inclined to say yes; I saw nothing in the manual specifically barring non-concentrated light sources.
Yup, that’s what I was thinking. Just need to see if the rules allow it. On many cameras it’s as simple as removing a filter. I’ve done it to a pair of webcams in the past and it worked great.
And that’s what scares me. My experience with getting robots to “see” things has been consistently poor. I’m really unsure whether it’s even worth the effort. Did any team get vision targets working well?
If you use the example tracker, you can easily modify it to your needs. I believe tracking will win or lose the game this year. Either you auto-track the entire time so you can make baskets 90% of the time, or you don’t track and maybe get 10% of the baskets. It’s going to be a difficult feat for everyone, but every year there is a win-or-lose factor, and I believe that’s this year’s.
Personally, we are doing complete auto tracking, trajectory planning and all, in the code. We’re going for an 80% scoring throw from anywhere on the field. But this really depends a LOT on the mechanical side too: they have to get the thrower throwing consistently before I can do any math to predict where the ball will land.
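To give a sense of the math involved, here’s a back-of-the-envelope sketch, ignoring drag and spin, of solving simple projectile motion for the launch speed that drops the ball through a hoop at a known range. Every number in it is a placeholder, and a real shooter will need empirical corrections on top of this:

```python
import math

G = 32.2  # gravity, ft/s^2

def required_launch_speed(range_ft, hoop_height_ft, launch_height_ft,
                          launch_angle_deg):
    """Launch speed (ft/s) so an ideal drag-free parabola passes through a
    point range_ft away and hoop_height_ft high.  Returns None if the chosen
    angle cannot reach that point at all."""
    theta = math.radians(launch_angle_deg)
    rise = hoop_height_ft - launch_height_ft
    denom = 2.0 * math.cos(theta) ** 2 * (range_ft * math.tan(theta) - rise)
    if denom <= 0:
        return None
    return math.sqrt(G * range_ft ** 2 / denom)

# Hypothetical numbers: shooting from 12 ft out, 2 ft launch height,
# at a fixed 60-degree angle, toward a hoop 8 ft up.
print(required_launch_speed(12.0, 8.0, 2.0, 60.0))
```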
There should be a white paper on the NI site, but I haven’t been able to find where they put it. Fortunately, Brad also posted it to FirstForge in the Documents section. It is called 2012 Vision White Paper.
First off, yes, it is retroreflective tape, micro-sphere based, and quite bright. That means that if you use a ring-light, your camera will receive a rather isolated source of light that you control. The FIRST field is a pretty harsh and chaotic arena for vision experiments, but the end of the field where the drivers stand is not harshly lit, or the drivers would be staring into the lights.

Clearly many frequencies work with retro-reflection, but I’m not sure about the tape’s response across the spectrum, including IR. Additionally, while it is possible and pretty easy to replace the lens in the Axis 206, the M1011 has an integrated lens. And since IR is hard to see by eye, it is also harder to troubleshoot, inspect, and debug.

So my suggestion would be to go with team colors in the form of an LED ring-light, or with small LED flashlights on either side of the camera.
The example code that ships with LV doesn’t attempt to compute angle information, but it does include distance calculations. The code applies a color mask and a brightness mask, with an optional Open operation, and everything else is done with binary particles. The paper also discusses edge approaches.
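For anyone who wants to prototype the same flow off the robot, here is a rough Python/OpenCV approximation of that pipeline (the shipped example itself is LabVIEW). The HSV bounds assume a green LED ring-light and are guesses you would need to tune on real images:

```python
import cv2
import numpy as np

def find_targets(bgr_image):
    """Return bounding boxes (x, y, w, h) of bright, ring-light-colored blobs."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Color + brightness mask: keep only bright, saturated green-ish pixels.
    mask = cv2.inRange(hsv, (45, 80, 120), (90, 255, 255))

    # Optional Open (erode then dilate) to knock out single-pixel noise.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # "Binary particles": connected blobs left in the mask.  findContours
    # returns 2 or 3 values depending on OpenCV version, so index from the end.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]

    targets = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= 200:          # ignore tiny specks
            targets.append((x, y, w, h))
    return targets
```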
One final wrinkle to throw into the mix is that there are enough communication paths to be able to do some/all of the vision processing on the laptop and send information back to the robot.
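As one minimal sketch of that idea, the laptop-side code could push its results to the robot over a plain UDP socket. The address, port, and packet layout below are made up for illustration and would need to match whatever the robot-side code expects:

```python
import socket
import struct

ROBOT_IP   = "10.0.0.2"   # placeholder -- use your robot's actual address
ROBOT_PORT = 1130         # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(distance_ft, angle_deg, target_seen):
    # Two floats plus a flag byte; the robot side must unpack the same layout.
    packet = struct.pack(">ffB", distance_ft, angle_deg, 1 if target_seen else 0)
    sock.sendto(packet, (ROBOT_IP, ROBOT_PORT))

# e.g. send_target(11.5, -3.2, True)
```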
Greg: what is the lens thread on the Axis 206? I wanted to use an IR light last year, but was stumped by the filter in the lens. If you have any other specs that would help locate a reasonable substitute lens, those would be helpful too…
Has anyone tested it with the Kinect? I’d imagine it works well with the Kinect IR emitter (there’s an emitter on the Kinect, as well as an IR camera and an RGB camera), but haven’t had the chance to check it myself.
I believe this is the type of lens I purchased a few years ago. I wasn’t too careful with my order and wound up with one without an IR filter; the result was very washed-out colors.
I think the lens thread is called a 7mm lens mount. I believe the 206 lens has a 4mm focal length.
Thanks! I had already found that page, and was guessing that it was a match. Looks like the mount is 12 mm (M12 X 0.5).
On a related topic: I would like to start my programming team with image processing, but we don’t have a robot or a practice field, etc., etc. It looks to me like you have a pile of images of your test field, though. Would you be willing to post them somewhere in this forum, or some other suitable place, so we can all download them and start practicing our image-processing chops?