View Full Version : Reflective Tape Purpose


mikegrundvig
09-01-2012, 00:39
I'm sorry, this is a REALLY stupid question but I'm going to ask it anyway because I want to understand. What's the purpose of the reflective tape on the backboards? Is the assumption that we can get a camera on the robot to track our distance and orientation to the backboard with the tape? That's my assumption, but in practice that's a very hard problem to solve, and my tiny bit of experience with computer vision makes me cringe to try it. The limited processing power available, combined with the low resolution of the cameras and "swamping" overhead lights, makes this seem very nasty.

In theory, you can determine your angle in relation to the backboard as well as your distance by knowing the size of the rectangles in advance and seeing how much they "deform" and shrink. Your distance is determined by how large they are, and your angle by how much they have skewed. This seems feasible, but my experience with computer vision is with fiducial markers at very short distances. Even a sheet-of-paper-sized marker only works for about 8 feet on my webcam, and that's in pretty good lighting.
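For the record, the basic pinhole-camera math behind that is simple. Here's a rough Python sketch; the target width, focal length in pixels, and blob widths below are made-up numbers, not anything from the manual:

```python
import math

def estimate_distance(real_width, focal_px, pixel_width):
    # Pinhole model: apparent size shrinks linearly with distance,
    # so distance = real width * focal length (in pixels) / apparent width.
    return real_width * focal_px / pixel_width

def estimate_skew_deg(pixel_width, head_on_pixel_width):
    # A rectangle viewed off-axis shrinks horizontally by roughly cos(angle),
    # so the skew angle is acos(observed width / expected head-on width).
    ratio = min(pixel_width / head_on_pixel_width, 1.0)
    return math.degrees(math.acos(ratio))

# Hypothetical numbers: a 24 in wide target, ~700 px focal length,
# and an 84 px wide blob put us about 200 inches out.
distance = estimate_distance(24.0, 700.0, 84.0)
```

Whether the blob measurements are stable enough for the skew estimate to be usable is exactly the hard part, of course.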

Is anyone planning on really trying to do computer vision for those targets? Is there some trick to making them show up better? I'd love anyone's thoughts on this. Thanks!

-Mike

KrazyCarl92
09-01-2012, 00:45
Remember that this is supposed to be retro-reflective tape, meaning that light will be reflected back to the source. This means that maybe shining some sort of light may allow you to better pick up the rectangles and distinguish the shapes from the rest of the image. In any case, good luck!

mikegrundvig
09-01-2012, 00:47
Remember that this is supposed to be retro-reflective tape, meaning that light will be reflected back to the source. This means that maybe shining some sort of light may allow you to better pick up the rectangles and distinguish the shapes from the rest of the image. In any case, good luck!

HA! I had something in my post about adding a big IR flood to the robot and using an IR pass filter on the camera to fix the lighting problem. Sounds like I might have been headed the right direction. Thanks!

-Mike

Andrew Lawrence
09-01-2012, 00:47
Remember that this is supposed to be retro-reflective tape, meaning that light will be reflected back to the source. This means that maybe shining some sort of light may allow you to better pick up the rectangles and distinguish the shapes from the rest of the image. In any case, good luck!

Are we allowed flashlights on our robot? To point at the retro reflective tape, not the other drivers.

davidthefat
09-01-2012, 00:49
Are we allowed flashlights on our robot? To point at the retro reflective tape, not the other drivers.

I don't think that it is required to see it. The boxes have black outlines. That should be sufficient.

ratdude747
09-01-2012, 00:51
Are we allowed flashlights on our robot? To point at the retro reflective tape, not the other drivers.

It was last year; in fact, it was specifically suggested at kickoff. I would be inclined to say yes; I saw nothing in the manual specifically barring non-concentrated light sources.

Grim Tuesday
09-01-2012, 01:11
It was last year; in fact, it was specifically suggested at kickoff. I would be inclined to say yes; I saw nothing in the manual specifically barring non-concentrated light sources.

Well, you would have to modify them so that they don't have batteries onboard, but yes, they are legal.

We made an LED array last year to light up the retro-reflectors. It worked well, but due to encoder issues it never saw any use.

Tom Line
09-01-2012, 01:38
First, are you sure this is retroreflective tape? It's just called reflective tape in the KOP list.

Secondly, if you're considering IR, you'll have to replace the entire lens of the provided webcam. It has a film that filters all IR out.

I'm looking forward to playing with this. We had a horrid time with last year's vision targets.

mikegrundvig
09-01-2012, 01:43
Secondly, if you're considering IR, you'll have to replace the entire lens of the provided webcam. It has a film that filters all IR out.

Yup, that's what I was thinking. Just need to see if the rules allow it. On many cameras it's as simple as removing a filter. I've done it to a pair of webcams in the past and it worked great.

I'm looking forward to playing with this. We had a horrid time with last year's vision targets.

And that's what scares me. My experience with getting robots to "see" things has been consistently poor. I'm really unsure if it's even worth the effort. Did any team get vision targets working well?

-Mike

DjMaddius
09-01-2012, 06:59
If you use the example tracker you can easily modify it to your needs. I believe tracking will win or lose the game this year. You either auto-track the entire time so you can make baskets 90% of the time, or you don't track and maybe get 10% of the baskets. It's going to be a difficult feat for everyone, but every year there is a win-or-lose situation and I believe that's this year's.

Personally, we are doing complete auto tracking, trajectory planning and all in the code. Going for an 80% scoring throw from anywhere on the field. But really this depends a LOT on the mechanical side too. They have to get the thrower throwing consistently before I can do any math to predict where it will land.
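For reference, the math behind that kind of trajectory planning is standard no-drag projectile motion: given the horizontal distance and height difference to the hoop and a fixed launch angle, you can solve for the required launch speed. A sketch in Python; the unit choice and the drag-free model are assumptions (real foam balls will need an empirical fudge factor):

```python
import math

G_IN = 386.1  # gravity in in/s^2, since field dimensions are given in inches

def launch_speed(d, h, theta_deg, g=G_IN):
    """Speed needed to hit a point d downrange and h above the launch point,
    launching at theta_deg above horizontal, ignoring drag.
    From y = x*tan(theta) - g*x^2 / (2*v^2*cos^2(theta)) solved for v."""
    theta = math.radians(theta_deg)
    denom = 2.0 * math.cos(theta) ** 2 * (d * math.tan(theta) - h)
    if denom <= 0:
        # The ball would have to come down through the target from this angle.
        raise ValueError("target unreachable at this launch angle")
    return math.sqrt(g * d * d / denom)
```

The consistency problem mentioned above is the real catch: this only helps if the thrower actually delivers the commanded speed shot after shot.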

Greg McKaskle
09-01-2012, 07:40
There should be a white paper on the NI site, but I haven't been able to find where they put it. Fortunately, Brad also posted it to FirstForge in the Documents sections. It is called 2012 Vision White Paper.

First off, yes, it is retroreflective tape, micro-sphere based, and quite bright. That means that if you use a ring-light, your camera will receive a rather isolated source of light that you control. The FIRST field is a pretty harsh and chaotic arena for vision experiments, but the end of the field where the drivers stand is not harshly lit or the drivers would be staring into the lights. Clearly many frequencies work with retro-reflection, but I'm not sure about its response across the spectrum including IR. Additionally, while it is possible and pretty easy to replace the lens in the Axis 206, the M1011 is an integrated lens. As a bonus, it is rather hard to see IR, therefore, harder to troubleshoot, inspect, and debug. So, my suggestion would be to go with team colors in the form of an LED ring-light. Or go with small LED flashlights on either side of the camera.

The example code that ships with LV doesn't attempt to compute angle information, but does include distance calculations. The code includes a color mask and a brightness mask with an optional Open operation and everything else is done with binary particles. The paper also discusses edge approaches.
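Stripped of the library calls, the mask-then-particles idea is just thresholding followed by connected-component labeling. A toy illustration in plain Python (the shipped example uses NI Vision primitives, so everything here is only the concept, not the actual API):

```python
def find_particles(image, threshold):
    """Threshold a grayscale image (list of lists of pixel values) and
    return the pixel count of each connected bright region, largest first.
    Uses 4-connectivity and an explicit stack for the flood fill."""
    h, w = len(image), len(image[0])
    mask = [[px >= threshold for px in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one particle, counting its pixels.
                stack, count = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(count)
    return sorted(sizes, reverse=True)
```

In practice you would keep bounding boxes rather than just sizes, and the Open operation mentioned above exists to knock out the tiny one-pixel particles before this step.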

One final wrinkle to throw into the mix is that there are enough communication paths to be able to do some/all of the vision processing on the laptop and send information back to the robot.

Greg McKaskle

DjMaddius
09-01-2012, 09:14
There should be a white paper on the NI site, but I haven't been able to find where they put it. Fortunately, Brad also posted it to FirstForge in the Documents sections. It is called 2012 Vision White Paper.

First off, yes, it is retroreflective tape, micro-sphere based, and quite bright. That means that if you use a ring-light, your camera will receive a rather isolated source of light that you control. The FIRST field is a pretty harsh and chaotic arena for vision experiments, but the end of the field where the drivers stand is not harshly lit or the drivers would be staring into the lights. Clearly many frequencies work with retro-reflection, but I'm not sure about its response across the spectrum including IR. Additionally, while it is possible and pretty easy to replace the lens in the Axis 206, the M1011 is an integrated lens. As a bonus, it is rather hard to see IR, therefore, harder to troubleshoot, inspect, and debug. So, my suggestion would be to go with team colors in the form of an LED ring-light. Or go with small LED flashlights on either side of the camera.

The example code that ships with LV doesn't attempt to compute angle information, but does include distance calculations. The code includes a color mask and a brightness mask with an optional Open operation and everything else is done with binary particles. The paper also discusses edge approaches.

One final wrinkle to throw into the mix is that there are enough communication paths to be able to do some/all of the vision processing on the laptop and send information back to the robot.

Greg McKaskle

Thanks for the information. I can't seem to find the paper though.
http://www.chiefdelphi.com/media/papers/

Is this supposed to be the correct location of the paper?

RufflesRidge
09-01-2012, 09:16
Thanks for the information. I can't seem to find the paper though.
http://www.chiefdelphi.com/media/papers/

Is this supposed to be the correct location of the paper?

It can be found here:

http://firstforge.wpi.edu/sf/docman/do/listDocuments/projects.wpilib/docman.root

Greg McKaskle
09-01-2012, 10:43
I hadn't thought to upload it there. It is on its way.

Greg McKaskle

gnunes
11-01-2012, 12:50
Greg: what is the lens thread on the Axis 206? I wanted to use an IR light last year, but was stumped by the filter in the lens. If you have any other specs that would help locate a reasonable substitute lens, those would be helpful too...

MagiChau
11-01-2012, 13:34
In the FIRST Choice there is an LED light ring you could use to reflect off the tape. http://www.andymark.com/Ring-light-fc12-60-p/fc12-60.htm

MysterE
11-01-2012, 14:19
Has anyone tested how far away the retro-tape will still 'reflect' the ring light from (i.e., from the other end of the field)?

Rizner
11-01-2012, 18:14
Has anyone tested it with the Kinect? I'd imagine it works well with the Kinect IR emitter (there's an emitter on the Kinect, as well as an IR camera and an RGB camera), but haven't had the chance to check it myself.

Greg McKaskle
11-01-2012, 18:49
I believe this is the type of lens I purchased a few years ago; I wasn't too careful with my order and wound up without an IR filter. The result was very washed-out colors.

http://www.edmundoptics.com/products/displayproduct.cfm?productid=2196

The lens thread is I think called a 7mm lens mount. I believe the 206 lens has a 4mm focal length.

Greg McKaskle

gnunes
11-01-2012, 22:36
Greg-

Thanks! I had already found that page, and was guessing that it was a match. Looks like the mount is 12 mm (M12 X 0.5).

On a related topic: I would like to start my programming team with image processing, but we don't have a robot, or a practice field, or etc., etc.. But it looks to me like you have a pile of images of your test field. Would you be willing to post them somewhere in this forum or some other suitable place so we can all download and start practicing our image processing chops?

Cheers,
-Geoff Nunes

Greg McKaskle
11-01-2012, 22:46
Ahh, yes. I was doing that from memory and see the M12 now. As for the photos, I would like to check with the folks who that field belongs to first. In the meantime, can someone else put up photos?

Greg McKaskle

robokiller
12-01-2012, 01:28
I was playing around with the reflectors and the Kinect and found that at any distance the reflectors appeared to be out of range on the depth finder. Is it possible that the IR emitter and receiver are just far enough apart that the IR light is reflected back to the IR emitter and not picked up by the IR receiver?

-- Jaxon Weis

RufflesRidge
12-01-2012, 04:38
I was playing around with the reflectors and the Kinect and found that at any distance the reflectors appeared to be out of range on the depth finder. Is it possible that the IR emitter and receiver are just far enough apart that the IR light is reflected back to the IR emitter and not picked up by the IR receiver?

-- Jaxon Weis

The Kinect "Out of Range" can happen in both directions. Having an object too close, or washing the IR grid out with another IR source will give you "out of range" readings on the depth feed. Using OpenKinect or another package which can access the raw IR image may result in more success.

dellagd
12-01-2012, 06:59
If you use the example tracker you can easily modify it to your needs. I believe tracking will win or lose the game this year. You either auto-track the entire time so you can make baskets 90% of the time, or you don't track and maybe get 10% of the baskets. It's going to be a difficult feat for everyone, but every year there is a win-or-lose situation and I believe that's this year's.

Personally, we are doing complete auto tracking, trajectory planning and all in the code. Going for an 80% scoring throw from anywhere on the field. But really this depends a LOT on the mechanical side too. They have to get the thrower throwing consistently before I can do any math to predict where it will land.

I want my team heading for the same thing. I think it will be a must, as I can't see a human aiming and judging distance from 40 feet away being easy.

mikegrundvig
12-01-2012, 09:42
I think it will be a must, as I can't see a human aiming and judging distance from 40 feet away being easy.
This is a source of disagreement between me and one of the mentors on my team. I completely agree with you. If we had many shots to try it would be one thing but with only three balls I feel the robot has to be able to either take the shot itself, or provide distance data accurately for a human to line the shot up. I think the only way for a human to do it without serious camera assistance is if they design the bot to shoot from set positions and just work themselves into those positions. Trying to eyeball both shot angle and power from across the court is going to be very hard.

-Mike

dellagd
12-01-2012, 17:10
This is a source of disagreement between me and one of the mentors on my team. I completely agree with you. If we had many shots to try it would be one thing but with only three balls I feel the robot has to be able to either take the shot itself, or provide distance data accurately for a human to line the shot up. I think the only way for a human to do it without serious camera assistance is if they design the bot to shoot from set positions and just work themselves into those positions. Trying to eyeball both shot angle and power from across the court is going to be very hard.

-Mike

What part of your team are you on? Programming, Drive systems or Gaming systems?

mikegrundvig
12-01-2012, 17:28
What part of your team are you on? Programming, Drive systems or Gaming systems?

I'm one of the mentors. This discussion came up outside the students' work, just between us as we were talking about the challenge.

Izz1324
12-01-2012, 17:51
Can the sensor that sees the reflective tape judge the distance and height of the tape?

CastleBravo
12-01-2012, 21:30
Can the sensor that sees the reflective tape judge the distance and height of the tape?

Yes, the camera (with proper image processing) can determine the distance, direction, height, etc. of the vision target if the retro-reflective tape on the target is properly illuminated.
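As a rough sketch of the direction part: once image processing gives you the target blob's center, the horizontal bearing follows from the camera's field of view. The FOV number below is made up for illustration; check the spec of whichever camera you actually use:

```python
def target_bearing_deg(blob_center_x, image_width_px, fov_deg):
    # Horizontal angle from the camera's optical axis to the target center,
    # assuming pixels map linearly to angle (a fair approximation for a
    # narrow field of view). Positive means the target is to the right.
    offset_px = blob_center_x - image_width_px / 2.0
    return offset_px * fov_deg / image_width_px

# Hypothetical 640 px wide image with a 47 degree horizontal FOV:
bearing = target_bearing_deg(480.0, 640.0, 47.0)
```

Distance then comes from the apparent size or height of the target in the image, as discussed earlier in the thread.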

dellagd
13-01-2012, 06:46
Yes, the camera (with proper image processing) can determine the distance, direction, height, etc of the vision target if the retro-reflective tape on the target is properly illuminated.

By using the slant of its sides while looking at it from an angle? I hope that method can be accurate.

Lucie365
13-01-2012, 20:02
We are programming in C++ and I have not found any example for using the camera to locate the rectangular targets. Will there be some sample code soon?