Successful Computer Vision

So we just received our new Axis camera and LED rings today and I am extremely impressed. With only a 60 mm LED ring, we were able to illuminate the retro-reflective tape from halfway across the shop. At this point, I am very confident that our robot will have some sort of computer vision this season.

Here is our current algorithm: we are using the NI Vision Assistant to create a VI for us, so here are the filters applied in the Vision Assistant (see the sketch after this list):

  1. Original Image (acquired from the Axis camera)
  2. Color Threshold using HSL values acquired from sampling with the histogram tool. This creates a rough outline of the square with some noise.
  3. Adv. Morphology - Convex Hull: fills in the rectangle
  4. Particle Filter - Area: removes any small noise, leaving the 4 rectangles
  5. Particle Analysis: outputs any information that you want to know about the rectangle(s)
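
For anyone not working in LabVIEW, here is a rough OpenCV (Python) sketch of the same five steps. This is not the original VI: the HLS threshold values and the minimum-area cutoff are placeholders you would sample yourself, and note that OpenCV orders the channels H, L, S rather than the H, S, L shown by the histogram tool.

```python
# Rough OpenCV equivalent of the Vision Assistant steps above (assumes OpenCV 4).
# All threshold numbers are placeholders -- sample your own image as described.
import cv2
import numpy as np

def find_targets(bgr_image,
                 lower_hls=(60, 100, 100),   # placeholder H, L, S minimums
                 upper_hls=(100, 255, 255),  # placeholder H, L, S maximums
                 min_area=500):              # placeholder noise cutoff, in pixels
    # 2. Color threshold in HLS (OpenCV's ordering of the HSL space)
    hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, np.array(lower_hls), np.array(upper_hls))

    # 3. Convex hull of each blob fills in the hollow rectangle outline
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(mask)
    for c in contours:
        cv2.fillConvexPoly(filled, cv2.convexHull(c), 255)

    # 4. Particle filter by area removes the small noise blobs
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    particles = [c for c in contours if cv2.contourArea(c) >= min_area]

    # 5. Particle analysis: report whatever you want about each rectangle
    results = []
    for c in particles:
        x, y, w, h = cv2.boundingRect(c)
        results.append({"x": x, "y": y, "width": w, "height": h,
                        "area": cv2.contourArea(c),
                        "center": (x + w / 2.0, y + h / 2.0)})
    return results
```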

Thank you, I will definitely use this thread as a reference when creating ours!

I will post mine for detecting the basketballs here later when I can :slight_smile:

Does the ambient light in the shop (bright or dim) have much effect on the LED/retro-tape readings? In 2009, the difference in lighting between our room and the stadiums blew us out of the water.

Last year we had the same experiences with the reflective tape. Worked like a charm in my basement, worked okay in the shop and the practice field, hardly worked at all on the FRC field. Also I didn’t reference the tape until I was within 4 ft, and it was only to center on the X axis. I was so excited to see how easy it was to get it working at the beginning of last season, only to be disappointed by the actual results.

This year I believe both depth and RGB can play together well. Do a red or blue filter based on team color, then intersect with the depth image. I believe you will be able to see the red and blue squares very well using this strategy.
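
A minimal sketch of that intersection idea, assuming OpenCV/numpy, a BGR color frame, and a depth image already registered to it (e.g. from a Kinect). The hue bands and the range cutoff are placeholder values, not tuned numbers.

```python
# Intersect a team-color mask with a depth ("close enough") mask.
import cv2
import numpy as np

def ball_mask(bgr, depth_mm, team="red", max_range_mm=3000):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    if team == "red":
        # Red wraps around hue 0, so combine two bands
        color = (cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) |
                 cv2.inRange(hsv, (170, 120, 80), (180, 255, 255)))
    else:  # blue
        color = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))

    # Keep only pixels that are both the team color and within range
    # (a depth value of 0 usually means "no reading")
    near = cv2.inRange(depth_mm, 1, max_range_mm)
    return cv2.bitwise_and(color, near)
```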

I am expecting a significant difference in the lighting between our shop (fluorescent lights) and the can lights in the arena. I expect that we will use some of the field time to adjust our settings, and that we will have to adjust the white balance down and tweak our HSL values for the filter.

Vision systems always have issues with lighting. My co-workers still struggle with things like:

  • Power fluctuations causing light intensity modulation.
  • Skylights in factories that let in light at the wrong angle for 1 hour a day.
  • Random people using flashlights/lasers/camera flashes at the wrong time.

You will have to have controls right on your dashboard to manipulate various settings. It shouldn’t be difficult, but it will take some time. If you can get some practice rounds in before/during the competitions on the actual field, this is ideal. You will also need a method of switching between settings (e.g. an enum: [arena|practice|shop|other random place|…]). The more practice you have setting up the controls for your vision system, the faster you will be able to get it right. With enough practice, you may even come up with an algorithm that does the same thing in varying light conditions.
Good luck! I look forward to any examples that you guys post.
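
One way that enum-of-venues idea might look in Python. The venue names mirror the suggestion above, and every number in the presets is a placeholder to be tuned on site.

```python
# Named presets per venue, selected from the dashboard. Placeholder values only.
from enum import Enum

class Venue(Enum):
    ARENA = "arena"
    PRACTICE = "practice"
    SHOP = "shop"

PRESETS = {
    Venue.ARENA:    {"white_balance": "hold",        "exposure": 40,
                     "hsl_low": (60, 100, 100), "hsl_high": (100, 255, 255)},
    Venue.PRACTICE: {"white_balance": "hold",        "exposure": 60,
                     "hsl_low": (55, 90, 90),   "hsl_high": (105, 255, 255)},
    Venue.SHOP:     {"white_balance": "fluorescent", "exposure": 80,
                     "hsl_low": (50, 80, 80),   "hsl_high": (110, 255, 255)},
}

def settings_for(venue: Venue) -> dict:
    """Return the preset the vision code should load for this venue."""
    return PRESETS[venue]
```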

The latter portion of the white paper discusses setting up the camera so that you are processing good images. Experiment with the white balance, exposure, and brightness to find a setting that is consistent between your shop and outdoors.

According to Wikipedia, an overcast day is about 1000 lux, and I believe the lighting for the FIRST field is somewhere between 85 and 95 foot-candles. Since one foot-candle is roughly 10.76 lux, that works out to roughly 915 to 1020 lux, so these are comparable levels of light.

It is also typical for the FIRST field to have the lights aimed into the field from each long edge. They do not shine directly into the eyes of the drivers at the joysticks, so the illumination drops off rapidly there.

If you have questions about what a camera setting does, you can look into the WPILib documentation, the Axis documentation, or perhaps ask here.

Greg McKaskle

Question: would this be the sort of tape required for the vision target? I’m not sure of the exact specifications that the vision target requires, so I really need to have this verified.

We are attempting to solve the problem by looking at the black rectangle rather than the reflective one. We have pretty good results in most lighting conditions.

You should have a piece of the tape in the kit of parts. If you decide to purchase more, I’m almost certain that is the right product.

Greg McKaskle

What are the LED rings you speak of? Were they in the kit of parts? If not, do you mind sharing the part number?

TIA

Go to superbrightleds and look up 60 mm “Angel Eyes”.

What is the legality of using super-bright LEDs? I can’t find anything in the manual. When do the LEDs become a nuisance to the game and get called out by the referees?

Has anyone seen a difference between tracking the reflective tape vs. the black rectangle? Does anyone think one will be better than the other? Will the stadium lighting affect tracking the black rectangle?

That came up between a few of our team members when we were discussing which of the two tapes to track. We are unsure as of yet :confused:

I suggest trying to use LEDs so that the retroreflective tape is put to good use. The lighting onboard the robot, along with the directional reflection of the tape, will give you the best results. Relying on just the dim colour patterns will probably yield bad results. The difference between using a light source on the robot and only ambient light was remarkable when we tested this last year in our fairly bright office.

Either way, you can always use both methods, mounting a light source, and using other patterns for recognition. If you come up with a measure of quality of your vision recognition, your program could automatically choose the best fit. The extra cRIO resources consumed can be limited by using a slightly lower framerate. I believe that we managed to have a few independent algorithms running on the cRIO with minimal performance issues.
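
A sketch of the "automatically choose the best fit" idea. It assumes each method returns some result plus a quality score you define; the detector and scoring functions here are hypothetical placeholders for your reflective-tape and black-rectangle routines.

```python
# Run several detection methods and keep the highest-scoring result.
def best_detection(image, detectors):
    """detectors: list of (detect_fn, score_fn) pairs.
    detect_fn(image) -> result or None
    score_fn(result) -> float, higher is better
    """
    best = None
    best_score = float("-inf")
    for detect, score in detectors:
        result = detect(image)
        if result is None:
            continue
        quality = score(result)
        if quality > best_score:
            best, best_score = result, quality
    return best, best_score
```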

Can you think of a test that would measure the effectiveness? Can you measure which will fail under different conditions? In reality, there is no perfect way to do this or most other measurements, and that is why various measurement techniques were created. Determining which is appropriate under the circumstances, and how to make the evaluation, is the valuable skill to learn. If you have data to back up your conclusion, I’d be happy to help you understand the data.

Greg McKaskle

I didn’t do most of the vision stuff around here last year. For Lunacy, I got something that worked fairly effectively in various situations, but was only a single algorithm. Unfortunately this is not part of my day job, so time is a very limited resource.

The measure-of-quality calculation would be limited to each specific implementation (no golden answer, as you have mentioned). I was thinking along the lines of using some statistics calculations, but this is nothing more than a thought floating around at the moment.

I can’t seem to find my archived code anywhere, but I remember taking the vision example from last year and producing a quality measurement which aided in determining whether the targets found were valid. Anything that I do come up with, I will post here on CD.

Superbrightleds.com LED rings

<R08> "Shields, curtains, or any other devices or materials designed or USED to obstruct or limit the vision of any drivers and/or coaches and/or interfere with their ability to safely control their Robot

My team thought one way to get around this was to have a switch that would turn the LEDs off, unless we wanted to shoot a ball. In that case the LEDs would turn on and the camera would orient the shooter.
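
A sketch of that switch logic. The helper functions below are hypothetical stubs, not real WPILib calls; replace them with whatever your joystick and relay code actually provides.

```python
# "LEDs only while aiming/shooting" -- hypothetical helpers, logic only.
def read_shoot_button() -> bool:
    return False  # hypothetical: is the driver holding the aim/shoot button?

def set_led_ring(on: bool) -> None:
    pass  # hypothetical: drive the relay powering the LED ring

def run_camera_aiming() -> None:
    pass  # hypothetical: use the camera to orient the shooter

def teleop_periodic():
    aiming = read_shoot_button()
    set_led_ring(aiming)       # ring is lit only while lining up a shot
    if aiming:
        run_camera_aiming()
```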

Using IR LEDs would also get around this, but the tape may not reflect it. Also you would have to remove the IR filter in the camera lens.