449's code for tracking two colors

FIRST recently posted some code to track two colors (http://joule.ni.com/nidu/cds/view/p/lang/en/id/1215). However, our team had already written similar code, and after comparing the two, we think ours is easier to understand and perhaps even faster. The main difference (as far as I can tell) is that FIRST's code will move the camera to track the object, while ours only returns a position.

In the spirit of gracious professionalism, we decided to make our team's code available to everyone. You're welcome to use this code and/or modify it to suit your own team's needs. If you find anything wrong with our code*, please let us know. Also, you'll need to change the color value ranges and the minimum area and distance values for your own setup.

*If you look at the image outputs, you might see that they “flicker” occasionally. We think this is only a problem with the image output because the numerical output parameters look fine.

EDIT: fixed parenthesis in link

LocateTwoColoredObjects 449 for Chief Delphi 01-15-09.zip (86.2 KB)



Wow, how can I thank you? I'm loading it into my program now. :) !!!

The posted code looks like it thresholds the entire image twice, once for green and once for pink, then compares the positions of the largest pink particle and the largest green particle. This approach will work under plenty of conditions, but there are reasons that the NI code is more complex.
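For concreteness, here's a minimal sketch of that whole-image, two-threshold flow as I understand it. This isn't the actual 449 LabVIEW code: NumPy/SciPy stand in for the NI Vision calls, and the hue ranges, minimum area, and function names are all placeholders.

```python
import numpy as np
from scipy import ndimage

def largest_particle(mask):
    """Return (area, (row, col) centroid) of the biggest connected blob."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0, None
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    biggest = int(np.argmax(areas))            # index into areas; label is +1
    return int(areas[biggest]), ndimage.center_of_mass(mask, labels, biggest + 1)

def locate_two_colors(hue, pink_range, green_range, min_area=50):
    """Threshold the whole frame once per color, then compare the largest
    pink and largest green particles. `hue` is a 2-D array of hue values."""
    pink = (hue >= pink_range[0]) & (hue <= pink_range[1])
    green = (hue >= green_range[0]) & (hue <= green_range[1])
    pink_area, pink_pos = largest_particle(pink)
    green_area, green_pos = largest_particle(green)
    if pink_area < min_area or green_area < min_area:
        return None                            # no qualifying target
    return pink_pos, green_pos                 # caller checks which is on top
```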

First, I’ll explain the NI code, then try to detail the situations where they will give different results.

The NI code thresholds the entire image once, using pink by default, then goes through the particles large-to-small, looking nearby for a matching green.
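A rough sketch of that flow, with the same caveats as above (NumPy/SciPy stand-ins rather than the actual NI Vision calls; the window size and sub-area geometry are simplified guesses):

```python
import numpy as np
from scipy import ndimage

def locate_ni_style(hue, pink_range, green_range, window=40, min_area=50):
    """Threshold once for pink, walk the particles from largest to smallest,
    and only threshold small windows near each candidate for green."""
    pink = (hue >= pink_range[0]) & (hue <= pink_range[1])
    labels, n = ndimage.label(pink)
    if n == 0:
        return None
    areas = ndimage.sum(pink, labels, index=range(1, n + 1))
    for i in np.argsort(areas)[::-1]:                  # largest particle first
        if areas[i] < min_area:
            break                                      # rest are even smaller
        r, c = ndimage.center_of_mass(pink, labels, int(i) + 1)
        r, c = int(r), int(c)
        # Subset and threshold two small areas, above and below the pink
        # particle, looking for enough green pixels to qualify the target.
        for r0 in (max(0, r - window), r):
            sub = hue[r0:r0 + window, max(0, c - window):c + window]
            green = (sub >= green_range[0]) & (sub <= green_range[1])
            if green.sum() >= min_area:
                return (r, c)                          # pink with green nearby
    return None                                        # nothing qualified
```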

How do they differ? Performance is the first difference. If the whole image is 320x240, that results in 76,800 pixels to look at. Doing this again for the second color doubles the work to 153,600. Not bad, but let's count the NI compares. The first color takes 76,800; then the largest particle is used to subset two image areas to threshold. Even when really close to the object, these will usually be less than 80x80 each, or 12,800 pixels total, resulting in 89,600 pixel compares. As the target gets farther away, the pixel count drops to 76,800 when no particles are found at all. I don't really think you could build a scene that would force the full 2x pixel compares. Assuming the image copies are quick, the more complicated algorithm should actually run faster.
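Restating that arithmetic compactly (these lines just reproduce the counts above; the variable names are placeholders):

```python
W, H = 320, 240
full = W * H              # one full-frame threshold: 76,800 pixel compares
two_color = 2 * full      # threshold the whole image twice: 153,600
sub = 2 * 80 * 80         # two 80x80 sub-windows: 12,800
ni_near = full + sub      # NI style with a close target: 89,600
ni_none = full            # NI style when no pink is found: 76,800
```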

How else do they differ? Let’s pretend that the image captured by the camera is pretty noisy – robot bumpers, colored robots, t-shirts, hats, mohawks, etc. Because of lighting variation, the color threshold is intentionally set to be pretty tolerant. This means that something pink may be close enough to the pink of the target to pass the threshold. In fact, something orange may pass as well, along with some stuff colored red. This means that the color threshold will often have objects in it other than the target. When one of those particles is actually larger than the target, it isn’t sufficient to look at only the largest particle. It is good to keep going down the list looking at each particle to see if it has the green nearby to qualify it as a target.

I hope this examination made sense. I'd encourage you to verify the differences for yourself. You may very well come up with something even better, and at the very least you will learn how the approaches work and when they may fail, so you can use them to your advantage.

Greg McKaskle

Where exactly did you find the code released by FIRST to track two colors? The link in your post does not work anymore.

Thanks
-Tanner

The link above has a parenthesis on the end. It works fine without that.

Greg McKaskle