
View Full Version : Image processing for vision tracking: black and white, or color?


Andrew Lawrence
27-07-2012, 00:02
I have seen multiple examples of vision tracking using images, and have seen it done well in both color and black and white. For what reason would one do color or black and white? Is there an advantage to either?

Greg McKaskle
27-07-2012, 08:29
Color takes three times the data, so black and white tends to take less memory and less time to process.
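The 3x figure follows directly from the pixel layout; here is a minimal sketch (plain Python, not the LabVIEW code discussed in this thread) of an RGB-to-grayscale conversion, assuming the standard BT.601 luma weights:

```python
# Minimal sketch (plain Python, not LV code) of why grayscale is a third
# of the data: each pixel collapses from three bytes (R, G, B) to one
# intensity value.
def to_grayscale(rgb_image):
    """rgb_image: list of rows, each row a list of (r, g, b) tuples."""
    # BT.601 luma weights; for tracking, any reasonable weighting works.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(rgb)  # each pixel is now a single 0-255 value
```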

On the other hand, since we see in color, it is somewhat difficult for us to get our heads wrapped around the B&W image processing techniques. Also, if there is a uniquely colored entity in the photo, color makes the task easy, whereas black and white cannot take that approach and must rely on shape, proximity, or other artifacts.

In many of the LV FRC vision examples, both color and B&W are used. Color is often used to identify potential areas, and those areas are then processed further, often in B&W, to rank or disqualify them.
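That two-stage idea can be sketched roughly like this (plain Python, not the actual LV FRC example code; the "green target" thresholds are invented for illustration):

```python
# Rough sketch of the two-stage pipeline: stage 1 uses color to flag
# candidate pixels; stage 2 works on the resulting binary mask alone.
# Thresholds here are made-up illustration values, not FRC constants.
def color_mask(img, min_g=200, max_rb=80):
    """Stage 1: 1 where a pixel looks target-colored, else 0."""
    return [[1 if g >= min_g and r <= max_rb and b <= max_rb else 0
             for (r, g, b) in row] for row in img]

def fill_ratio(mask):
    """Stage 2 (B&W): score the blob by how much of its bounding box it
    fills; a solid rectangle scores 1.0, scattered noise scores low."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return 0.0
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    box = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return len(pts) / box

img = [[(0, 255, 0), (0, 255, 0), (10, 10, 10)],
       [(0, 255, 0), (0, 255, 0), (10, 10, 10)]]
score = fill_ratio(color_mask(img))  # solid 2x2 green patch -> 1.0
```

In LabVIEW the same split shows up as a color threshold followed by particle analysis on the resulting binary image.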

Greg McKaskle

Brandon Zalinsky
27-07-2012, 13:37
I have seen multiple examples of vision tracking using images, and have seen it done well in both color and black and white.

I've seen many teams using full-color, high-res displays. 78 Air Strike and 341 Miss Daisy both have very good color vision tracking, just to name two.

For what reason would one do color or black and white? Is there an advantage to either?

B/W takes less processing power and is (as I hear; I'm not a programmer) easier to code. However, even B/W maxes out the processing power of our cRIO, causing us to offload the calculations to the laptop. So even though B/W tracking takes less processing power, it's a disadvantage that it can't be run on the cRIO. If someone knows how to, please let 1058 know!

The other disadvantage of running black/white is a worse camera feed for the driver and operator. This doesn't matter, however, if you use two or more cameras.

Good luck!

Greg McKaskle
27-07-2012, 16:30
Any camera connection can be B&W. The LV palette has a VI to set the camera session's resolution, frame rate, and color settings. Note that the simplified dashboard VI doesn't have a parameter for that.

Greg McKaskle

Tom Line
27-07-2012, 18:48
I've seen many teams using full-color, high-res displays. 78 Air Strike and 341 Miss Daisy both have very good color vision tracking, just to name two.



B/W takes less processing power and is (as I hear; I'm not a programmer) easier to code. However, even B/W maxes out the processing power of our cRIO, causing us to offload the calculations to the laptop. So even though B/W tracking takes less processing power, it's a disadvantage that it can't be run on the cRIO. If someone knows how to, please let 1058 know!

The other disadvantage of running black/white is a worse camera feed for the driver and operator. This doesn't matter, however, if you use two or more cameras.

Good luck!

The cRIO was perfectly capable of doing vision processing for the game this year. You just need to remember that you only need ONE good image, and you only need to do your calculations once to know where to aim.

Trying to do real-time 30 FPS calculations is unneeded.

The time to process a single color frame at medium resolution is under 0.25 seconds.

Brandon Zalinsky
11-08-2012, 13:20
The cRIO was perfectly capable of doing vision processing for the game this year. You just need to remember that you only need ONE good image, and you only need to do your calculations once to know where to aim.

Trying to do real-time 30 FPS calculations is unneeded.

The time to process a single color frame at medium resolution is under 0.25 seconds.

True, though I believe we had to because our shooter rattled like a sonofagun and one image wouldn't work.

Tom Line
11-08-2012, 17:27
True, though I believe we had to because our shooter rattled like a sonofagun and one image wouldn't work.

We had the same issue initially. We dampened the camera a bit with a different style of mount and changed its location. Before that worked, though, we changed over to a three-consecutive-picture analysis (three consecutive pictures had to agree on the target location to within a certain tolerance). That changed our targeting time to 0.75 seconds, which really didn't matter much in the scheme of things.
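The three-consecutive-picture check could be sketched like this (my reconstruction in plain Python, not the team's actual LV code; the x-position representation and tolerance are assumptions for illustration):

```python
# Sketch of a three-consecutive-frame agreement check: accept a target
# position only when three consecutive estimates agree within `tol`.
# Positions and tolerance here are hypothetical pixel values.
def stable_target(estimates, tol=2.0):
    """Return the first target x-position confirmed by three consecutive
    frames agreeing to within `tol`, or None if none do."""
    for i in range(len(estimates) - 2):
        window = estimates[i:i + 3]
        if max(window) - min(window) <= tol:
            return sum(window) / 3.0  # average the agreeing frames
    return None

# The first two frames disagree (camera shake); frames 3-5 agree.
aim = stable_target([120.0, 95.0, 101.0, 100.5, 99.8])
```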