The way the camera works is that it takes a picture, finds all the pixels that fall into a color range, and reports information about the resulting blob. The information I've used in the past is:
- bounding box (x1, y1, x2, y2)
- the median point (mx, my)
- the pan/tilt (see note)
- a "confidence" (basically a ratio of the color blob area to the area of the image)
- IIRC, some statistical information about the color is also provided.
Note: In 2005, the camera would drive its own servos and report their values. In 2006, the code would not configure the camera this way, instead relying on the RC to drive the servos. (I changed this back to the 2005 behavior in my own code.) I do not know yet what the behavior is this year.
I do not remember whether the camera reports the number of separate blobs in the default information. If it does not, the communication overhead and computation required to get it would likely be prohibitive. If it does, there is a good chance you cannot get separate bounding box/median information for each blob.
Of course, I'm likely to eat my words when the actual code comes out.
EDIT: Wow. So much discussion happened while I was posting!