The full blog entry is up and can be read here:
http://cr4.globalspec.com/blogentry/...amming-CMUCam2
It discusses a little bit more than the camera, just to let you know. Anyway, for now here are the basic steps of how it works:
First, put the camera in Polled Mode (the command is "PM 1"). This means that for every Track Color command sent, only one T packet will come back, which lets you write back to the camera without it flooding you with data. (This may not actually be necessary, but tracking seemed to work more reliably for us when we did it.)
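If it helps, here's a minimal sketch of that setup in C. camera_cmd() is just a placeholder name for whatever routine you already use to send an ASCII command over the serial link; it's not part of the CMUcam2 firmware or of Mr. Watson's code.

```c
/* camera_cmd() is a stand-in for your existing serial send routine
 * (command string plus carriage return over the TTL serial link). */
void camera_cmd(const char *cmd);

void camera_setup_polled(void)
{
    camera_cmd("PM 1");   /* polled mode: each Track Color command now returns  */
                          /* exactly one T packet instead of a continuous stream */
}
```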
Now cycle through this loop:
1. Using the regular camera window, track onto a target. A T packet will be sent back.
2. Based on the centroid coordinates in that T packet (labeled .mx and .my in Mr. Watson's code), draw a virtual window with the parameters 1, 1, centroid_x, 239. Then resend the Track Color command.
3. Now draw a virtual window with the parameters centroid_x, 1, 159, 239 and resend the Track Color command.
4. Set the virtual window back to the full view (1, 1, 159, 239) and go back to step one.
Effectively, what this does is split the window in half based on the centroid of the initial blob. To make it more robust, you might want to do the splitting only when the confidence value is below a certain threshold. When we've cleaned it up and made it work with the dashboard we will post some more.
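In the meantime, here's a rough C sketch of the whole loop. Same caveats as above: camera_cmd() and wait_for_t_packet() are placeholder helpers standing in for whatever serial send/parse code you already have, I'm writing the virtual-window command as "VW" and Track Color as "TC", and the confidence threshold is just a made-up number you'd have to tune.

```c
#include <stdio.h>

/* Placeholder helpers -- substitute your own serial routines.
 * wait_for_t_packet() should parse one T packet into the out-params
 * and return 0 on a timeout or a packet with no tracking data. */
void camera_cmd(const char *cmd);
int  wait_for_t_packet(unsigned char *mx,
                       unsigned char *my,
                       unsigned char *conf);

#define CONF_SPLIT_THRESHOLD 80   /* assumed value; tune for your lighting */

void two_target_loop(void)
{
    unsigned char mx, my, conf;
    unsigned char split_x;
    char buf[24];

    camera_cmd("PM 1");                      /* polled mode: one T packet per TC */

    while (1) {
        /* Step 1: track over the full window. */
        camera_cmd("VW 1 1 159 239");
        camera_cmd("TC");
        if (!wait_for_t_packet(&mx, &my, &conf) || mx == 0)
            continue;                        /* nothing tracked this pass */

        /* Optional robustness check: only split when the blob is
         * spread out, i.e. the confidence is low. */
        if (conf >= CONF_SPLIT_THRESHOLD)
            continue;

        split_x = mx;                        /* centroid of the combined blob */

        /* Step 2: left half -- column 1 up to the centroid. */
        sprintf(buf, "VW 1 1 %d 239", split_x);
        camera_cmd(buf);
        camera_cmd("TC");
        if (wait_for_t_packet(&mx, &my, &conf)) {
            /* (mx, my) is now the centroid of the left-hand target */
        }

        /* Step 3: right half -- the centroid out to column 159. */
        sprintf(buf, "VW %d 1 159 239", split_x);
        camera_cmd(buf);
        camera_cmd("TC");
        if (wait_for_t_packet(&mx, &my, &conf)) {
            /* (mx, my) is now the centroid of the right-hand target */
        }

        /* Step 4: the top of the loop restores the full window. */
    }
}
```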