There are several things that will affect frame rate. I’ll cover the ones I remember, and then talk about why frame rate isn’t necessarily that important.
Frame Rate:
One obvious thing that can limit frame rate is the frame rate setting itself. Setting it to a low number will delay the request for the next frame. Setting it too high will request the next frame as soon as one arrives, and acquisition will go as fast as the other factors allow.
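Roughly speaking, the acquisition loop behaves like the C++ sketch below. The getNextImage call is a hypothetical stand-in for the real acquisition code, not an actual API; the point is just that a low setting inserts a delay between requests, while a high setting leaves none.

    #include <ctime>

    // Hypothetical stand-in for the real frame request and decode.
    void getNextImage() { /* request and decode one frame */ }

    // Crude portable delay for the sketch; a real loop would sleep.
    void delayMs(int ms) {
        clock_t end = clock() + clock_t(ms) * CLOCKS_PER_SEC / 1000;
        while (clock() < end) { }
    }

    int main() {
        const int requestedFps = 10;             // the frame rate setting
        const int periodMs = 1000 / requestedFps;
        for (int frame = 0; frame < 100; ++frame) {
            clock_t start = clock();
            getNextImage();
            int elapsedMs = int((clock() - start) * 1000 / CLOCKS_PER_SEC);
            // Low setting: wait out the rest of the period before asking again.
            // High setting: periodMs shrinks below elapsedMs, so there is no
            // delay and the loop runs as fast as decode/processing allows.
            if (elapsedMs < periodMs) delayMs(periodMs - elapsedMs);
        }
        return 0;
    }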
Another issue is the resolution. Each step in resolution is a 4x difference in pixel count. A 640x480 image is nearly 1MB and takes about 100ms simply to decompress, and all processing will be about four times as expensive as at 320x240. A 320x240 image takes about 22ms to decode; this is the size I used for the examples, which was really just a built-in performance handicap, since it is about 4x slower than 160x120. The small image takes 8ms to decode, and the processing will similarly be about four times faster.
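To make the 4x relationships concrete, here is a small standalone C++ snippet that computes the pixel counts and pairs them with the decode times quoted above (the times are my measurements, not something the code derives):

    #include <cstdio>

    int main() {
        // Each step up in resolution quadruples the pixel count, so
        // decompression and per-pixel processing scale roughly 4x per step.
        const int widths[]  = {160, 320, 640};
        const int heights[] = {120, 240, 480};
        const double decodeMs[] = {8.0, 22.0, 100.0};  // measured decode times

        const double basePixels = 160.0 * 120.0;
        for (int i = 0; i < 3; ++i) {
            int pixels = widths[i] * heights[i];
            printf("%dx%d: %d pixels (%.0fx 160x120), ~%.0f ms to decode\n",
                   widths[i], heights[i], pixels, pixels / basePixels,
                   decodeMs[i]);
        }
        return 0;
    }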
The next issue, which affects LV more than C++, is the setup of the camera. If you don’t add the FRC/FRC account on the camera, it takes multiple requests for the cRIO to get an image from the camera. The driver doesn’t know which account will work, so it goes through three of them in sequence. For performance, you’d like it to succeed on the first try, with the FRC/FRC account.
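I don’t have the driver source in front of me, so the sketch below only illustrates the behavior; the account list and the openCameraStream call are hypothetical stand-ins, not the real driver API. The point is that each failed account costs extra round trips to the camera.

    #include <cstdio>
    #include <cstring>

    // Hypothetical stand-in for the real camera request; true on success.
    bool openCameraStream(const char* user, const char* password) {
        // In this sketch, only the FRC/FRC account exists on the camera.
        return strcmp(user, "FRC") == 0 && strcmp(password, "FRC") == 0;
    }

    int main() {
        // The driver doesn't know which account is configured, so it walks
        // a list. If FRC/FRC is set up, the very first attempt succeeds.
        const char* users[]     = {"FRC", "root",  "admin"};  // hypothetical list
        const char* passwords[] = {"FRC", "admin", "admin"};

        for (int i = 0; i < 3; ++i) {
            printf("Trying account %s...\n", users[i]);
            if (openCameraStream(users[i], passwords[i])) {
                printf("Connected on attempt %d\n", i + 1);
                return 0;
            }
        }
        printf("No account worked\n");
        return 1;
    }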
The last issue has to do with various camera settings. The camera will lower the frame rate if it doesn’t have enough light for a good exposure. The settings that affect this are the Exposure, Exposure Priority, and Brightness.
The other things mentioned, such as the width of the hue range, will not have a large effect on performance, but since a wider range produces more blobs in the mask to analyze, they will have some effect. The Saturation and Luminance ranges matter as well, since any pixel that can be eliminated by Sat or Lum is cheaper than one that requires the Hue calculation. Again, I think these settings are secondary for performance.
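To illustrate why Sat and Lum rejections are cheap, here is an illustrative C++ threshold test (not the actual NI vision code): if saturation and luminance are checked first and short-circuit, the hue comparison only runs for pixels that survive, and a wider hue range simply lets more pixels into the mask.

    #include <cstdint>

    struct HSLPixel { uint8_t h, s, l; };
    struct Range    { uint8_t lo, hi; };

    // True if the pixel belongs in the mask. Saturation and luminance are
    // tested first, so most pixels are rejected before the hue test runs.
    // Widening the hue range passes more pixels, producing more blobs to
    // analyze later.
    inline bool inMask(const HSLPixel& p, Range hue, Range sat, Range lum) {
        if (p.s < sat.lo || p.s > sat.hi) return false;  // cheap reject
        if (p.l < lum.lo || p.l > lum.hi) return false;  // cheap reject
        return p.h >= hue.lo && p.h <= hue.hi;           // only now test hue
    }

    int main() {
        HSLPixel px = {110, 200, 128};
        Range hue = {90, 130}, sat = {100, 255}, lum = {40, 220};
        return inMask(px, hue, sat, lum) ? 0 : 1;
    }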
Performance isn’t everything:
This may be counterintuitive, but FPS isn’t really that important. More important is the lag, or latency: the time between when something happens in the real world and when the image processing can notice it. It may seem that higher FPS would fix this, but think about how award shows use a ten-second delay to let the censors block things that aren’t supposed to be televised. They don’t change the FPS to do this; instead they buffer the images. The places images can be buffered include the camera’s TCP stack, the cRIO’s TCP stack, and the user’s program.

To measure the latency, I used the LED on the front of the cRIO itself, but you can use one off of a digital card if you’d prefer. Turn the LED on and time how long it takes for vision to receive an image with the LED on. Because the camera exposure and the LED are unsynchronized, you’ll need to take numerous measurements and do some statistics to see how things behave.
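In outline, the measurement looks like the C++ sketch below. The setLED and ledVisibleInLatestImage functions are placeholder stubs for whatever your framework provides for digital I/O and frame inspection; here they are faked so the sketch stands alone.

    #include <cstdio>
    #include <ctime>

    // Placeholder stubs so the sketch compiles; swap in your real I/O.
    static bool g_led = false;
    void setLED(bool on) { g_led = on; }              // cRIO LED or digital out
    bool ledVisibleInLatestImage() { return g_led; }  // real version inspects a frame

    double nowMs() { return clock() * 1000.0 / CLOCKS_PER_SEC; }

    int main() {
        const int kTrials = 50;  // exposure and LED are unsynchronized: average
        double total = 0;
        for (int i = 0; i < kTrials; ++i) {
            setLED(true);
            double start = nowMs();
            while (!ledVisibleInLatestImage()) { /* poll processed frames */ }
            total += nowMs() - start;
            setLED(false);
            // In a real test, let a few dark frames pass before the next trial.
        }
        printf("mean latency: %.2f ms over %d trials\n", total / kTrials, kTrials);
        return 0;
    }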
When I measured this, both the 320x240 and 160x120 sizes had around 60ms of latency with the simplest processing I could do. Clearly this will go up as the processing becomes more complex. What this means is that everything the cRIO senses through the camera is delayed by some amount that depends on the settings. For this year’s processing, I think the amount was probably about 80ms. So by the time the cRIO “sees” something, it actually happened about 80ms earlier.
Why is this important? In order to hit a moving target, you don’t want to shoot where something is, and you certainly don’t want to shoot where it used to be. You want to shoot where it will be. If the ball traveled instantaneously, you’d want to estimate the target’s relative velocity and aim about 80ms ahead. Of course the Orbit Balls are anything but instantaneous flyers, and the further away the target is, the longer the flight time. I don’t have any measured numbers, and it probably depends quite a bit on the delivery mechanism.
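The aiming consequence can be written down directly. In the C++ sketch below, the 80ms figure is the latency measured above, while the flight time is a made-up placeholder, since I have no measured numbers for it:

    #include <cstdio>

    struct Vec2 { double x, y; };

    // Predict where the target will be when the shot arrives. latencySec
    // covers the camera-to-cRIO delay; flightSec is the (unmeasured,
    // mechanism-dependent) flight time of the ball.
    Vec2 leadTarget(Vec2 seenPos, Vec2 velocity,
                    double latencySec, double flightSec) {
        double lead = latencySec + flightSec;
        return { seenPos.x + velocity.x * lead,
                 seenPos.y + velocity.y * lead };
    }

    int main() {
        Vec2 pos = {3.0, 1.5};   // where the target was "seen", in meters
        Vec2 vel = {0.8, 0.0};   // meters/sec, estimated from successive frames
        Vec2 aim = leadTarget(pos, vel, 0.080, 0.5);  // 80 ms latency, 0.5 s flight (guess)
        printf("aim at (%.2f, %.2f) instead of (%.2f, %.2f)\n",
               aim.x, aim.y, pos.x, pos.y);
        return 0;
    }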
Anyway, the point is that a higher FPS will give you a better estimate of the velocity, but it will not let you ignore the latency issue.
I actually don’t have a measurement for latency using C++. It is possible that the numbers are very different.
None of this performance talk has anything to do with the camera seeing only one color or another. Those are tuning issues. The camera has many different color settings, such as white balance, and lighting will change considerably from event to event. Tilting the target toward or away from the light will also affect the saturation quite a bit.
The best way to deal with these is to capture images and bring them into Vision Assistant, where you can do a line profile or look at a mask and come to understand how these environmental changes affect the values the camera gives you. Then you can try different things to make the camera behave better, mount the camera better, etc. I put some images up on Flickr last year that demonstrate some of the issues.
Greg McKaskle