I have not compared on each of the platforms, but from what I have seen, heard, and measured, this is how I'd decide.
The image processing and decompression algorithms are largely integer calculations, so MIPS ratios are a reasonable estimate of image processing ratios. Sites like
http://en.wikipedia.org/wiki/Million... ns_per_second are quite helpful for estimating the tradeoffs.
The IP camera images are compressed, and it takes quite a bit of time simply to decompress them. That time drops substantially with smaller images, and the same holds for many of the typical processing approaches. The primary sizes are small (160x120), medium (320x240, which has 4x the pixels), and large (640x480, which has 16x the pixels).
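To make the size tradeoff concrete, here is a back-of-the-envelope sketch. The resolutions follow from the 160x120 base size given above; the point is simply that decompression and most per-pixel processing scale roughly with pixel count:

```python
# Pixel counts for the three common image sizes.
# Decompression and typical per-pixel processing cost scale
# roughly linearly with the number of pixels.
SIZES = {
    "small": (160, 120),
    "medium": (320, 240),   # 4x the pixels of small
    "large": (640, 480),    # 16x the pixels of small
}

def pixel_ratio(name, base="small"):
    """Pixel count of `name` relative to the base size."""
    w, h = SIZES[name]
    bw, bh = SIZES[base]
    return (w * h) / (bw * bh)

for name in SIZES:
    print(name, SIZES[name], "ratio:", pixel_ratio(name))
```

So as a first approximation, a medium image costs 4x and a large image 16x what a small one does, before you even pick an algorithm.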
The IP camera can supply monochrome images too. So if you don't need color, don't use it. But do you need it?
The LV examples, and perhaps those for the other languages, run on both laptops and the cRIO, so you can compare directly. The driver station has a chart that shows the round-trip latency of the UDP datagrams. This should give an idea of the TCP latency.
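The round-trip measurement itself is simple to sketch. This is a generic UDP echo timing loop against a local echo socket, not the driver station's actual mechanism; the port and payload are arbitrary:

```python
import socket
import threading
import time

def run_echo(sock):
    # Minimal UDP echo service: send each datagram back to its sender.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"quit":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # bind to any free port
port = server.getsockname()[1]
threading.Thread(target=run_echo, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
rtts = []
for _ in range(10):
    t0 = time.perf_counter()
    client.sendto(b"ping", ("127.0.0.1", port))
    client.recvfrom(1024)
    rtts.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
client.sendto(b"quit", ("127.0.0.1", port))
print("median RTT ms:", sorted(rtts)[len(rtts) // 2])
```

Run the same loop from the laptop and from code on the robot and the difference in round-trip times tells you what the network link is costing you.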
Clearly algorithm selection matters as well. How much info do you need to know? How certain do you need to be?
Similarly, a camera is a sensor. How often do you need to read it? How few images can you get away with processing? What other sensors can supplement the camera?
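One way to think about "how few images" is to budget the camera against the rest of the loop. The per-frame costs below are made-up placeholder numbers, not measurements; substitute your own timings:

```python
# Hypothetical per-frame costs in milliseconds for each image size,
# covering decompression plus a simple processing pass.
# These numbers are illustrative only -- measure your own.
COST_MS = {"small": 10.0, "medium": 40.0, "large": 160.0}

def fps_ceiling(size):
    """Upper bound on frames/sec if the CPU did nothing but vision."""
    return 1000.0 / COST_MS[size]

for size in COST_MS:
    print(f"{size}: ~{fps_ceiling(size):.1f} fps ceiling")
```

If the ceiling is low, that is where the other sensors come in: a gyro or encoders can carry the control loop at a high rate while the camera corrects them a few times a second.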
Greg McKaskle