Extract Raw Pixel Data

I know that there are plenty of pre-made image analysis functions at my disposal, but for my own amusement, I would like access to the raw pixel data in images from the Vision camera. That is to say, I would like to be able to specify a location in an image and figure out what color it is (RGB). If there is a function that enables this, that would be great. If, however, the function is high level and carries a lot of overhead, is there a more efficient way to get this data?

I assume you mean when using the NI Vision libraries on the cRIO. Which language are you using?

For LabVIEW, see the example at C:\Program Files\National Instruments\LabVIEW 2011\examples\Vision\2. Functions\Image Management\ImageToArray Example.vi (translate the path if you installed LabVIEW in a different location). The Java interface to NI Vision doesn’t seem to include this function. For C++, there appears to be information in this thread on NI.com.

When you call this function, a copy of the pixel memory will be made (source; see the previously linked thread for an explanation of why). That probably isn’t too bad as long as you only need to do it once or twice per frame, but it’s likely you could get better efficiency by using the built-in operations.
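For the C++ side, single-pixel access looks roughly like the sketch below. This is from memory of the NI Vision C API (nivision.h), and GetPixelRGB is just a name I made up for the wrapper, so double-check the exact types and signatures against the header that ships with your tools.

```cpp
// Sketch only: single-pixel access through the NI Vision C API.
// Names are from memory of nivision.h -- verify against the header you have.
#include "nivision.h"

// Reads the RGB value of one pixel from a 32-bit color IMAQ image.
bool GetPixelRGB(Image* image, int x, int y,
                 unsigned char& r, unsigned char& g, unsigned char& b)
{
    PixelValue value;
    // imaqGetPixel copies one pixel value out of the image.
    // IMAQ functions return 0 on failure.
    if (imaqGetPixel(image, imaqMakePoint(x, y), &value) == 0)
        return false;

    // For an RGB image, the union's rgb member holds the channel values.
    r = value.rgb.R;
    g = value.rgb.G;
    b = value.rgb.B;
    return true;
}
```

If I remember right, imaqImageToArray is the C-side analogue of the LabVIEW VI above and copies the whole pixel buffer out in one call, which is the copy overhead being discussed here; per-pixel calls like the one sketched would be much slower if you need the whole frame.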

A more radical approach would be to roll your own vision solution entirely and use a third-party library to decode the JPEG returned by the camera. Finding an existing library should be pretty easy for C++, harder for Java (the Squawk VM doesn’t include the full desktop Java API), and harder still for LabVIEW (AFAIK most people just use the NI Vision libraries). Also consider that any code you find or write probably won’t be as optimized for the cRIO as the functions provided by NI.
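To give a sense of what the decode step would look like in C++ with libjpeg, here is a rough sketch. Fetching the JPEG bytes from the camera over HTTP is omitted, jpeg_mem_src only exists in libjpeg 8+ / libjpeg-turbo (older versions need a custom source manager), DecodeJpeg is a made-up helper name, and you would still have to build the library for the cRIO’s PowerPC target.

```cpp
// Sketch only: decode a JPEG already sitting in memory into packed RGB pixels
// using libjpeg. Camera I/O and error recovery (setjmp) are left out.
#include <stdio.h>
#include <vector>
#include <jpeglib.h>

// Decodes jpegData into a packed RGB buffer; returns width/height via out params.
std::vector<unsigned char> DecodeJpeg(const unsigned char* jpegData,
                                      unsigned long jpegSize,
                                      int& width, int& height)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);

    jpeg_create_decompress(&cinfo);
    // jpeg_mem_src requires libjpeg 8+ or libjpeg-turbo.
    jpeg_mem_src(&cinfo, const_cast<unsigned char*>(jpegData), jpegSize);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    width  = cinfo.output_width;
    height = cinfo.output_height;
    int channels = cinfo.output_components;   // 3 for an RGB camera image

    std::vector<unsigned char> pixels(width * height * channels);
    while (cinfo.output_scanline < cinfo.output_height) {
        // Decode one scanline at a time directly into the output buffer.
        unsigned char* row = &pixels[cinfo.output_scanline * width * channels];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return pixels;
}
```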

If this is primarily to learn how the algorithm is implemented, efficiency doesn’t matter so much, and you can indeed get the pixel data as Ryan described and then implement your convolution or other algorithm in any of the languages. I’ve done this before in LV, and back in college in C, just to get a good feel for how the math worked. May I suggest doing the image processing on a file-based image rather than on the camera? It splits the problem in half and lets you focus on the downstream issues. Later you can substitute the camera for the JPEG file and everything will fit together nicely.
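Once you have the pixel array, the core of a 3x3 convolution is just a few nested loops. A bare-bones grayscale sketch (Convolve3x3 is just an illustrative name, and edge pixels are skipped to keep it short):

```cpp
// Bare-bones 3x3 convolution over a grayscale pixel buffer -- the kind of
// loop described above. Edge pixels are skipped rather than handled specially.
#include <vector>

std::vector<unsigned char> Convolve3x3(const std::vector<unsigned char>& src,
                                       int width, int height,
                                       const float kernel[3][3])
{
    std::vector<unsigned char> dst(src.size(), 0);

    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            float sum = 0.0f;
            // Every output pixel touches nine input pixels.
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += kernel[ky + 1][kx + 1] * src[(y + ky) * width + (x + kx)];

            // Clamp back into the 0..255 range before storing.
            if (sum < 0.0f)   sum = 0.0f;
            if (sum > 255.0f) sum = 255.0f;
            dst[y * width + x] = static_cast<unsigned char>(sum);
        }
    }
    return dst;
}
```

Feed it a blur kernel (all nine entries 1/9) or an edge-detection kernel, run it on your file-based image, and compare the output against what the library functions produce.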

The thing you will discover is that image processing inherently touches a ton of data. An uncompressed color image of 320x240 is 225 kilobytes. Unless the environment is incredibly predictable, you will often need to touch every pixel in the image at least once, and often many times. Making this fast enough to keep up with the real world ultimately forces you to write the code using every performance trick you can, even SSE instructions, which adds tons of complexity to an otherwise mathematically simple loop. For this reason, once you understand what you need in your implementation, you will likely find that you want to switch to the professional implementation in a library like NI Vision or OpenCV. It is great to be able to understand how they work and compare them, but impractical to make your own as fast as needed.
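To put rough numbers on that, here is a quick back-of-the-envelope sketch; the 30 fps frame rate is only an illustrative assumption.

```cpp
// Rough data-volume numbers for a 320x240 uncompressed color image.
#include <stdio.h>

int main()
{
    const int width = 320, height = 240, bytesPerPixel = 3;
    const int fps = 30;   // assumed frame rate, for illustration only

    const long bytesPerFrame = static_cast<long>(width) * height * bytesPerPixel;
    printf("Per frame:  %ld bytes (%.0f KB)\n", bytesPerFrame, bytesPerFrame / 1024.0);
    printf("Per second: %.1f MB at %d fps\n",
           bytesPerFrame * fps / (1024.0 * 1024.0), fps);
    // 320 x 240 x 3 = 230,400 bytes = 225 KB per frame; one pass over every
    // pixel, 30 times a second, is ~6.6 MB/s of reads before any real math.
    return 0;
}
```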

Greg McKaskle