Is it possible to acquire images on the cRIO in C++ and then process them in LabVIEW?
-jonathan
> Is it possible to acquire images on the cRIO in C++ and then process them in LabVIEW?
> -jonathan
It is possible to call binary libraries (.out files) from LV, but you should have a very good grasp of pointers, and perhaps even memory management, before trying.
Why not use the icons to get images in LV?
Greg McKaskle
LabVIEW is slower. Additionally, C/C++ is already being used within the image acquisition process to decode the received JPEG image.
Do you know how much overhead is incurred when passing the image into the DLL and back into LabVIEW?
I have had experience with C/C++; we're using LabVIEW because we think it's easier and development is faster. It looks like if the entire image acquisition process were handled by C++ code, fewer resources would be used, since the image would be handed off only once. It would also be easier to use the MJPG method already implemented in C++ rather than rewriting it in LabVIEW (and finding/compiling a library that supports progressive JPEG; unless I'm mistaken, the images in the MJPG stream are progressive and cannot be handled by the library function called by LabVIEW).
thx,
-jonathan
> LabVIEW is slower. Additionally, C/C++ is already being used within the image acquisition process to decode the received JPEG image.
> Do you know how much overhead is incurred when passing the image into the DLL and back into LabVIEW?
So, how did you reach that conclusion? Both LV and the WPI C/C++ code call into the exact same .out file. They pass in a pointer to a string buffer and an image pointer where they'd like the pixels to be stored. They have the same overhead, which is minimal. Similarly, the image processing makes calls into the .out to carry out color thresholds, particle reports, etc. Again, the overhead will be the same: minimal. The heavy lifting of per-pixel work is already written, tuned, and in the .out.
> I have had experience with C/C++; we're using LabVIEW because we think it's easier and development is faster. It looks like if the entire image acquisition process were handled by C++ code, fewer resources would be used, since the image would be handed off only once. It would also be easier to use the MJPG method already implemented in C++ rather than rewriting it in LabVIEW (and finding/compiling a library that supports progressive JPEG; unless I'm mistaken, the images in the MJPG stream are progressive and cannot be handled by the library function called by LabVIEW).
The image is just a pointer. Passing it as a parameter means passing just four bytes. If you were doing this in C++, you’d be using a pointer too, four bytes pointing to thousands of pixels. No difference in performance.
The C++ code uses a different TCP command to get the images from the camera. Initially LV used the same, but later switched to SW timed acquisition. I’ve compared them side-by-side, and they are different but equivalent. If there’d been time, both languages would have supported both techniques for getting the images. Here are the tradeoffs.
The MJPG approach has the advantage of keeping a TCP session alive and having the camera decide when to start acquiring each image. It has the disadvantage of letting the camera send more images than you can process, leading to a TCP buffer of many images, which will appear as a lag between things happening in the real world and when they are sensed by the robot. This happened again and again during prototyping. The WPI C implementation keeps a parallel task running to consume the TCP traffic even if you don't use the results. This should keep the buffer emptied.
The LV code requests a single JPEG, and the camera responds and closes the session each time. The disadvantage is a small amount of additional TCP overhead. The advantage is that there cannot be any buffered images, since the cRIO initiates each image acquisition. The TCP overhead wasn't noticeable in side-by-side tests.
The images returned from the camera are not progressive. Whether the program requests an MJPG stream or a single JPEG, each image will arrive as a single packet containing a JPEG frame. The exact same decode call is made to parse the JPEG stream and turn it into image pixels.
There are plenty of differences, but performance really isn’t one of them. I hope this helped. Any other reasons why LabVIEW is slower?
Better yet, any measurements?
Greg McKaskle