We're trying to improve our NI Vision-based vision processing. We decided to cut the resolution of our image-processing frame from the capture resolution of 640x360 down to 320x180. We hypothesized that this would make our (currently very CPU-intensive) vision run about 4x faster, since halving both dimensions quarters the pixel count.
The issue is, the code we use that was generated from Vision Assistant refuses to take a scaled image. Here is our NI Vision Assistant generated file:
https://github.com/ligerbots/Strongh...geProcessing.c
And here is the call that we identified as failing.
The code we used to scale the image before passing it to the generated IVA_ProcessImage() was:
Code:
imaqScale(processing_frame, camera_captured_frame, 2, 2, IMAQ_SCALE_SMALLER, IMAQ_NO_RECT);
We also tried this instead, and it didn't help:
Code:
imaqResample(processing_frame, camera_captured_frame, width/2, height/2, IMAQ_BILINEAR_FIXED, IMAQ_NO_RECT);
With the scaled image, the call to imaqColorThreshold causes VisionErrChk to jump immediately to Error and exit the processing pipeline.
A call to imaqGetLastError() right after the failure returns 21.
Unfortunately, 21 is not documented anywhere as an NI Vision error code, and imaqGetErrorText(21) returns nothing.
We did find a workaround (capturing from the camera at the lower resolution to begin with), but we'd still like to know what's going wrong here, if anyone can offer some insight.