Today I was working on some vision tracking code and I was able to load an image of the rectangle on the backboard into NI Vision Assistant and apply HSL thresholding operations on it. I then took those values and used them in my C++ code (something like Threshold HSLThres(values)) and applied them to an image grabbed from the camera. At the end of the code I was left with a binary image that should only highlight the reflective tape that forms the rectangles.
Now, where I’m having a little trouble is in seeing this processing happen in real time. Is this possible using the default Dashboard (basically, seeing the binary image where the camera feed is supposed to be)? I have looked into the DashboardDataSender but I am unsure of how to call it or what parameters to pass to its SendVisionData function.
Also, once this processing is done, how can I tell the camera to “lock on” to this target?
Lastly, are there any other useful methods I could use to make the image processing more efficient? While I was coding this, I found that the cRIO would get bogged down and the robot started to lag. This may be due in part to bad coding and memory allocation, but I am wondering how exactly some teams offload their camera processing to their Driver Station laptops.
What size image were you processing, and how many images per second? It is possible to bog down any computer with image processing. The cRIO is pretty capable, but clearly has limits. If you use smaller images and limit the frames per second, you can process the images in a parallel thread without interfering with teleop. If you put it into teleop, you will definitely slow teleop, as simply decoding the JPEG may take 20 ms.
Processing the images on the laptop is also pretty straightforward. NI Vision has the same entry points in a Windows DLL as in the .out library on the cRIO, in fact a superset of entry points.
As for viewing the image as you are processing it: this will also be easier on the PC. I don’t know the details, but I’m pretty sure that NI Vision has functions for drawing into a window. Your code can open the window and share the handle.
Alright, we will trim down the image size as you mentioned and hopefully that will clear some of the memory issues we were having.
Now, when referring to NI Vision, are you talking about the Vision Assistant or the nivision.h and nimachinevision.h files? Also, could you elaborate a little on the entry points on Windows? How exactly would you create a new window on your Driver Station to view the processing of the images?
I was referring to the Vision libraries, which are what LV calls into for vision and what C programs on the cRIO call into for vision. Java and Python can call into them as well, provided the wrappers are written. The libraries also exist on Windows. Finally, Vision Assistant is written in LV and uses the same libraries. Vision Assistant is more of an interactive tool for exploring; the libraries are for carrying out discrete steps.
When I was looking at the documentation for NI Vision for CVI, it had entry points for drawing into a window, and perhaps for creating one. Those will not work on the cRIO, but will work if you are running on a Windows PC.