Most recommended way/application for Image Processing?

Hey, my name is Yair from team M.A 5951; I'm the head of our programming team.
This year we want to learn vision processing, but we don't know what to use. Should we use a separate application, or the NI Vision libraries, for the image processing? If you guys could share your experience, we would be very thankful.

A word of advice from what we did this year:

We did our vision processing on the dashboard this year. If you go that route, be prepared to deal with latency: by the time a frame has been captured, sent to the laptop, and processed, it's stale. That means calculating a target angle from the image and then using the gyro, not the camera, as the sensor you actually close the loop on.
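To make "use the gyro as your sensor" concrete, here's a minimal sketch of the idea. The function names, the pinhole-camera assumption, and the parameters (image width, horizontal field of view) are my own for illustration, not from any FRC library:

```python
import math

def target_bearing_degrees(target_px_x, image_width_px, horizontal_fov_deg):
    """Convert a target's pixel x-coordinate into a bearing (degrees)
    relative to the camera's optical axis, assuming a pinhole camera."""
    # Pixel offset from image center, normalized to [-1, 1]
    offset = (target_px_x - image_width_px / 2.0) / (image_width_px / 2.0)
    half_fov = math.radians(horizontal_fov_deg / 2.0)
    return math.degrees(math.atan(offset * math.tan(half_fov)))

def turn_setpoint(gyro_heading_at_capture_deg, target_px_x, image_width_px, fov_deg):
    """Latency workaround: pair the bearing with the gyro heading recorded
    when the frame was *captured*, so the absolute setpoint stays valid
    even if processing takes a few hundred milliseconds."""
    return gyro_heading_at_capture_deg + target_bearing_degrees(
        target_px_x, image_width_px, fov_deg)
```

The robot then runs a normal turn-to-heading loop on the gyro toward that setpoint; the camera is only consulted again once the turn settles.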

NI Vision, onboard
NI has its own image processing libraries (meant for industrial automation, of course), and conveniently these come pre-installed on the roboRIO.
Start with Vision Assistant. It’s a graphical image processing pipeline editor where you can load in images saved from your robot. Add a color filter to identify the target, then add some more filters to smooth out the contour and remove noise, and then add a particle measurement step to get the information you need about the target. Then, you can generate C code from Vision Assistant to integrate into your project.
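To make that pipeline concrete (filter → clean up → measure), here's a toy version in Python on a small grayscale grid. This is not NI Vision code (Vision Assistant generates its own C); it just shows the same three steps in miniature:

```python
def threshold(image, lo, hi):
    """Step 1: color/intensity filter -- keep pixels inside [lo, hi]."""
    return [[1 if lo <= px <= hi else 0 for px in row] for row in image]

def erode(mask):
    """Step 2: crude noise removal -- keep a pixel only if all four
    of its neighbors are also set (kills isolated speckles)."""
    h, w = len(mask), len(mask[0])
    def on(r, c):
        return 0 <= r < h and 0 <= c < w and mask[r][c]
    return [[1 if mask[r][c] and on(r-1, c) and on(r+1, c)
                  and on(r, c-1) and on(r, c+1) else 0
             for c in range(w)] for r in range(h)]

def particle_report(mask):
    """Step 3: particle measurement -- area, centroid, and bounding
    box of the remaining set pixels."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, px in enumerate(row) if px]
    if not pts:
        return None
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    return {
        "area": len(pts),
        "center": (sum(rows) / len(pts), sum(cols) / len(pts)),
        "bbox": (min(rows), min(cols), max(rows), max(cols)),
    }
```

In Vision Assistant you wire up exactly these kinds of steps graphically, and the particle report (area, center of mass, bounding box) is what your robot code ultimately consumes.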
Documentation is located in C:\Program Files (x86)\National Instruments\Vision\Help
WPILib also has some wrappers over NI Vision for USB camera capture, image objects, and basic operations.

Pros:
  • Easy to integrate into your code since it runs directly onboard the roboRIO
  • Supported by WPILib
  • Integrates with USB cameras (through IMAQdx)
Cons:
  • Image processing onboard the RIO can be slow
  • From what I’ve seen, NI Vision doesn’t really expose its internal data. For example, I don’t think it’s possible to get the actual point array of a detected particle/contour, only the summary statistics that the analysis functions give
  • If you screw something up, you might get an undocumented error code “21” and nobody will be able to help you :frowning:
  • NI Vision is proprietary. No source code for you! :rolleyes:

OpenCV, anywhere
There are many ways to run OpenCV: onboard the roboRIO, on the driver station laptop, on Android, or on a Raspberry Pi.
OpenCV is an open source vision library written in C++, with bindings for Python and Java (both desktop and Android). IMO, OpenCV is much more sophisticated than NI Vision (right now I’m playing with the perspective solver, something I don’t believe NI Vision has).
Start with GRIP, WPILib’s own graphical OpenCV processing pipeline editor. Similar to Vision Assistant, load in some sample images and add filtering steps, and an analysis step. GRIP can even upload results to NetworkTables so your robot code can easily get them. Unfortunately, GRIP can only deploy to the roboRIO or a Raspberry Pi, so you won’t be able to use it if you want to run OpenCV on Android.
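GRIP publishes its contour report to NetworkTables as parallel arrays (under a table like "GRIP/myContoursReport" with keys like "centerX" and "area"; check your own pipeline's publish step for the exact names, since those are just the defaults I've seen). Your robot code then only has to pick the best contour. Here's a sketch of that selection, with the NetworkTables read left as a hedged comment since it depends on your setup:

```python
def pick_target(center_x, areas, min_area=50.0):
    """Given GRIP's parallel arrays, return the centerX of the largest
    contour whose area clears a noise threshold, or None if none do."""
    best = None
    for cx, area in zip(center_x, areas):
        if area >= min_area and (best is None or area > best[1]):
            best = (cx, area)
    return None if best is None else best[0]

# In robot code you would fetch the arrays from NetworkTables first,
# e.g. with pynetworktables (table path and keys assumed from a
# default GRIP publish step -- verify against your pipeline):
#   table = NetworkTables.getTable("GRIP/myContoursReport")
#   cx = pick_target(table.getNumberArray("centerX", []),
#                    table.getNumberArray("area", []))
```

The returned centerX then feeds the same pixel-offset-to-angle math you would use with any camera.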

Pros:
  • Open source; community support
  • Extremely powerful. If you want to dive into it, there are some very sophisticated features such as 3D reconstruction, object detection, and even face recognition
  • Available in virtually any language you want
  • Runs on virtually any platform
Cons:
  • Steep learning curve if you want to go beyond the basics (i.e. what GRIP offers). And the official tutorials don’t even begin to cover all of the features. Alas, this is the problem that plagues many open source projects :slight_smile: