Vision Processing in Java

Hello, I am currently the head of vision processing in my team's programming department and I have come across a few issues. First, let me clarify that my team is Java exclusive, we switched to the Eclipse IDE this year, and we have successfully been able to capture an image, save it to the roboRIO, and then process it to find the yellow tote. My main problem is that I cannot find a way to isolate the location of a tote in a processed image once the code has acknowledged that a tote is in view. I am trying to find the center coordinates of the tote in the image. It would be EXTREMELY helpful if there were an API reference for the NIVision tools in Eclipse, because the Javadoc generated by Eclipse DOES NOT provide adequate information. Thanks for any help.
Team 2035, Robo-Rockin’Bots.
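
For the specific question of getting the tote's center coordinates: once the thresholded result is a binary NIVision Image, the particle-measurement functions in the Java wrapper can report a center of mass for each blob. Below is a minimal sketch, assuming the binary image already exists and that the largest particle is the tote; the class is the com.ni.vision wrapper shipped with the 2015 WPILib, the helper name is made up for illustration, and the measurement enum constants follow the C API names (double-check the exact spellings in the wrapper source).

```java
import com.ni.vision.NIVision;
import com.ni.vision.NIVision.Image;
import com.ni.vision.NIVision.MeasurementType;

public class ToteLocator {
    /**
     * Returns {centerX, centerY} in pixels for the largest particle in a
     * binary image, or null if no particles were found. Assumes the image
     * has already been color-thresholded so the tote is a white blob.
     */
    public static double[] findLargestParticleCenter(Image binaryImage) {
        int numParticles = NIVision.imaqCountParticles(binaryImage, 1);
        if (numParticles == 0) {
            return null; // nothing detected
        }

        // Find the particle with the greatest area (assumed to be the tote).
        int largest = 0;
        double largestArea = 0;
        for (int i = 0; i < numParticles; i++) {
            double area = NIVision.imaqMeasureParticle(
                    binaryImage, i, 0, MeasurementType.MT_AREA);
            if (area > largestArea) {
                largestArea = area;
                largest = i;
            }
        }

        // Center of mass of that particle, in image pixel coordinates.
        double centerX = NIVision.imaqMeasureParticle(
                binaryImage, largest, 0, MeasurementType.MT_CENTER_OF_MASS_X);
        double centerY = NIVision.imaqMeasureParticle(
                binaryImage, largest, 0, MeasurementType.MT_CENTER_OF_MASS_Y);
        return new double[] { centerX, centerY };
    }
}
```

If you would rather have a bounding box than a center of mass, the same imaqMeasureParticle call works with the MT_BOUNDING_RECT_LEFT/TOP/RIGHT/BOTTOM measurements used in the 2015 vision samples.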

I was having fun rewriting the WPILib CameraServer and making a Camera API (with functions like the ones you are describing, but more generalized), and I was lucky enough to find this early on:

http://www.ni.com/pdf/manuals/371266e.pdf

It describes several examples, and while most of the material does not pertain to finding a yellow tote's center, it is helpful nonetheless. Look around for the NIVision IMAQdx and imaq references and documentation. It is bothersome that the Java NIVision wrapper is completely undocumented and 30,000+ lines long, but it contains a lot.

Note: Not all NIVision functions are available in the Java Wrapper.

I have not done so myself, but looking at WPILib's vision processing classes may be helpful. I am not sure, but I believe you can access the documentation through the Start menu if you have NIVision installed. I do most of my development at home, so I have not checked on the team laptop.
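
To illustrate what those undocumented wrapper calls look like in practice, here is a minimal sketch of the HSV color-threshold step using the com.ni.vision.NIVision class. The hue/saturation/value ranges below are placeholder numbers, not tuned values for the yellow tote, and the class and method names around them are assumptions for illustration.

```java
import com.ni.vision.NIVision;
import com.ni.vision.NIVision.Image;

public class YellowThreshold {
    // Placeholder HSV ranges -- tune these against your own saved images.
    private static final NIVision.Range HUE = new NIVision.Range(20, 40);
    private static final NIVision.Range SAT = new NIVision.Range(100, 255);
    private static final NIVision.Range VAL = new NIVision.Range(80, 255);

    /**
     * Thresholds a color frame into a binary image: pixels inside the HSV
     * ranges become 255, everything else becomes 0.
     */
    public static Image threshold(Image colorFrame) {
        Image binary = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_U8, 0);
        NIVision.imaqColorThreshold(binary, colorFrame, 255,
                NIVision.ColorMode.HSV, HUE, SAT, VAL);
        return binary;
    }
}
```

The binary image this produces is what the particle-measurement sketch above expects as input.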

If you are able to show me what you did for vision, I would really appreciate it. The first question I should ask, though, is whether you used OpenCV or the NI Vision libraries. If you used the NI Vision libraries, I would be very interested to see what you have done. I have been trying to figure this out since the beginning of the build season, and I recently gave up because it was too much of a pain in the behind, and we could achieve the same thing using encoders. If you are willing to share some of your code, assuming it uses the NI Vision libraries, I would be most appreciative.

Check out some examples here:

Yes, we have indeed been using the NIVision libraries. We have been using a modified version of the provided sample code for processing color in images (not reflective tape). Here is the link to our GitHub repository with regard to vision.
NOTE: we have two commands for vision:
1) Vision: captures images and saves them to the roboRIO.
2) ProcessImage: processes the image saved by the Vision command.
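
For comparison, here is a rough sketch of what the capture side of a Vision-style command can look like with the IMAQdx calls from the same wrapper. The camera name "cam0" is an assumption, and the save-to-disk step that their Vision command performs is omitted here because the exact file I/O wrapper call is not shown in this thread.

```java
import com.ni.vision.NIVision;
import com.ni.vision.NIVision.Image;
import edu.wpi.first.wpilibj.CameraServer;

public class CaptureFrame {
    private final int session;
    private final Image frame;

    public CaptureFrame() {
        // "cam0" is the usual name for a USB camera on the roboRIO; adjust as needed.
        frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);
        session = NIVision.IMAQdxOpenCamera("cam0",
                NIVision.IMAQdxCameraControlMode.CameraControlModeController);
        NIVision.IMAQdxConfigureGrab(session);
    }

    /** Grabs one frame from the camera and sends it to the dashboard. */
    public Image grab() {
        NIVision.IMAQdxGrab(session, frame, 1);
        CameraServer.getInstance().setImage(frame); // optional: view on the dashboard
        return frame;
    }
}
```

A grabbed frame like this would then be handed to the threshold and particle-measurement steps sketched earlier, whether it is processed immediately or first saved to the roboRIO as their ProcessImage command does.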