Translating GRIP to NIVision

So I’m rather desperately trying to crank out some vision processing code in Java so that we can test it on the robot before bag and tag tonight. The problem is that we had written all of our vision code in GRIP before finding out (yesterday) that GRIP is incompatible with USB cameras. The easiest option at this point seems to be translating the pipeline into Java using the built-in NIVision classes from WPILib, but I’m a bit confused about how some of its methods work because of the lack of documentation out there.

I was wondering if anyone might be able to point me toward the relevant NIVision classes for each step of the processing we’re attempting, given the picture of our old GRIP pipeline I attached. Thanks!

Here’s the picture.





GRIP isn’t incompatible with USB cameras. Try running the v1.3-rc1 release, which fixes the USB camera problems people most commonly hit in v1.2.
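If you do stick with NI Vision, the usual threshold-and-particle steps are all in WPILib’s `com.ni.vision.NIVision` binding. Here’s a rough sketch of what an HSV-threshold step followed by particle (blob) analysis looks like; the HSV ranges are placeholder guesses, not values read from your pipeline picture, and this only runs on the roboRIO where the NI libraries are installed:

```java
import com.ni.vision.NIVision;
import com.ni.vision.NIVision.Image;

public class NIVisionSketch {
    // Placeholder HSV ranges -- substitute the values from your GRIP
    // "HSV Threshold" step.
    static final NIVision.Range HUE = new NIVision.Range(60, 100);
    static final NIVision.Range SAT = new NIVision.Range(90, 255);
    static final NIVision.Range VAL = new NIVision.Range(20, 255);

    Image frame  = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);
    Image binary = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_U8, 0);

    public void process() {
        // GRIP's "HSV Threshold": pixels inside all three ranges become 255
        NIVision.imaqColorThreshold(binary, frame, 255,
                NIVision.ColorMode.HSV, HUE, SAT, VAL);

        // GRIP's "Find Contours" / "Filter Contours" roughly maps to
        // particle analysis: count particles, then measure and filter them.
        int particles = NIVision.imaqCountParticles(binary, 1);
        for (int i = 0; i < particles; i++) {
            double area = NIVision.imaqMeasureParticle(binary, i, 0,
                    NIVision.MeasurementType.MT_AREA);
            // filter on area, bounding box, etc. here
        }
    }
}
```

The `imaqMeasureParticle` measurement types (`MT_AREA`, `MT_BOUNDING_RECT_LEFT`, and so on) are where you’d reproduce whatever contour filtering your GRIP pipeline did.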

Also, if you’re translating GRIP into code, OpenCV is easier than NI Vision, since most GRIP operations are thin wrappers around OpenCV functions. We have a reference table of which OpenCV function each operation uses, with links to the OpenCV documentation.
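For example, a common GRIP pipeline of HSV Threshold followed by Find Contours maps onto OpenCV’s Java bindings roughly like this. The HSV bounds here are placeholders, since I can’t read the actual values from your pipeline picture:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class GripEquivalent {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static List<MatOfPoint> process(Mat frame) {
        // Convert BGR camera frame to HSV for thresholding
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

        // GRIP "HSV Threshold" -> Core.inRange
        // (placeholder bounds -- use your pipeline's values)
        Mat binary = new Mat();
        Core.inRange(hsv, new Scalar(60, 90, 20),
                          new Scalar(100, 255, 255), binary);

        // GRIP "Find Contours" -> Imgproc.findContours
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}
```

Each GRIP step corresponds one-to-one to a call like this, so once you find the operation in the reference table the translation is mostly mechanical.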