Moved to Java - help with vision tracking

Hey CD!
I tried to find good documentation or tutorials but failed, so I'm asking here…

We recently decided to move from LabVIEW to Java for numerous reasons. We have already programmed our 2016 robot in Java to perform as we want in all but one thing - the image processing.

We wish to learn how to use 2 things:

  1. Processing on the RIO itself using Java libraries and a regular webcam (say, the kit one).

  2. Using the Jetson TK1 NVIDIA board as a co-processor. I saw the Zebracorn whitepaper, but I really don't know where to start there. I know we need to use Linux and OpenCV for that (Python?), but we are missing some basic guides on how to set it all up, get it communicating, etc.

If anyone could direct us to some kind of material for both of those things, we'll know how to keep going.

Thanks

I would advise against vision tracking with a USB webcam on the RoboRIO, mainly because the RIO's USB ports aren't well optimized in software or hardware, so even having a camera plugged in significantly increases CPU usage. If you want on-robot vision, I'd suggest an IP camera, and it would likely be best to use it with GRIP. 1058 uses GRIP on the Driver Station to process our images for vision tracking.

There are some guides for using GRIP on the GitHub wiki as well as on ScreenSteps.
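I can't share 1058's exact code, but the robot side usually boils down to reading the arrays GRIP publishes to NetworkTables. Here's a rough sketch assuming the 2016 WPILib API and the default "GRIP/myContoursReport" publish step (the table and key names depend on how your pipeline is configured):

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripReader {
    // Table name comes from the "Publish ContoursReport" step in GRIP;
    // "GRIP/myContoursReport" is the default it suggests.
    private final NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

    /** Returns the x-coordinate of the widest contour, or -1 if nothing is visible. */
    public double getTargetCenterX() {
        double[] defaultValue = new double[0];
        double[] centerX = table.getNumberArray("centerX", defaultValue);
        double[] width = table.getNumberArray("width", defaultValue);

        double bestX = -1;
        double bestWidth = -1;
        for (int i = 0; i < centerX.length && i < width.length; i++) {
            if (width[i] > bestWidth) {
                bestWidth = width[i];
                bestX = centerX[i];
            }
        }
        return bestX;
    }
}
```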

I wrote the code for my team's (2084) vision system last year, running on a Jetson TK1. We used Java and the official OpenCV Java wrappers for most of our vision code. You can use any language supported by OpenCV without much performance difference, because the computationally expensive algorithms run as native code either way.
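Our actual pipeline was more involved, but a minimal HSV threshold + contour pass with the Java wrapper looks roughly like this (the HSV range and area cutoff here are placeholder numbers, not our tuned values):

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TargetFinder {
    static {
        // Loads the native OpenCV library that the Java wrapper calls into.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    // Example HSV range for green retroreflective tape; tune for your lighting.
    private static final Scalar LOWER = new Scalar(60, 100, 100);
    private static final Scalar UPPER = new Scalar(90, 255, 255);

    /** Thresholds one BGR frame and returns bounding boxes of candidate targets. */
    public static List<Rect> findTargets(Mat frame) {
        Mat hsv = new Mat();
        Mat mask = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, LOWER, UPPER, mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> targets = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (box.area() > 200) {  // reject small noise blobs
                targets.add(box);
            }
        }
        return targets;
    }
}
```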

I did implement some parts of the algorithm in C, called from Java using the JNI. This allowed us to use the OpenCV CUDA (GPU) libraries, which are not available in the Java wrapper.
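The Java side of that is just a native method declaration plus a library load; the usual trick with the OpenCV Java wrapper is to pass the Mats' native addresses across the JNI boundary. Something along these lines (the library and method names are placeholders, not our exact interface):

```java
public class GpuOps {
    static {
        // Native library built on the Jetson and linked against OpenCV's
        // CUDA (gpu) module; "visiongpu" is just a placeholder name.
        System.loadLibrary("visiongpu");
    }

    /**
     * Declared in Java, implemented in C/C++. The native side reinterprets
     * the addresses as cv::Mat pointers, uploads to cv::gpu::GpuMat, runs
     * the color conversion and blur on the GPU, and downloads the result
     * into the destination Mat.
     */
    public static native void cvtColorAndBlur(long srcMatAddr, long dstMatAddr);

    // Typical call site, using the wrapper's Mat.getNativeObjAddr():
    //   GpuOps.cvtColorAndBlur(frame.getNativeObjAddr(), blurred.getNativeObjAddr());
}
```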

We used NetworkTables to communicate the distance and heading of the goal to the robot (we interfaced our NavX directly with the Jetson).
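On the Jetson side that is just the NetworkTables Java client pointed at the roboRIO. Roughly like this, assuming the 2016 NetworkTables library (the hostname, table, and key names are placeholders):

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class RobotLink {
    private final NetworkTable table;

    public RobotLink() {
        // Run as a client connecting to the NetworkTables server on the roboRIO.
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-2084-frc.local"); // or the RIO's static IP
        NetworkTable.initialize();
        table = NetworkTable.getTable("vision");
    }

    /** Publish the latest measurement so the robot code can read it. */
    public void publish(double distanceInches, double headingDegrees) {
        table.putNumber("distance", distanceInches);
        table.putNumber("heading", headingDegrees);
    }
}
```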

If you want to use the Jetson, it isn't that hard. It comes with Ubuntu preinstalled, and you can connect it to a monitor, mouse, and keyboard and use it like a normal computer if you want. You will want to get familiar with the command line and SSH, because that will make things much easier once you mount the board on the robot. We mounted ours in a 3D-printed case and powered it using a DC-DC converter from Pololu (I don't have a link at the moment).

Thanks!
We’ll start looking into it and post questions if we have any.

What kind of FPS were you getting? Do the GPU calls make a big difference? We ran our vision code on a Raspberry Pi and wrote it in C++.

Without the GPU we were normally getting around 15 fps, and with the GPU it went up to 30 fps (limited by the camera). I never tested how fast it could go if the camera were not limiting it, but one time the frame-rate-limiting code broke and it measured ~75 fps, though that might have been a mistake. I was very surprised by this performance improvement, given that I was only able to run a small portion of the algorithm on the GPU (color conversion and blurring).

Hmm. I’ll try to mess around with the GPU calls. Seems like it could lead to an improvement. However, with our setup, I don’t know if the Raspberry Pi’s GPU will make much of a difference.

You won’t be able to use the Raspberry Pi’s GPU with OpenCV. OpenCV only supports CUDA and OpenCL for GPU acceleration. CUDA only works on NVIDIA GPUs and OpenCL is not supported by the GPU on the Raspberry Pi.