#4   07-10-2016, 13:56
jreneew2
Alumni of Team 2053 Tigertronics
AKA: Drew Williams
FRC #2053 (TigerTronics)
Team Role: Programmer
 
Join Date: Jan 2014
Rookie Year: 2013
Location: Vestal, NY
Posts: 189
Re: Moved to JAVA - help with vision tracking

Quote:
Originally Posted by Ben Wolsieffer
I wrote the code for my team's (2084) vision system last year, which ran in Java on a Jetson TK1. We used the official OpenCV Java wrappers for most of our vision code. You can use any language OpenCV supports without much performance difference, because the computationally expensive algorithms run as native code either way.

I did implement some parts of the algorithm in C, called from Java using the JNI. This allowed us to use the OpenCV CUDA (GPU) libraries, which are not available in the Java wrapper.

We used NetworkTables to send the distance and heading of the goal to the robot (we interfaced our NavX directly with the Jetson).

If you want to use the Jetson, it isn't that hard. It comes with Ubuntu preinstalled, and you can connect it to a monitor, mouse, and keyboard and use it like a normal computer if you want. You will want to get familiar with the command line and SSH, because that will make things much easier once the board is mounted on the robot. We mounted ours in a 3D-printed case and powered it with a DC-DC converter from Pololu (I don't have a link at the moment).
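For anyone following along, here is a minimal sketch of what a pipeline built on the OpenCV Java wrappers (as described in the quote) might look like. It is not team 2084's actual code; the HSV bounds, camera index, and class name are placeholders, and the VideoCapture import lives in org.opencv.highgui on OpenCV 2.4 rather than org.opencv.videoio.

Code:
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture; // org.opencv.highgui.VideoCapture on OpenCV 2.4

public class VisionSketch {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        VideoCapture camera = new VideoCapture(0);
        Mat frame = new Mat(), hsv = new Mat(), mask = new Mat();

        while (camera.read(frame)) {
            // Threshold in HSV; these bounds are placeholder values for green retroreflective tape
            Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
            Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

            // Find the external contours of the thresholded blobs
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                                 Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // Keep the largest bounding box as the goal candidate
            Rect best = null;
            for (MatOfPoint c : contours) {
                Rect r = Imgproc.boundingRect(c);
                if (best == null || r.area() > best.area()) best = r;
            }
            if (best != null) {
                double centerX = best.x + best.width / 2.0;
                System.out.println("target center x = " + centerX);
            }
        }
    }
}

The heavy lifting (cvtColor, inRange, findContours) executes inside OpenCV's native libraries, which is why the choice of glue language makes little performance difference.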
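The JNI piece mentioned in the quote keeps the CUDA work on the native side; the Java half is little more than a declaration and a library load. A rough sketch, with a hypothetical library and method name (not 2084's actual interface):

Code:
// Java half of a JNI bridge; the CUDA-accelerated OpenCV calls live in a
// separately compiled native library (hypothetically libgpuvision.so).
public class GpuVision {
    static {
        System.loadLibrary("gpuvision"); // looks for libgpuvision.so on java.library.path
    }

    // Implemented in C/C++ against OpenCV's GPU modules. Takes the native
    // address of an existing Mat so no pixel data is copied across the boundary.
    public static native double processFrame(long matNativeAddr);
}

// Called from the Java pipeline with frame.getNativeObjAddr(), which every
// OpenCV Java Mat exposes for exactly this kind of hand-off.

The matching C header is generated from this class with javah and the implementation is linked against the GPU-enabled OpenCV build on the Jetson.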
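Publishing the distance and heading over NetworkTables from a coprocessor looked roughly like this with the 2016-era Java API (edu.wpi.first.wpilibj.networktables); the table name, keys, and values here are just examples:

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionPublisher {
    public static void main(String[] args) {
        // The coprocessor runs as a NetworkTables client; the roboRIO is the server
        NetworkTable.setClientMode();
        NetworkTable.setTeam(2053);          // or setIPAddress("roborio-2053-frc.local")
        NetworkTable.initialize();
        NetworkTable table = NetworkTable.getTable("vision");

        // Inside the vision loop, publish whatever the pipeline computed
        double distanceInches = 96.0;        // placeholder values
        double headingDegrees = -3.5;
        table.putNumber("distance", distanceInches);
        table.putNumber("heading", headingDegrees);
    }
}

The robot-side code reads the same keys back out of the "vision" table.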
What kind of FPS were you getting? Do the GPU calls make a big difference? We ran our vision code on a Raspberry Pi and wrote it in C++.