Tower Tracker 1.0

Team 3019 graciously presents their vision tracking program for everybody to use and borrow! Now, instead of dreading the perils of computer vision, YOU can bring your team the joy of a robot that can track the target with ease! No more GRIP crashes or weird deploys. It can calculate fun things such as distance to target and angle to target, which can be used to auto-align and auto-aim!
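
For the curious, the distance and angle numbers come from simple pinhole-camera math on the target's bounding box. Here is one common way to do it; the field-of-view and target-size constants below are assumed values for illustration, not necessarily what the released code uses:

public class TargetMath {
    // All of these constants are assumed values for illustration only.
    static final double IMAGE_WIDTH_PX = 640;
    static final double HORIZONTAL_FOV_DEG = 67.0;     // assumed camera field of view
    static final double TARGET_WIDTH_FT = 20.0 / 12.0; // assumed 20 in wide target

    /** Horizontal angle from the camera centerline to the target, in degrees (small-angle approximation). */
    static double angleToTarget(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }

    /** Approximate distance to the target from the apparent width of its bounding box. */
    static double distanceToTarget(double boxWidthPx) {
        double halfFovRad = Math.toRadians(HORIZONTAL_FOV_DEG / 2.0);
        return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX) / (2.0 * boxWidthPx * Math.tan(halfFovRad));
    }
}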

If you are going to modify the code, all I ask is that you give me and my team credit in a comment at the top of the code, and comment with your suggestions and/or your praise!

To install:

  • download OpenCV 3.1 from here
  • download the NetworkTables 3.0 jar
  • make a new project in Eclipse
  • add OpenCV and NetworkTables as user libraries to the build path of your new project
  • copy opencv_ffmpeg310_64.dll from C:\Path o\opencv\build\bin
    to C:\Windows\System32
  • add the code in a class named TowerTracker.java (see the sketch after this list)
  • when you're ready to export, export the .jar file as a runnable jar
  • move the .jar to a folder similar to this
  • run the .jar from a command prompt window with “java -jar c:\Path o\TowerTracker.jar”
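
For reference, here is a rough idea of the skeleton the code drops into (the body of main is left out; the important part is loading the OpenCV native library before anything else touches OpenCV):

import org.opencv.core.Core;

public class TowerTracker {
    // The OpenCV native library (opencv_java310.dll / .so) has to be
    // loadable before any other OpenCV class is touched, otherwise you
    // get an UnsatisfiedLinkError at startup.
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) {
        // camera capture, HSV thresholding, contour math, and the
        // distance/angle output go here
    }
}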

The code is just an example of what it can do; I can add NetworkTables stuff soon, but I thought I would publish it first!
GitHub link

Want to see an example of what it can output?
Here you go!

How it works: using an Axis camera or mjpg-streamer, you can take a webcam stream and process the images with an OpenCV program that runs on the driver station computer. The program can be modified to run on a coprocessor that feeds the roboRIO directly, for even better results, since NetworkTables only updates at 10 Hz while the camera stream runs at 30 Hz…the program could also easily be ported to C++ or Python, and would probably run better there, since C++ and Python are far better supported with OpenCV than Java is.
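
A minimal sketch of that flow on the driver station side; the stream URL and the HSV threshold values below are placeholders, not the ones from the released code:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class StreamProcessor {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) {
        // placeholder URL for an Axis camera / mjpg-streamer feed
        VideoCapture capture = new VideoCapture();
        capture.open("http://axis-camera.local/mjpg/video.mjpg");

        Mat frame = new Mat(), hsv = new Mat(), mask = new Mat();
        while (capture.isOpened() && capture.read(frame)) {
            // threshold for the retroreflective tape (HSV bounds are assumed)
            Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
            Core.inRange(hsv, new Scalar(58, 0, 109), new Scalar(93, 255, 240), mask);

            // grab each candidate contour's bounding box
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            for (MatOfPoint contour : contours) {
                Rect box = Imgproc.boundingRect(contour);
                // box.x + box.width / 2.0 and box.width feed the
                // angle/distance math from here
            }
        }
        capture.release();
    }
}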

Thank you!

We’re looking at non-vision tracking options this year since the retro-tape is so high up. But we’ll take a look at this code, too.

WOW! Awesome work! One question: can this be deployed to run on the roboRIO?

Where do I install the NetworkTables 3.0 jar?

Yes! The only problem with running it on the roboRIO is that the vision tracking will take up too much of the roboRIO's resources and might lag out the robot. If you are going to do this, I would suggest using a Linux board such as a Raspberry Pi 2 or an ODROID-C1+; they both run Linux, so they have similar interfaces and more support. This will also allow you to do much more advanced tracking, such as real-time object tracking and detection.

NetworkTables goes in as a user library (tutorial) in your Eclipse project; you will not need to do anything extra for it when you export the .jar, unlike for OpenCV.
When you export the .jar file, put it in a folder that looks similar to this, where opencv_java310.dll could be opencv_java310.so for Linux.
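
If you'd rather not fight with java.library.path, one alternative (purely illustrative, not something the released code necessarily does) is to load the file sitting next to the .jar by its full path:

import java.io.File;

public class NativeLoader {
    // Hypothetical helper, not part of the released code: load the OpenCV
    // native library that sits next to the .jar by its absolute path,
    // instead of relying on java.library.path.
    static void loadOpenCvFromWorkingDir() {
        boolean windows = System.getProperty("os.name").toLowerCase().contains("win");
        // Linux builds of the library usually carry a "lib" prefix on the file name
        String fileName = windows ? "opencv_java310.dll" : "libopencv_java310.so";
        System.load(new File(fileName).getAbsolutePath());
    }
}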

Could this run on a C.H.I.P. computer?

As long as it can run Java, it can run it…so theoretically, yes. But that begs the question of whether you would WANT to run it on a C.H.I.P.…why wouldn't you upgrade to at least a Raspberry Pi or an ODROID? At $35 they provide at least 4x the processing power, which makes every ounce of difference if you want real-time video processing…on a C.H.I.P. you could get away with only one or two frames…

I'm not an expert on multithreading, but would you still experience the lag if you ran the code on a separate thread on the roboRIO?

Thank you for this code, I have ported it to C++. I am running it on a separate thread, but just with a static image for now.

I am not an expert either, which is why it is not multithreaded, but my understanding of the roboRIO is that it already uses both cores to run the robot code, so I think it still might lag. Of course, why not just test it and get back to me? It's an open source project.
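
If anyone wants to try that experiment, here is a bare-bones way to push the tracking loop onto its own thread (the names are illustrative; this is not code from the project):

public class VisionThreadExample {
    // Illustrative only: run the tracking loop off the main robot thread.
    // Note that a second thread does not add CPU; it still shares the
    // roboRIO's two cores with the rest of the robot code.
    public static void startVisionThread() {
        Thread visionThread = new Thread(() -> {
            while (!Thread.interrupted()) {
                // grab a frame, threshold, find contours, publish the results
                try {
                    Thread.sleep(50); // ~20 Hz, so the robot loop gets time too
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        visionThread.setDaemon(true);
        visionThread.start();
    }
}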

Thank you so much for making your Java vision tracking solution public. For our team, vision tracking seemed extremely daunting, but this made it a realistic task.

I am trying to run the jar, but I get a couple of errors. I may have added the NetworkTables user library incorrectly.
The steps I took are as follows: I made a new Java project and added the user library for OpenCV following this tutorial: http://docs.opencv.org/2.4/doc/tutorials/introduction/java_eclipse/java_eclipse.html.
Then I added the NetworkTables user library from this directory: C:\Users\Curtis Johnston\wpilib\java\current\lib.
When I try to run the executable jar from the command prompt, I get the following errors.


platform: /Windows/amd64/
Exception in thread "main" java.lang.UnsatisfiedLinkError: no ntcore in java.library.path
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)
	at java.lang.Runtime.loadLibrary0(Runtime.java:870)
	at java.lang.System.loadLibrary(System.java:1122)
	at edu.wpi.first.wpilibj.networktables.NetworkTablesJNI.<clinit>(NetworkTablesJNI.java:57)
	at edu.wpi.first.wpilibj.networktables.NetworkTable.initialize(NetworkTable.java:42)
	at edu.wpi.first.wpilibj.networktables.NetworkTable.getTable(NetworkTable.java:176)
	at testing.TowerTracker.main(TowerTracker.java:80)

Yeah, I will. I'm using static images now, but tonight I can set up some tape and test.

Again, thank you for this code, it is very helpful.

Would you be open to sharing the C++ version?

For some reason FIRST still distributes NetworkTables 2.0; you are looking for NetworkTables 3.0.
The specific file you want is edu.wpi.first.wpilib.networktables.cpp:NetworkTables:3.0.0-SNAPSHOT.

There it has the instructions for downloading the newest .jar file for NetworkTables…I made the same mistake when making this.

The roboRIO has two cores and a modern linux scheduler. All processes and threads will be assigned to a processor core based on priority and history of execution. The default robot code doesn’t use the entire roboRIO to execute, and in fact can be made much lighter and efficient if that is what the team chooses to do.

It is quite easy to consume an entire core on any computer by writing one loop without a wait or delay of some sort. At that point, you can add more cores or fix the problem.
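
In code terms, the difference is just whether the loop ever yields; a generic illustration:

public class LoopExample {
    // checkSensors() stands in for whatever work the loop actually does.
    static void checkSensors() { /* read inputs, do some math */ }

    // This version pins an entire core: it never waits, so it spins as
    // fast as the CPU allows.
    static void busyLoop() {
        while (true) {
            checkSensors();
        }
    }

    // This version is nearly free by comparison: it hands 20 ms back to
    // the scheduler on every pass.
    static void politeLoop() throws InterruptedException {
        while (true) {
            checkSensors();
            Thread.sleep(20);
        }
    }
}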

There are intentionally many ways to approach the vision processing challenge, and the tradeoffs are as team-based as technical. I fully expect to see awesome processing based on the DS laptop, based on coprocessors, and based on just the roboRIO. None of these are, in my opinion, a bad approach. And of course there are teams who will solve the challenge with no camera at all.

By the way, the DS shows you the CPU trace of the roboRIO in realtime. Just click on the second or third tab on the right side. This info is also logged and can be reviewed using the Log File Viewer after a practice or a match. If the robot feels sluggish, you can try to identify if it was because you maxed the CPU or something else.

Greg McKaskle

https://github.com/team2053tigertronics/2016Code/tree/master/Robot2016/src/vision

It's a bit messy, and the algorithm is a bit different, but you can change it easily. Also, I'm getting a resource initialization error after a while. It might be an array error.

Also interested in running this on the roboRIO and in C++. Has anyone had any luck compiling a C++ WPILib robot program with some OpenCV in it? If so, I'd love to hear how you did it.

Thanks for sharing that! How did you get the OpenCV libraries onto the roboRIO? I've been having some trouble with that.

Wow! Amazing!
I just wish that we had something like that in LabVIEW :confused:

Thanks! I will see what we can do and get back in touch with you about any improvements we can make.
Ideally, I'd love to see this processed on board with a Raspberry Pi or an Arduino board, but that's version 2.0 stuff.

It works with LabVIEW! All you have to do is output to a NetworkTable, and from there you can get the values in LabVIEW.
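
A quick sketch of that handoff on the Java side (the table and key names are placeholders; the LabVIEW side just reads whichever keys you publish):

import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class PublishExample {
    public static void main(String[] args) {
        // run as a NetworkTables client; the robot address is a placeholder
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-3019-frc.local");
        NetworkTable table = NetworkTable.getTable("SmartDashboard");

        // publish the vision results; the LabVIEW dashboard (or anything
        // else on the network) reads the same keys from the same table
        table.putNumber("distanceToTarget", 8.5);
        table.putNumber("angleToTarget", -3.2);
    }
}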