Target Tracking

I am new to FRC Java and see that you must write code to use a USB webcam. I looked through all of the FRC Java and camera tutorials but could not find any mention of what the actual code is. Searching on here turned up some help, but not much on basic camera use. Thanks in advance if you are able to help.

I have a couple of questions:

  1. Where do you plan to run the code? On the roboRIO, on the driver station (I would advise against this), or on a co-processor?

If on the roboRIO, just use TowerTracker and run it as a separate task on the RIO.

If on a co-processor: we have some code that runs on a Raspberry Pi and transfers the necessary data through NetworkTables.

If on the driver station, use TowerTracker and run it with NetworkTables.

  2. Also, are you talking about tracking targets, or just getting a feed to the driver station?

I plan on running this code on the roboRIO, and I would like to use it to line up our shooter with the upper tower goal in this year’s game.
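As background for the lining-up part: once vision gives you the target’s center x-coordinate in the image, turning to face it is just converting that pixel offset into a rough angle error. A minimal, dependency-free sketch of that math (the image width and field-of-view values here are assumptions for a typical 640x480 USB camera — measure your own camera’s FOV; all names are hypothetical, not TowerTracker’s):

```java
// Sketch: convert the target's pixel x-coordinate into a horizontal
// angle error the robot can turn through to line up the shooter.
public class AimMath {
    static final double IMAGE_WIDTH = 640.0;    // pixels (assumed camera resolution)
    static final double CAMERA_FOV_DEG = 60.0;  // horizontal field of view (assumed; measure yours)

    /** Degrees to turn: negative means the target is left of center. */
    static double angleErrorDeg(double targetCenterX) {
        double offsetPx = targetCenterX - IMAGE_WIDTH / 2.0; // pixels from image center
        return offsetPx * (CAMERA_FOV_DEG / IMAGE_WIDTH);    // linear approximation of degrees/pixel
    }

    public static void main(String[] args) {
        System.out.println(angleErrorDeg(320.0)); // target centered -> 0.0
        System.out.println(angleErrorDeg(480.0)); // target right of center -> 15.0
    }
}
```

The linear degrees-per-pixel approximation is good enough near the image center; for wide-angle lenses you would use the tangent-based formula instead.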

Alright, so my recommendation would be TowerTracker.

CD Link:

Github Link:

This code is meant to be run on the driver station, but it can easily be ported to run on the roboRIO. To do this, you need to set up OpenCV on the roboRIO; instructions can be found here. The instructions for setting up Eclipse to compile OpenCV code are written for C++ (by me), but I’m not sure how you would do it for Java.
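For context on what you would be porting: the core of a TowerTracker-style pipeline is a color (HSV) threshold on each frame followed by finding the bounding box of the surviving pixels. A dependency-free sketch of that second step, using a plain boolean mask in place of OpenCV Mats (all names here are hypothetical, for illustration only — not TowerTracker’s actual code):

```java
// Sketch: find the bounding box of "target" pixels in a thresholded frame.
// Real code would produce the mask by HSV-thresholding the camera image;
// here the mask simply marks which pixels passed the green threshold.
public class TargetBox {
    /** Returns {minX, minY, maxX, maxY} of set pixels, or null if none. */
    static int[] boundingBox(boolean[][] mask) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < mask.length; y++) {
            for (int x = 0; x < mask[y].length; x++) {
                if (mask[y][x]) {
                    minX = Math.min(minX, x); minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
                }
            }
        }
        return maxX < 0 ? null : new int[] {minX, minY, maxX, maxY};
    }

    public static void main(String[] args) {
        boolean[][] mask = new boolean[4][6];
        mask[1][2] = mask[1][3] = mask[2][2] = true; // a small "target" blob
        int[] box = boundingBox(mask);
        System.out.println(box[0] + "," + box[1] + "," + box[2] + "," + box[3]); // 2,1,3,2
    }
}
```

The center of that box is what you would feed to your aiming code (or publish over NetworkTables).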

Thanks for recommending my program :slight_smile:

My advice is to see if you have an Axis camera. If not, then with a USB camera you will have to run mjpg-streamer on the RIO to stream the camera to a web interface, where TowerTracker can take the video stream and process it.

You don’t have to. If he is running it on the roboRIO, you can just plug in a USB camera and change

videoCapture = new VideoCapture();

to

videoCapture = new VideoCapture(0);

This opens the USB camera plugged into the roboRIO. It works with virtuald’s build of OpenCV for the roboRIO. However, the version he built doesn’t work with MJPG streams; if you really want to use those, I would use the OpenCV 2 build that another team compiled.

Yes, however the reason it runs on the driver station is that MOST of the laptops teams use as driver stations are much faster than any Raspberry Pi or the RIO, and you want to process frames as fast as possible. If the vision code is done properly, the gap between real-time on-robot processing and a driver-station program is small; it comes down to which robots use the results more effectively. Also, for simplicity, setting up a driver-station program on Windows is easier than navigating Linux on the RIO, especially because not everybody wants to mess with the RIO or Linux.