Vision Tracking Assistance

Hi, I am currently a part of the programming team on FRC Team #4085. I am working with our team to produce a vision tracking system that will accurately detect the aluminum tape beneath the tower's goal. I have completed some research on my own, but I am currently at a standstill. I was wondering if any other teams have been looking into this as well (I'm sure they have), and I would appreciate any assistance that can be offered. Our team programs in Java, and I wasn't looking at any specific cameras currently, but I do know that our team had issues with too much bandwidth being used in previous years, causing tremendous amounts of lag on the receiver's end. Any help would be appreciated, thank you!
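For context on why the lag happens: an uncompressed or lightly compressed camera stream can easily exceed the field's bandwidth cap, and frames queue up. A rough back-of-the-envelope estimate (all the numbers here are illustrative assumptions, not official FRC limits or measured camera values):

```java
// Rough estimate of MJPEG camera stream bandwidth.
// All numbers are illustrative assumptions, not FRC rules.
public class BandwidthEstimate {
    public static double megabitsPerSecond(int width, int height, double fps,
                                           double bitsPerPixelCompressed) {
        // MJPEG sends each frame as an independent JPEG; a typical JPEG
        // averages on the order of 1-2 bits per pixel at moderate quality.
        double bitsPerFrame = (double) width * height * bitsPerPixelCompressed;
        return bitsPerFrame * fps / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 640x480 at 30 fps, assuming ~1.5 bits/pixel after compression
        System.out.printf("~%.1f Mbps%n",
                megabitsPerSecond(640, 480, 30, 1.5));  // ~13.8 Mbps
        // 320x240 at 15 fps cuts the same stream by a factor of 8
        System.out.printf("~%.1f Mbps%n",
                megabitsPerSecond(320, 240, 15, 1.5));  // ~1.7 Mbps
    }
}
```

Dropping resolution and frame rate (or the JPEG quality setting, if the camera exposes one) is usually the quickest fix for a saturated link.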

The aluminum tape? I thought it was reflective tape…

I don’t have any specific camera suggestions for you, but I would advise that you do vision processing either on the roboRIO or on a dedicated device that is also connected to the robot router (e.g. NVIDIA Jetson TK1, Raspberry Pi, etc.). This will reduce lag because you do not need to wait for the images to be sent all the way to the drive team laptop before they get processed.
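The payoff of processing on the robot is that you only send a few numbers (a target center, say) over the network instead of full frames. In practice teams do this with OpenCV or GRIP on real camera frames, but the core thresholding step looks something like the following library-free sketch; the packed-pixel format and the green threshold values are assumptions for illustration:

```java
// Sketch of on-robot target detection: threshold "target-colored" pixels
// and report only the centroid, not the whole image. A real pipeline would
// run OpenCV on camera frames; here a plain int[][] of packed 0xRRGGBB
// pixels keeps the example self-contained.
public class TargetCentroid {
    // Returns {x, y} centroid of target-colored pixels, or null if none found.
    public static double[] findCentroid(int[][] pixels) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < pixels.length; y++) {
            for (int x = 0; x < pixels[y].length; x++) {
                int rgb = pixels[y][x];
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                // Assumed threshold: retroreflective tape lit by a green
                // LED ring shows up as bright, saturated green.
                if (g > 200 && r < 100 && b < 100) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null;
        return new double[] { (double) sumX / count, (double) sumY / count };
    }

    public static void main(String[] args) {
        int[][] frame = new int[4][4];           // all black
        frame[1][2] = 0x00FF00;                  // two bright green pixels
        frame[3][2] = 0x00FF00;
        double[] c = findCentroid(frame);
        System.out.println(c[0] + ", " + c[1]);  // prints "2.0, 2.0"
    }
}
```

Only those two doubles (plus maybe a timestamp) then need to cross the network, e.g. via NetworkTables, which is a negligible amount of bandwidth compared to streaming video.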

Are there any helpful tutorials or code to work off of that would be a helpful start?

I believe these are the tutorials they had for last year to implement tracking:

The links to this year’s resources are the following:


I am mentoring a rookie team this year, and I am also not sure about the type of camera to use on our robot (webcam or IP camera).

I found this in the FIRST Inspires resources, which mentions different possibilities, but I don’t know the pros and cons of each type of camera:

I’d recommend you read through one of the vision white papers published for FIRST.

Both IP and USB cameras will work. Here are the high level trade-offs.

USB web cameras are less expensive and powered over USB, but they are often harder to mount mechanically and offer somewhat less configurable acquisition.

IP cameras have built-in, configurable compression. This is an aid if you primarily want the images sent to the dashboard, but it introduces artifacts if you primarily want to process the images.

USB cameras can require additional roboRIO CPU resources if you want to send the video stream to the dashboard. If you use the “USB Camera SW” option, the images are converted to JPEG on the roboRIO. If you use the “USB Camera HW” option, the camera produces the compressed version, though its compression settings are not adjustable. This is, at least, how those settings affect the LabVIEW robot framework.

Greg McKaskle

Awesome. Thanks Greg :slight_smile: