Vision Tracking

Are all teams doing vision tracking with a Limelight camera, or are some teams using Axis cameras too?

I suspect most teams are doing neither. Most are probably using a USB camera attached to the roboRIO, with some using a coprocessor.

We are using 2 JeVois Smart Cameras for vision processing.

We’re using Limelight. It’s great. For this year’s game we tune the camera to see only the reflective tape and place a crosshair where we want the robot to end up. Limelight gives us a horizontal and vertical offset to the crosshair, and we calculate our drive values from those offsets with a P-loop (sketched after the links below).

Here’s our code to calculate our drive values: https://github.com/PearceRobotics/deep-space-2019/blob/master/src/main/java/org/usfirst/frc1745/deepspace2019/subsystems/vision/Limelight.java

Here’s our actual Limelight configuration: https://github.com/PearceRobotics/deep-space-2019/blob/master/limelightPlanoSettings.vpr
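If you just want the shape of it, here’s a minimal sketch of that kind of P-loop, reading Limelight’s `tv`/`tx`/`ty` entries over NetworkTables. The gains, clamp, and arcade-style mixing are illustrative, not tuned values:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightAim {
    // Illustrative proportional gains and limits -- tune for your drivetrain.
    private static final double STEER_KP = 0.03;
    private static final double DRIVE_KP = 0.06;
    private static final double MAX_DRIVE = 0.7;

    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Returns {left, right} percent outputs that drive toward the crosshair. */
    public double[] calcDrive() {
        double tv = limelight.getEntry("tv").getDouble(0.0); // 1.0 when a target is visible
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset to crosshair, degrees
        double ty = limelight.getEntry("ty").getDouble(0.0); // vertical offset to crosshair, degrees

        if (tv < 1.0) {
            return new double[] {0.0, 0.0}; // no target: stop (or start a seek routine)
        }

        double steer = STEER_KP * tx;                      // P-loop on horizontal offset
        double drive = Math.min(DRIVE_KP * ty, MAX_DRIVE); // P-loop on vertical offset, clamped

        return new double[] {drive + steer, drive - steer}; // arcade-style mixing
    }
}
```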

We are using a Microsoft LifeCam with a Raspberry Pi running the FRC image. It’s a lot cheaper than a Limelight, and I would guess almost as easy to use (I haven’t used the Limelight).
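A typical pipeline on that kind of setup grabs frames with CameraServer/cscore, HSV-thresholds the target, and publishes the result back to the robot over NetworkTables. Here’s a minimal sketch using 2019-era WPILib package names; the team number, table name, and threshold bounds are illustrative:

```java
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class PiVision {
    public static void main(String[] args) {
        // Connect to the robot's NetworkTables server (team number is illustrative).
        NetworkTableInstance inst = NetworkTableInstance.getDefault();
        inst.startClientTeam(1234);
        NetworkTableEntry centerX = inst.getTable("vision").getEntry("centerX");

        // Grab frames from the LifeCam.
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(320, 240);
        CvSink sink = CameraServer.getInstance().getVideo();

        Mat frame = new Mat();
        Mat mask = new Mat();
        while (!Thread.interrupted()) {
            if (sink.grabFrame(frame) == 0) {
                continue; // frame grab timed out; try again
            }
            // HSV threshold for a green LED ring on retroreflective tape
            // (bounds are illustrative -- tune for your lighting).
            Imgproc.cvtColor(frame, mask, Imgproc.COLOR_BGR2HSV);
            Core.inRange(mask, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

            // Publish the x center of the largest contour, or -1 if none found.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            double bestArea = 0.0;
            double bestX = -1.0;
            for (MatOfPoint contour : contours) {
                Rect box = Imgproc.boundingRect(contour);
                double area = (double) box.width * box.height;
                if (area > bestArea) {
                    bestArea = area;
                    bestX = box.x + box.width / 2.0;
                }
            }
            centerX.setDouble(bestX);
        }
    }
}
```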

I would recommend it. The vision itself has worked well, but mechanical issues have kept it from being useful on the robot.

Why would you run the image on your Pi? Or, by “image,” do you just mean the code you use for image processing?

We are using a Pixy with I2C to track the reflective tape and a Pixy with an analog/digital X output to track the line. We are able to detect and follow the line very effectively.
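For the line-following half, the Pixy’s analog X output keeps things simple: the pin reads 0–3.3 V proportional to the x position of the largest detected block. A minimal roboRIO-side sketch (the analog channel and gain are illustrative):

```java
import edu.wpi.first.wpilibj.AnalogInput;

public class PixyLineFollower {
    // The Pixy's analog output is 0-3.3 V, proportional to the x position
    // of the largest detected block (here, the line on the floor).
    private final AnalogInput pixyX = new AnalogInput(0); // illustrative channel
    private static final double CENTER_VOLTS = 1.65;      // mid-frame voltage
    private static final double STEER_KP = 0.4;           // illustrative gain

    /** Positive output means steer right, negative means steer left. */
    public double steerCorrection() {
        double error = pixyX.getVoltage() - CENTER_VOLTS;
        return STEER_KP * error;
    }
}
```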

I mean the code.
