[Poll] Is your team using a vision coprocessor this year?

I’m trying to gauge how many teams are using a vision coprocessor this year, given the new AprilTags and the release of the Limelight 3. Your vote is much appreciated.

  • Yes, we’re using a Limelight
  • Yes, we’re using PhotonVision
  • Yes, we’re using both
  • Yes, but we’re using something else
  • No, but we’re doing vision on the roboRIO
  • No, vision is for posers!



My team is using a Raspberry Pi 4 with a custom Python program I developed for AprilTag detection. I’m curious to see how popular this is.
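For anyone curious what the back half of such a pipeline looks like: once a detector (e.g. the pupil-apriltags library) has returned a tag’s pixel center and corner positions, the useful robot-facing outputs are just a bit of pinhole-camera math. This is an illustrative sketch, not the poster’s actual code — the intrinsics and tag size below are made-up example numbers that would come from camera calibration in practice.

```python
import math

# Illustrative camera intrinsics (these would come from calibration,
# not from this comment).
FX = 600.0          # focal length in pixels, x axis
CX = 320.0          # principal point x (image center of a 640 px frame)
TAG_SIZE_M = 0.152  # 6-inch AprilTag edge length, in meters

def yaw_to_tag(center_x_px: float) -> float:
    """Horizontal angle (radians) from the camera axis to the tag center."""
    return math.atan2(center_x_px - CX, FX)

def distance_to_tag(tag_width_px: float) -> float:
    """Approximate range via similar triangles: width_px / FX = width_m / dist."""
    return TAG_SIZE_M * FX / tag_width_px

# e.g. a tag centered at x = 470 whose edge spans 91.2 px in the image:
yaw = yaw_to_tag(470.0)       # positive -> tag is to the right of center
dist = distance_to_tag(91.2)  # roughly 1 meter away with these numbers
```

A full solver (like the pose estimate PhotonVision or pupil-apriltags can produce from all four corners) gives the complete 3D transform, but yaw and range alone are already enough for simple alignment.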


That sounds really interesting. Is there a benefit to that over using a tried and tested solution like PhotonVision, or is it just for the challenge?

I think there is a benefit in the extra customizability and control over the processing, though PhotonVision looks like a pretty good program. I also jumped to write a program as soon as AprilTags were announced, since I couldn’t be sure other solutions would exist in time, and I knew the benefit of AprilTags would make it worth it. I have programmed things like this in the past and wanted the challenge. Who knows, maybe it will help win my team an award.


Our current plan is to use a Beelink mini PC running PhotonVision to handle two cameras for AprilTag field localization, plus a Limelight 3 with a Coral Edge TPU using the neural pipelines to track and align on game pieces.


That actually sounds really cool. I would also test out PhotonVision if you haven’t already, to make sure your code performs close to the recommended solution. The last thing you want is to be at a disadvantage because of tunnel-visioning on your own code.

Vision tracking for cones and cubes is a great idea. If I had known about this game earlier, I probably would have written a custom program for that too. It should be relatively easy, since cones and cubes are distinct colors.
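The “different colors” observation really is most of the algorithm: convert the frame to HSV and threshold on hue. A minimal per-pixel sketch, assuming OpenCV-style HSV where hue runs 0–179 — the hue ranges and saturation/value floors here are illustrative guesses that would need tuning against real footage and field lighting:

```python
# Illustrative hue windows (OpenCV convention, hue in 0-179).
CONE_HUE = (20, 35)    # yellow-ish
CUBE_HUE = (120, 150)  # purple-ish

def classify(h: int, s: int, v: int) -> str:
    """Label one HSV pixel as 'cone', 'cube', or 'neither'."""
    if s < 80 or v < 60:  # too washed-out or dark to trust the hue
        return "neither"
    if CONE_HUE[0] <= h <= CONE_HUE[1]:
        return "cone"
    if CUBE_HUE[0] <= h <= CUBE_HUE[1]:
        return "cube"
    return "neither"
```

In a real pipeline you’d apply these bounds to the whole frame at once with `cv2.inRange` and then pull contours to get the piece’s position and size, but the per-pixel rule above is the entire idea.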

I have already verified that I get consistently good results from my program; I got it working well back in the offseason. I haven’t tried PhotonVision, but it might be worth it. I get results similar to what others running Raspberry Pis report.

We are doing this, but we have about five people on our software team tackling it.

We are planning on using a custom solution running on an Orange Pi 5 with global-shutter cameras. We’re treating the advent of AprilTags as an opportunity for enhanced student learning, and we also want the ability to customize based on our own choices. So far we are seeing very good results and performance. For those interested, we have a writeup (including demo videos and performance data) in our build thread here.

With AprilTags being so new, I think we’re going to see a lot more variety in vision solutions on the field this year, which is exciting!


Our current plan is three or four coprocessors, depending on how you count.

We’re expecting two Pi 4s running PhotonVision, each connected to an OV9281 camera, one facing X+ and one facing X-.

The third is actually a Jetson Nano, which connects to a PiCam v2 and an Arducam directional LIDAR for game piece recognition and beta testing toward possible real 2D LIDAR in 2024.

The fourth, which I personally wouldn’t really call a coprocessor, is a Teensy 4.0 for driving a bunch of sensors (UART, I2C) and working around the roboRIO’s limit of a single addressable LED strip.

We are really just insane, aren’t we?


This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.