Optimizing PhotonVision on Raspberry Pi 4

We have gotten PhotonVision working with the new AprilTags and want to increase the frame rate from 16 FPS. Any tips on tuning it for better performance? Would it be worthwhile to buy an external TPU?

1 Like

Try using a lower resolution or lowering the exposure. Also, are you using a Pi camera or a webcam?

3 Likes

Have you tried overclocking the Pi at all? I’ve been using an Argon ONE case to keep the CPU cool enough to push it to 2 GHz.

That got me to 20-30 FPS at 640x480, and IIRC 800x600 wasn’t too shabby either. I recorded my results here.

Definitely get the exposure as low as possible so the pipeline is processing as many black pixels as possible. There’s also some question of whether a higher resolution with a decimate of 2 might be a workable way to get the detection distance you want.

Edit - the Amazon link I posted at PhotonVision now takes you to a different camera (3.6 mm). There’s a 100-degree option that’s cheaper ($28 vs $49) but uses a different sensor (OV9712 vs AR0144), so it’s untested AFAIK; a similar camera can be found here.

4 Likes

A LifeCam right now. I was trying to avoid lowering the resolution because that would translate to lost detection distance on the AprilTags.

1 Like

We’re not overclocking it at all right now.

If you think the Pi 4 is what you’re going to use, it might be worth looking into. Google around a bit to see how to do it properly, taking it up a bit at a time. Use ‘vcgencmd measure_temp’ and ‘vcgencmd measure_clock arm’ to check your results; the PhotonVision Settings tab will usually display that info as well. I’m using over_voltage=6, arm_freq=2000, gpu_freq=750, but YMMV. Watch for CPU throttling in PV. I had to try two different Pis to get that to work. I’m also using a separate battery to power the Pi to avoid power-off issues. https://a.co/5XSYJBV
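
For reference, here’s a sketch of what those settings look like in /boot/config.txt, plus the commands to check the result (these are just the values quoted above; treat them as a starting point, since every Pi, cooler, and power supply behaves differently):

    # /boot/config.txt -- example Pi 4 overclock using the values above (YMMV)
    over_voltage=6
    arm_freq=2000
    gpu_freq=750

    # after a reboot, verify temperature, clock, and throttle state
    vcgencmd measure_temp        # CPU temperature
    vcgencmd measure_clock arm   # actual ARM clock
    vcgencmd get_throttled       # 0x0 means no throttling or under-voltage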

1 Like

Honestly, with timestamped results (somewhat; this will get better with NT4), 16 Hz is fine for use within a Kalman filter like WPILib’s SwerveDrivePoseEstimator and DifferentialDrivePoseEstimator classes. Odometry will fill in well for the timespan between readings.
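
As a rough illustration (a minimal sketch, not an official PhotonVision example; the estimator construction and the tag-to-field-pose math are assumed to live elsewhere in your robot code), the fusion boils down to passing the vision pose along with its capture timestamp:

    // Minimal sketch: fuse a timestamped PhotonVision pose into WPILib's pose estimator.
    // estimateFieldPose() is a hypothetical placeholder for your tag-to-field-pose math.
    import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
    import edu.wpi.first.math.geometry.Pose2d;
    import org.photonvision.PhotonCamera;
    import org.photonvision.targeting.PhotonPipelineResult;

    public class VisionFusion {
        private final PhotonCamera m_camera = new PhotonCamera("photonvision"); // name from the PV UI
        private final SwerveDrivePoseEstimator m_poseEstimator; // constructed in your drivetrain code

        public VisionFusion(SwerveDrivePoseEstimator poseEstimator) {
            m_poseEstimator = poseEstimator;
        }

        /** Call every loop; odometry updates run elsewhere at the full robot loop rate. */
        public void addVisionIfAvailable() {
            PhotonPipelineResult result = m_camera.getLatestResult();
            if (result.hasTargets()) {
                Pose2d visionPose = estimateFieldPose(result);
                // The capture timestamp lets a ~16 Hz measurement be applied at the right
                // point in the estimator's history; odometry fills in between readings.
                m_poseEstimator.addVisionMeasurement(visionPose, result.getTimestampSeconds());
            }
        }

        private Pose2d estimateFieldPose(PhotonPipelineResult result) {
            // Placeholder: solve tag pose -> camera pose -> robot pose here.
            return new Pose2d();
        }
    }

The key point is the second argument to addVisionMeasurement: it tells the estimator when the frame was captured, not when the result arrived.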

4 Likes

While providing a stable tool for teams to use in 2023 is our top priority, we are also looking into ways to increase performance for teams using PhotonVision. As others said above, the current performance is definitely usable given that all of the data is timestamped and can be accounted for when using a pose estimator (this will be shown explicitly in our examples when they come out before kickoff).

Possible performance improvements include GPU acceleration for Raspberry Pis, a new AprilTag implementation, OpenCV optimizations, and general performance improvements. Given the complex nature of these changes (and our desire for stability), there isn’t any confirmed release date, or any confirmation that these will make it to a release at all, as we don’t want to rush in a half-baked solution that would ultimately harm the user experience. The best way to ensure these features get added is to contribute, test, and/or file issues on our GitHub.

Some things you can do right now:

  • Stream to your dashboard in as low of a resolution as possible
  • Use a Pi Camera instead of a LifeCam; the LifeCam just isn’t a great choice for performance.
  • Set the exposure as low as it can possibly go while still detecting targets.
  • Use as many threads as your platform supports.
  • Overclock your Pi (this is not an official recommendation since it can mess things up, just something that people do).
  • Use a mini PC instead of a Raspberry Pi (outlined in our docs).
5 Likes

One of my theories (I’ve yet to test it) is placing a flashlight next to the camera, aimed at the target. The increased contrast should allow a lower exposure and improve performance. How much? Who knows.

2 Likes

OP, you haven’t actually said what settings you are using. As @mdurrani834 points out, there are a few tuning parameters, and they make a big difference. Make sure you have the latest version and try:
resolution = 640x480 (presumably that is what you are using)
decimate = 2 (“1” should be the lowest value on the slider, and means no decimation)
threads = 2

Decimate = 2 reduces the processing by roughly 4x (the detector works on a half-resolution image, so about a quarter of the pixels), and threads = 2 gives roughly another 2x.

1 Like

From some roughly analogous at-home experiments with exposure, the primary gain there is for cameras that don’t have very sensitive sensors and for which “normal-looking” images require exposure times that artificially limit the frame rate. In some cheaper cameras, bright illumination helps greatly in keeping the frame rate up.

However, the relative performance improvement from reducing background noise seems pretty small.

This is what I’ve done, since my basement is relatively dim. I’m looking forward to data from a real gym, which I expect will be brighter than my room. I did have a little problem with sunlight shining directly on the target; you’d think that would be better, but sometimes it’s not. And do we really want all those bright white flashlights replacing the green ones?

Sunlight can be so bright that it washes out the black, especially on papers and inks that are more glossy than matte. They talked about the targets being printed on matte vinyl, so hopefully the glare from the gym lights doesn’t wash them out too much.
