We have gotten PhotonVision to work with the new AprilTags and want to increase the frame rate from 16 FPS. Any tips on tuning it to achieve better performance? Would it be worthwhile to buy an external TPU?
Try using a lower resolution or lowering the exposure. Also, are you using a Pi Camera or a webcam?
Have you tried overclocking the Pi at all? I've been using an Argon One case to get the CPU cool enough to push it to 2 GHz.
That got me to 20-30 FPS at 640x480, and IIRC 800x600 wasn't too shabby. Recorded my results here.
Definitely get the exposure as low as possible so that it's processing as many black pixels as possible. There's some question of whether using a higher resolution with a decimate of 2 might be a workable solution for the detection distance you want to achieve.
Edit - the Amazon link that I've posted at PhotonVision now takes you to a different camera (3.6mm). There's a 100deg option that's cheaper ($28 vs $49) and uses a different chip (OV9712 vs AR0144), so it's untested AFAIK, but a similar camera can be found here.
A LifeCam right now. I was trying to avoid lowering the resolution because that would translate into lost distance when detecting the AprilTags.
We're not overclocking it at all right now.
If you think that the Pi 4 is what you're going to use, it might be worth looking into. Google around a bit to see how to do it properly, taking it up a bit at a time. Use "vcgencmd measure_temp" and "vcgencmd measure_clock arm" to check your results; the PhotonVision Settings tab will usually display that info as well. I'm using over_voltage=6, arm_freq=2000, gpu_freq=750, but YMMV. Watch for CPU throttling in PV. I had to try two different Pis to get that to work. I'm also using a separate battery to power the Pi to avoid power-off issues. https://a.co/5XSYJBV
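For reference, the relevant lines in /boot/config.txt look roughly like this (these are just the values from my setup; every board is different, so step the clock up gradually and watch the temperature):

```
# /boot/config.txt overclock sketch for a Raspberry Pi 4
# Not guaranteed stable on every board - raise arm_freq in small steps.
over_voltage=6     # bump the core voltage to support the higher clock
arm_freq=2000      # CPU clock in MHz
gpu_freq=750       # GPU/core clock in MHz
```

Reboot after editing, then confirm with "vcgencmd measure_temp" and "vcgencmd measure_clock arm", and keep an eye on the throttling indicator in PhotonVision.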
Honestly, with timestamped results (somewhat; this will get better with NT4), 16 Hz is fine for use within a Kalman filter like WPILib's SwerveDrivePoseEstimator and DifferentialDrivePoseEstimator classes. Odometry will fill in well for the time spans between readings.
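For anyone who hasn't wired this up yet, the PhotonLib/WPILib side looks roughly like this. This is only a sketch, not official example code; exact class names and signatures can vary by version, and visionPoseFromTag() is a made-up placeholder for however you turn the tag detection into a field-relative pose:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.Timer;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

public class VisionFusion {
    // Back-date each vision measurement by the reported pipeline latency so the
    // estimator can line it up with its odometry history; odometry covers the
    // gaps between the ~16 Hz vision updates.
    public static void addVisionToEstimator(PhotonCamera camera, SwerveDrivePoseEstimator estimator) {
        PhotonPipelineResult result = camera.getLatestResult();
        if (!result.hasTargets()) {
            return;
        }
        double captureTimestamp = Timer.getFPGATimestamp() - result.getLatencyMillis() / 1000.0;
        Pose2d visionPose = visionPoseFromTag(result); // hypothetical helper, see below
        estimator.addVisionMeasurement(visionPose, captureTimestamp);
    }

    private static Pose2d visionPoseFromTag(PhotonPipelineResult result) {
        // Placeholder: in real code you'd combine the camera-to-tag transform with
        // the tag's known field location and your robot-to-camera transform.
        return new Pose2d();
    }
}
```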
While providing a stable tool for teams to use in 2023 is our top priority, we are also looking into ways to increase performance for teams using PhotonVision. As others have said above, the current performance is definitely usable given that all of the data is timestamped and can be accounted for when using a pose estimator (this will be explicitly shown in our examples when they come out before kickoff).
Possible performance improvements include GPU acceleration for Raspberry Pis, a new AprilTag implementation, OpenCV optimizations, and general performance improvements. Given the complex nature of these changes (and our desire for stability), there isn't a confirmed release date, or any confirmation that these will make it into a release at all, as we don't want to rush in a half-baked solution that would ultimately harm the user experience. The best way to ensure that these features get added is to contribute, test, and/or file issues on our GitHub.
Some things you can do right now:
- Stream to your dashboard at as low a resolution as possible
- Use a Pi Camera instead of a LifeCam. The LifeCam just isn't a great choice for performance.
- Set the exposure as low as it can possibly go while still detecting targets
- Use as many threads as your platform supports
- Overclock your Pi (this is not an official recommendation, as it can mess things up; just something that people do)
- Use a Mini PC instead of a Raspberry Pi (outlined in our docs)
One of my theories (that I've yet to test) is placing a flashlight next to the camera, aimed at the target. The increased contrast will allow a lower exposure and improve performance. How much? Who knows.
OP, you actually haven't said what settings you are using. As @mdurrani834 points out, there are a few tuning parameters, and they make a big difference. Make sure you have the latest version and try:
resolution = 640x480 (presumably that is what you are using)
decimate = 2 ("1" should be the lowest value on the slider and means no decimation)
threads = 2
Decimate = 2 reduces the processing load by roughly 4x, and threads = 2 gives roughly another 2x.
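To put numbers on the decimate figure: 640x480 is 307,200 pixels, and decimate = 2 drops the quad search to 320x240, or 76,800 pixels, hence the ~4x. As I understand the AprilTag detector, the tag payload is still decoded against the full-resolution image, so the range penalty is smaller than simply running the camera at 320x240.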
From some roughly analogous at-home experiments with exposure, the primary gain there is for cameras that don't have very sensitive sensors, and for which "normal-looking" images require exposure times that artificially limit the frame rate. In some cheaper cameras, bright illumination helps greatly in keeping the frame rate up.
However, the relative performance improvement from reducing background noise seems pretty small.
This is what I've done, since my basement is relatively dim. I'm looking forward to data from real gyms, which I think are all brighter than my room. I did have a little problem with sunlight falling directly on the target; you'd think that would be better, but sometimes it's not. And do we really want all those bright white flashlights replacing the green ones?
Sunlight can be so bright that it washes out the black, especially on papers and inks that are more glossy than matte. They talked about the targets being printed on matte vinyl, so hopefully the glare from the gym lights doesn't wash them out too much.