We are still tuning our cameras to find the ideal settings, and are curious where on the field fellow PhotonVision users are comfortable trusting their readings.
We've noticed that to read targets around midfield we had to go to a fairly high resolution, which unfortunately drops our FPS drastically and makes the vision system unreliable.
I'd be curious what resolutions and other settings teams are running, and roughly what FPS they're getting. At this point I'm trying to avoid reinventing the wheel!
A low framerate should not make your system unreliable. You should be storing your poses and timestamps so that when you get the PhotonVision pose, you can go back, apply the vision pose at time -t, then add your motion since then to determine your current pose.
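To be concrete, WPILib's pose estimators already do that buffering and replay for you. A minimal sketch in Java with PhotonLib and a SwerveDrivePoseEstimator, assuming the estimator and camera are constructed elsewhere; `getGyroAngle`, `getModulePositions`, and `solveRobotPose` are hypothetical helpers:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

// Called every robot loop. The estimator buffers past odometry poses, so a
// late vision measurement is applied at its capture time and the motion
// since then is replayed automatically.
void updatePose(SwerveDrivePoseEstimator estimator, PhotonCamera camera) {
    estimator.update(getGyroAngle(), getModulePositions()); // odometry update

    PhotonPipelineResult result = camera.getLatestResult();
    if (result.hasTargets()) {
        Pose2d visionPose = solveRobotPose(result); // hypothetical helper
        // Pass the capture timestamp, not "now"; the estimator rewinds to it.
        estimator.addVisionMeasurement(visionPose, result.getTimestampSeconds());
    }
}
```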
Well, yeah… I think we're seeing a good bit of noise in our readings at distance, which is probably the larger issue. The WPILib pose estimator will already handle the timestamp latency part.
Have you played with the ambiguity to try to remove false positives?
Nah, we should. Are you saying from PhotonVision or from the pose estimator?
I believe you can adjust it in PhotonVision itself. We use LabVIEW, but the library allowed us to adjust it in our robot code. Since it's a WPILib port, I would assume you have the same ability.
There is no way to “play with the ambiguity”; you're probably thinking of setting an ambiguity threshold for when tags should be considered or not. Higher resolution and lower decimation will give better results at long distances. This also requires a well-lit area and flat tags.
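If you do that thresholding in robot code, a minimal PhotonLib sketch looks like this (the 0.2 cutoff and the camera name are just illustrative values):

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

PhotonCamera camera = new PhotonCamera("percy"); // example camera name

PhotonPipelineResult result = camera.getLatestResult();
for (PhotonTrackedTarget target : result.getTargets()) {
    // getPoseAmbiguity() is ~0 for a clean solve, near 1 when the two
    // possible tag poses are equally likely, and -1 if it wasn't computed.
    double ambiguity = target.getPoseAmbiguity();
    if (ambiguity < 0 || ambiguity > 0.2) {
        continue; // reject likely false positives
    }
    // ... use this target for pose estimation
}
```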
We probably should drop our decimation.
Do you have any recommendations on what resolutions to run? We were running a very high res and still seeing some noise. Think it was one of the largest resolutions, 1024 by …; we use a Pi 4 and a pretty performant USB camera, at 5 FPS or so.
Sounds like more time should be spent tuning on our end!
Hard to make general recommendations; I would just play it by ear. Really, you shouldn't even be trusting the vision measurements as much as you get farther from the tags.
Yeah, I was just looking at the standard deviations I feed into the pose estimator and realized we were pulling them down from the WPILib default of 0.9 to 0.3, so that's part of the issue. At that distance I'm guessing odometry should be running the show.
In an ideal world, your standard deviations scale based on tag count and distance from the tag. X should be trusted more than Y (at least this year).
Yeah, I don’t love how I’d determine a tag’s distance. Looks like I would have to take my previous robot pose and measure how far my new estimated pose from the PhotonPoseEstimator is from it, and scale it that way. Feels like a fast track to compounding a bad error, but maybe not.
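One alternative worth noting: PhotonLib reports a camera-to-target transform for each target, so you can get the distance straight from the tag solve without touching your previous pose. A minimal sketch of distance-scaled standard deviations, assuming `target`, `estimator`, `visionPose`, and `timestampSeconds` exist elsewhere; the base values and quadratic scaling are made-up starting points, not tuned numbers:

```java
import edu.wpi.first.math.VecBuilder;

// Distance to the tag straight from the target solve, independent of odometry.
double distMeters = target.getBestCameraToTarget().getTranslation().getNorm();

// Grow the std devs with distance so far-away tags are trusted less.
double xyStdDev = 0.9 * (1.0 + (distMeters * distMeters) / 4.0);
double thetaStdDev = 2.0 * xyStdDev;

// The three-argument overload sets per-measurement trust.
estimator.addVisionMeasurement(
    visionPose,
    timestampSeconds,
    VecBuilder.fill(xyStdDev, xyStdDev, thetaStdDev));
```

You could also fold tag count into the same scaling, e.g. shrinking the std devs when multiple tags are in view.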
Here’s a datapoint based on some very early trials, using PhotonVision on an N5105 and two OV9281 USB cameras at 1280x800. While we can detect tags up to around where the game pieces are, we don’t trust the pose past the community. We’re getting around 15-20 FPS at 50-80 ms latency.
When I say we don’t trust the data past the community, it’s not that it’s noisy; rather, a decent amount of the time the returned pose is off by around 6-8 in, with very little noise.
Yeah, I think I might ignore a lot beyond the community. If I filter by target area, that’ll help with that (sketch below).
Which is fine; odometry should keep me tuned up when we are on the dark side of the moon in auto (the ~10 feet between the community and the midfield line).
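Something like this minimal PhotonLib sketch, where the 0.1% area cutoff is a made-up starting point:

```java
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

PhotonPipelineResult result = camera.getLatestResult();
for (PhotonTrackedTarget target : result.getTargets()) {
    // getArea() is the target's size as a percentage of the image (0-100).
    // Small targets are far away, so skip them rather than feed the estimator.
    if (target.getArea() < 0.1) {
        continue; // too far to trust; let odometry run the show
    }
    // ... pass this target's measurement into the estimator
}
```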
I can get the latest one in a few days; here is the backup from Saturday of the photon_config folder. The cameras are percy and gordon.
Are you sure the camera focus hasn’t drifted from where it was when you calibrated? The calibration images themselves look a little fuzzy, and if the cameras get out of the original focus you can see a consistent “offset” from the actual distances.
Do you have any reliable way of doing this? I’ve struggled with the distance.
I can take the distance my pose estimator previously thought I was at and measure it against what the new estimate thinks?