Behind the Bumpers - 604 Quixilver - Particle Filtering Demo

https://youtu.be/NOO_pLSVpb4 FRC Team 604 Quixilver really lives up to their name with the quick cycle and climb times that earned them a division win at championships. Listen in as they describe their particle filtering feature, unique color sensing strategy, and climber iterations.

12 Likes

Where is this particle filtering used? It was implied that it was being used for shooting while moving, but I don’t see how it helps all that much because, even in the short demo given, there was still quite a bit of inaccuracy. They also said it was running on the driver station, and the latency that might introduce could diminish some of the benefits it has. I might be misinterpreting what I was seeing, but I’m not really sure what the use case is here.

Either way, it’s a really cool concept and I wonder how teams might start using simulations like these in future years!

I agree, it did seem to drift quite rapidly. I believe they said they use it for automating their aiming so that they can aim even if they are facing away from the hub. I don’t remember them mentioning shooting while moving.

1 Like

Hm, I’ve seen teams do a similar thing without this complex code. Seems cool, but you can also just turn back to approximately face the hub and then use your Limelight from there. I recall 3-4 teams at the Idaho regional using this strategy.

I think there is probably something deeper going on with this.

That would make sense. Probably a proof of concept if they want to do something more complex next year, maybe if there is a pick-and-place game.

1 Like

The particle filter is used for auto-aiming and shoot-on-the-move; all targeting-related control loops are closed using the latency-compensated estimated position. The robot’s position from odometry alone drifts over time, so we use the particle filter to fuse the bearing and elevation to the goal reported by the Limelight with our odometry, improving the estimated position of our robot on the field whenever we see the hub.

In the FUN interview, the particle filter showed a lot of drift because we drove the simulated robot back and forth before and during the interview to demonstrate that odometry alone is not sufficient to estimate the position of the robot and that vision is needed. Here is an example of how much error we get in a real match: FRC604 Localization Overlay | 2022cc qm57 - YouTube.

Since auto-aiming and shoot-on-the-move require the robot to be facing the hub anyway, some drift on the particle filter doesn’t matter to us when we are not scoring. However, it’s still useful to have a rough estimate of where we are on the field so the shooter can start ramping to the correct speed even before the goal comes into view.
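To make the structure concrete, here is a rough predict/weight/resample sketch of the kind of filter I’m describing. This is not our actual implementation: the particle count, noise constants, field coordinates, and the bearing/elevation measurement model are simplified placeholders for illustration.

```java
import java.util.Random;

/**
 * Minimal particle-filter localization sketch (illustrative only, not our real code).
 * State per particle: (x, y, heading) on the field in meters/radians.
 * Prediction applies the odometry delta since the last update; the correction
 * weights particles by the bearing and elevation to the hub reported by the camera.
 */
public class ParticleFilterSketch {
  static final int N = 500;                         // particle count (assumed)
  static final double GOAL_X = 8.23, GOAL_Y = 4.11; // hub center, meters (2022 field, approx.)
  static final double GOAL_HEIGHT = 2.64;           // vision target height, meters (approx.)
  static final double CAMERA_HEIGHT = 0.9;          // camera lens height, meters (assumed)

  double[] x = new double[N], y = new double[N], theta = new double[N];
  double[] weight = new double[N];
  Random rng = new Random();

  /** Motion update: apply the robot-relative odometry delta plus noise to every particle. */
  void predict(double dxRobot, double dyRobot, double dTheta) {
    for (int i = 0; i < N; i++) {
      double c = Math.cos(theta[i]), s = Math.sin(theta[i]);
      x[i] += c * dxRobot - s * dyRobot + rng.nextGaussian() * 0.01;
      y[i] += s * dxRobot + c * dyRobot + rng.nextGaussian() * 0.01;
      theta[i] += dTheta + rng.nextGaussian() * 0.005;
    }
  }

  /** Measurement update: weight particles by how well they explain the observed
      bearing (robot-relative, radians) and elevation (radians) to the hub. */
  void correct(double measuredBearing, double measuredElevation) {
    double sigmaBearing = Math.toRadians(1.0), sigmaElev = Math.toRadians(1.0); // assumed noise
    for (int i = 0; i < N; i++) {
      double dx = GOAL_X - x[i], dy = GOAL_Y - y[i];
      double expectedBearing = normalize(Math.atan2(dy, dx) - theta[i]);
      double expectedElev = Math.atan2(GOAL_HEIGHT - CAMERA_HEIGHT, Math.hypot(dx, dy));
      double eb = normalize(measuredBearing - expectedBearing);
      double ee = measuredElevation - expectedElev;
      weight[i] = Math.exp(-0.5 * (eb * eb / (sigmaBearing * sigmaBearing)
                                 + ee * ee / (sigmaElev * sigmaElev)));
    }
    resample();
  }

  /** Low-variance resampling: draw N new particles in proportion to their weights. */
  void resample() {
    double total = 0;
    for (double w : weight) total += w;
    double[] nx = new double[N], ny = new double[N], nt = new double[N];
    double step = total / N, r = rng.nextDouble() * step, c = weight[0];
    int i = 0;
    for (int m = 0; m < N; m++) {
      double u = r + m * step;
      while (u > c) c += weight[++i];
      nx[m] = x[i]; ny[m] = y[i]; nt[m] = theta[i];
    }
    x = nx; y = ny; theta = nt;
  }

  static double normalize(double a) {
    return Math.atan2(Math.sin(a), Math.cos(a)); // wrap angle to [-pi, pi]
  }
}
```

In the real thing the particles would also be initialized around the known starting pose, and the estimate reported to the robot is a weighted average (or the best particle) rather than the raw particle cloud.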

They also said it was running on the driver station, and the latency that might introduce could diminish some of the benefits it has.

Since the particle filter is run on the driver station laptop, we account for network latency by overlaying the real-time odometry from the RIO on top of the possibly outdated particle filter position to get the best real-time estimate of our location.
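In rough code (using WPILib geometry types), that overlay is just a pose composition; the names here are illustrative, not our actual classes:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;

/**
 * Sketch of latency compensation: compose the (possibly stale) particle-filter pose
 * with the odometry motion measured since the timestamp that pose corresponds to.
 */
public class LatencyCompensationSketch {
  /**
   * @param filterPose       field pose estimated by the particle filter at time t
   * @param odometryAtFilter odometry pose recorded on the RIO at that same time t
   * @param odometryNow      current odometry pose on the RIO
   * @return best real-time field pose estimate
   */
  public static Pose2d latestPose(Pose2d filterPose, Pose2d odometryAtFilter, Pose2d odometryNow) {
    // Motion the robot has made (in the odometry frame) since the filter's estimate.
    Transform2d motionSinceFilter = odometryNow.minus(odometryAtFilter);
    // Replay that motion on top of the filter's field-relative pose.
    return filterPose.plus(motionSinceFilter);
  }
}
```

In practice this means keeping a short time-indexed buffer of odometry poses on the RIO so the odometry pose at the filter’s timestamp can be looked up.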

11 Likes

That’s an interesting approach. Why did you choose to do it that way rather than doing the vision pose estimation on the robot and using the pose estimator class to merge it into the Kalman filter?

1 Like

Having to manually turn to the goal is both slower and requires more effort by the driver. Additionally, closing the targeting loop using the latency-compensated field-relative position results in tighter control loops compared to only using the Limelight output. It also enables the robot to understand how it is both translating and rotating relative to the goal, which is important because shoot-on-the-move requires the robot to not just know if it is pointed at the goal, but also how it is moving with respect to it.
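As a rough illustration of what the field-relative estimate buys you, here is a small sketch of the goal-relative quantities a shoot-on-the-move solution needs; the goal coordinates are approximate and the actual shot-compensation math isn’t shown:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

/** Sketch of extracting goal-relative range, bearing, and velocity from a
    field-relative pose estimate and field-relative chassis speeds. */
public class GoalRelativeMotionSketch {
  static final Translation2d GOAL = new Translation2d(8.23, 4.11); // hub center (approx., meters)

  public static void example(Pose2d robotPose, double vxField, double vyField) {
    Translation2d toGoal = GOAL.minus(robotPose.getTranslation());
    double range = toGoal.getNorm();                                      // distance to the hub
    Rotation2d fieldBearing = toGoal.getAngle();                          // field direction to the hub
    Rotation2d robotBearing = fieldBearing.minus(robotPose.getRotation()); // aiming setpoint

    // Decompose the field velocity into components toward the goal and tangential to it;
    // these are what a shoot-on-the-move solution uses to lead the shot.
    double ux = toGoal.getX() / range, uy = toGoal.getY() / range;
    double radialVel = vxField * ux + vyField * uy;      // closing speed toward the hub
    double tangentialVel = -vxField * uy + vyField * ux; // sideways speed across the hub

    System.out.printf("range=%.2f m, bearing=%.1f deg, radial=%.2f, tangential=%.2f%n",
        range, robotBearing.getDegrees(), radialVel, tangentialVel);
  }
}
```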

3 Likes

In order to use the WPILib pose estimator class, you need a global pose measurement and the corresponding covariance in pose-space (x, y, theta). This means taking the Limelight bearing/elevation output and doing some trigonometry that relies on the gyro (roughly sketched after the list below). There are two problems with this:

  1. The true uncertainty of the global pose measurement can’t be captured by a Gaussian distribution in pose-space; the true measurement uncertainty (approximated by a Gaussian) is in pixel-space, and the nonlinear transform from pixel-space to pose-space makes it both non-Gaussian and dependent on distance from the goal. For example, the farther you are from the goal, the larger the global pose measurement uncertainty should be, since large changes in distance to the target only result in small changes in the height of the target in the image.

  2. Using the gyro as part of the trigonometry makes the estimated global pose highly sensitive to gyro drift, although it doesn’t really matter this year since the goal is rotationally symmetric.
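Here is a rough sketch of that trigonometry, i.e. what you would feed a pose-space estimator; the camera geometry constants are assumptions and this is illustrative rather than our actual code:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

/** Sketch: turn a Limelight bearing/elevation reading into a single global pose measurement. */
public class CameraToPoseSketch {
  static final Translation2d GOAL = new Translation2d(8.23, 4.11); // hub center (approx., m)
  static final double TARGET_HEIGHT = 2.64;  // vision tape height (approx., m)
  static final double CAMERA_HEIGHT = 0.9;   // assumed camera lens height (m)
  static final double CAMERA_PITCH = Math.toRadians(30.0); // assumed camera mount angle

  /**
   * @param txDeg Limelight horizontal offset to the target, degrees
   * @param tyDeg Limelight vertical offset to the target, degrees
   * @param gyroHeading field-relative robot heading -- this is the gyro dependence in (2)
   */
  public static Pose2d globalPoseMeasurement(double txDeg, double tyDeg, Rotation2d gyroHeading) {
    // Distance from the elevation angle; the sensitivity of distance to ty grows rapidly
    // as the angle flattens, which is why pose-space uncertainty grows with range (problem (1)).
    double distance = (TARGET_HEIGHT - CAMERA_HEIGHT)
        / Math.tan(CAMERA_PITCH + Math.toRadians(tyDeg));

    // Field-relative direction from the robot to the goal, built from the gyro and tx
    // (sign convention assumed: positive tx means the target is to the right).
    Rotation2d directionToGoal = gyroHeading.minus(Rotation2d.fromDegrees(txDeg));

    // Walk backwards from the goal to get the implied robot position.
    Translation2d robotToGoal = new Translation2d(distance, directionToGoal);
    Translation2d robotPosition = GOAL.minus(robotToGoal);
    return new Pose2d(robotPosition, gyroHeading);
  }
}
```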

Using a particle filter solves (1) by letting us directly incorporate measurements in a mathematically correct way with uncertainty defined in pixel-space. With only one target we are still susceptible to (2), but in a year with multiple targets, you can use this method to cancel out gyro drift as well. We thought about tracking individual retroreflective tape sections on the target this year, but quickly decided it wasn’t worth our time.
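To show what “uncertainty defined in pixel-space” means, here is a sketch of a per-particle weight under a simple pinhole camera model; the focal length and noise are assumptions, not values we actually use:

```java
/** Sketch of a pixel-space measurement weight for one particle. */
public class PixelSpaceWeightSketch {
  static final double FOCAL_LENGTH_PX = 600.0; // assumed focal length in pixels
  static final double SIGMA_PX = 5.0;          // assumed pixel measurement noise

  /**
   * @param expectedBearing   bearing to the goal implied by this particle, relative to the
   *                          camera boresight (radians)
   * @param expectedElevation elevation to the goal implied by this particle, relative to the
   *                          camera boresight (radians)
   * @param measuredPxX       measured target center x in the image (pixels from center)
   * @param measuredPxY       measured target center y in the image (pixels from center)
   */
  static double weight(double expectedBearing, double expectedElevation,
                       double measuredPxX, double measuredPxY) {
    // Project this particle's expected view of the goal into pixel coordinates.
    double expectedPxX = FOCAL_LENGTH_PX * Math.tan(expectedBearing);
    double expectedPxY = FOCAL_LENGTH_PX * Math.tan(expectedElevation);

    // Gaussian likelihood in pixel-space: the same pixel error implies very different
    // pose errors near vs. far from the goal, and that is handled automatically here.
    double ex = measuredPxX - expectedPxX, ey = measuredPxY - expectedPxY;
    return Math.exp(-0.5 * (ex * ex + ey * ey) / (SIGMA_PX * SIGMA_PX));
  }
}
```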

The effects of the problems above are quite small and don’t matter much for this year’s game (a lot of teams used the WPILib pose estimator, or even simpler methods, with great success). However, this software was developed to solve the general problem of field-relative localization for FRC and to work with any game, including those with multiple targets. There are other benefits (and downsides) to particle filters that I won’t dive too deeply into here.

P.S. Also I personally think it’s easier for high school students to understand and implement a particle filter from scratch vs. understanding how an Unscented Kalman Filter works.

10 Likes

I didn’t mean manually turning to the goal. Even if odometry drifts a lot, its approximate position should still be good enough to get the hub into the Limelight’s FOV. I see your point though; this makes sense.

Hmm, makes sense! Will your team publish its code? I’m really curious to see how the particle filter is run on your driver station and how the robot uses it.

I’ll second this. And by high school students, I also mean me.

Either way, super cool stuff 604.

1 Like

Yes, we publish our code on our GitHub after we get a chance to clean it up.

1 Like

Thank you!