# Orbit 1690's PoseTracking and Shooting on the Move software session

Was this the shooting motor’s tangential velocity or something else?

How accurate was this simulation to real life?

How did you translate between the shot velocity and the tangential velocity of the flywheel?

Edit: didn’t refresh the page, sniped on pt 2 by 2 hrs


How did you record match video from the limelight?


As we said in the session, the first time we ran the simulation and input the computed values to shoot with, it worked. We also used it for the whole season, so you can watch a couple of our matches yourself to judge how accurate it is.

We set the exit velocity as the tangential velocity of the wheels times a constant factor to account for slipping. We kept it at a constant value at the start of each competition and decreased it as the competition went on and the balls wore out.
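
For concreteness, a minimal sketch of that conversion (the function name, the example numbers, and the 0.95 default are illustrative, not the team's actual values):

```python
# Exit velocity from flywheel speed: the ball leaves at roughly the
# wheel's tangential velocity, scaled by an empirical slip factor.
import math

def exit_velocity(flywheel_rpm: float, wheel_radius_m: float,
                  slip_factor: float = 0.95) -> float:
    """Ball exit speed in m/s. slip_factor (close to 1) accounts for
    slipping between wheel and ball; it is tuned empirically and
    lowered as the balls wear out."""
    omega = flywheel_rpm * 2.0 * math.pi / 60.0  # rad/s
    tangential = omega * wheel_radius_m          # m/s at the wheel surface
    return tangential * slip_factor

# Example: 4000 RPM on a 2-inch (0.0508 m) radius wheel
v = exit_velocity(4000.0, 0.0508, slip_factor=0.95)
```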

We have the camera stream on our dashboard, and we screen-record the dashboard every single match so we have more info to debug with.


How did you tune/measure that constant?

We check whether the shooter overshoots or undershoots and tune the factor accordingly. The factor was always very close to 1. We also had a variable that our second driver could adjust during the match to fine-tune the shots.
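
A tiny sketch of how such a factor plus a driver-adjustable trim might be applied (the names and structure are hypothetical, not the team's code):

```python
def commanded_velocity(computed_velocity: float,
                       slip_factor: float,
                       driver_trim: float = 0.0) -> float:
    """Shot velocity actually commanded: the simulation's output scaled
    by the empirically tuned factor (always close to 1), plus a small
    trim the second driver can adjust mid-match."""
    return computed_velocity * slip_factor + driver_trim
```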


What time step did you use?

I think 0.01 seconds
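
To illustrate what a 0.01 s time step means in practice, here is a minimal fixed-step (Euler) ball-flight simulation with quadratic air drag. The drag coefficient and example numbers are placeholders, not the team's measured values:

```python
# Fixed-step (Euler) projectile simulation, dt = 0.01 s as mentioned above.
import math

def simulate_shot(v0: float, angle_rad: float, dt: float = 0.01,
                  drag_k: float = 0.02, g: float = 9.81):
    """Integrate ball flight in the 2D shooting plane until it returns
    to launch height. Returns (range_m, flight_time_s)."""
    x, y = 0.0, 0.0
    vx = v0 * math.cos(angle_rad)
    vy = v0 * math.sin(angle_rad)
    t = 0.0
    while True:
        speed = math.hypot(vx, vy)
        # Quadratic drag opposes the velocity vector: a = -k * |v| * v
        ax = -drag_k * speed * vx
        ay = -g - drag_k * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        if y <= 0.0 and t > dt:
            return x, t

range_m, flight_t = simulate_shot(10.0, math.radians(45))
```

A smaller dt gives a more accurate trajectory at the cost of more iterations; at 0.01 s a full shot is only on the order of a hundred steps, cheap enough to re-solve every robot loop.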


Does the simulation work in 2D or 3D (i.e., accounting also for velocity in the “y” direction, rather than only toward and away from the speaker)?


For 2024 season, for 2022 season

BTW now checked and this is true

For the 2022 season - I didn’t fully understand from the recording. Do you only calculate different distances and velocities in a 2D plane (xz, relative to the hub), and then account for the y direction (relative to the hub) in the robot code using a different turret angle?


Do you guys use WPI’s swerve helper classes or have you made your own custom library?

If you’re using your own custom swerve code, what’s special about it?

How did you determine the amount of offset in the turret relative to the tangential velocity?

Do you mind sharing the paper you referenced regarding tennis balls?


Yeah, I guess you’re right. But I’m always concerned that code changes will never work unless you tune them on a real practice field with a real robot; maybe it’s just me.
Log replay does not respect the fact that your outputs feed back into your inputs. When you tune your vision, you quite often run into the “dirty vision data cycle”, where dirty vision data makes your chassis PID shake, which then causes more dirty vision data because your camera is shaking. This effect is not really simulated if you are using replay data from a logged match.

The utility of log replay depends a lot on what you’re trying to accomplish, and it’s important to set appropriate goals when using it. With the right application it can be incredibly powerful, but it’s not a magic bullet that works in every scenario. Also keep in mind that one of the main utilities of log replay is recording data that was not logged in the first place; this does not involve changing code logic at all, just adding additional telemetry.

Making changes to code logic is most useful when looking at a small part of the code with a focused goal. In the example you gave, one place where log replay might be helpful is adjusting your latency compensation algorithm to reduce the noise caused by robot movements. The log will always show the robot shaking (because that’s what happened), but you can still ensure that your improved algorithms produce more realistic estimates under tough conditions. This is also a good example of how log replay can be taken too far: tuning a PID controller inherently involves a feedback loop between outputs and inputs (that’s the point), so it’s not generally useful to tune PID gains in log replay. However, modern robot code involves lots of complex control logic whose components can be inspected and tuned separately in replay.


That was clear and helpful, thanks a lot.
I just have the habit of always testing my code on real robots. This doesn’t contradict my opinion of AdvantageKit: a truly amazing piece of work and a useful tool for programmers.


Thank you for hosting these presentations. They were very informative! After watching the second recording, I was comparing your team’s algorithm to ours. I don’t recall you mentioning accounting for the latency of how long it takes for the game piece to leave the robot. That is, when the game piece is shot, it takes some time (not much, but not zero) to leave the robot. The robot will be in a different position when the game piece leaves the robot compared to where it was when the shoot command was executed. Did your algorithm accommodate this latency or did you find that it wasn’t significant?


The shooter (pitch, velocity, yaw with turret or swerve) continuously adjusts the exit vector of the ball. Every cycle it receives a new vector to follow and moves accordingly, so it is always in a position where the ball would hit the target if it were shot right now. Because of that, we don’t really care about this latency. Practically, the shooter doesn’t need to “know” if or when a ball is shot; it just does its thing, changing pitch, yaw, and velocity so that the ball will hit the target, and when a ball arrives it simply exits at the current exit velocity vector set by the shooter. It doesn’t matter when (how long in the past) the command to shoot was given, since the shooter’s behavior is always the same whether or not there is a ball in the system.
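
The scheme described above can be sketched like this: every loop, subtract the robot's field-relative velocity from the exit vector the ballistic solver wants, and command the shooter to the result. All names here are illustrative, not the team's actual code:

```python
import math

def shooter_setpoint(required_field_vel, robot_vel):
    """required_field_vel: (vx, vy, vz) the ball needs in the field
    frame (from the ballistic solver) to hit the target.
    robot_vel: (vx, vy) robot chassis velocity in the field frame.
    Returns (yaw_rad, pitch_rad, speed) for the shooter to track."""
    # Robot-relative exit vector: what the ball needs minus what the
    # chassis already contributes.
    vx = required_field_vel[0] - robot_vel[0]
    vy = required_field_vel[1] - robot_vel[1]
    vz = required_field_vel[2]
    yaw = math.atan2(vy, vx)                  # turret / swerve heading
    pitch = math.atan2(vz, math.hypot(vx, vy))  # hood angle
    speed = math.sqrt(vx**2 + vy**2 + vz**2)    # required exit speed
    return yaw, pitch, speed
```

Because this setpoint is recomputed every cycle from the current pose and velocity, the shooter is always "ready", and the exact instant a ball is fed through doesn't matter.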


In your pose tracking algorithm, you said that increasing a sensor’s FOM makes the filter “trust” that sensor’s reading less. Is that the same as, or close to, increasing the standard deviation in a Kalman filter model? Can you achieve something similar just by doing that?
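
For reference, in a 1-D Kalman filter the measurement's standard deviation plays exactly this "trust" role: a larger standard deviation shrinks the gain, so the update moves the estimate less. A minimal sketch (not the team's filter):

```python
def kalman_update(estimate: float, est_var: float,
                  measurement: float, meas_std: float):
    """One scalar Kalman measurement update.
    Returns (new_estimate, new_variance)."""
    meas_var = meas_std ** 2
    gain = est_var / (est_var + meas_var)  # 0 (ignore) .. 1 (fully trust)
    new_estimate = estimate + gain * (measurement - estimate)
    new_var = (1.0 - gain) * est_var
    return new_estimate, new_var

# Same measurement, two different std devs: the noisier sensor
# (analogous to a higher FOM) moves the estimate less.
trusting, _ = kalman_update(0.0, 1.0, 1.0, meas_std=0.1)
skeptical, _ = kalman_update(0.0, 1.0, 1.0, meas_std=2.0)
```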