After a very busy season for my team, I decided to try to improve my code so that when I graduate, our programmers will have well-written code to look back on. After I finished removing most of the spaghetti code I wrote in the pits, I wanted to do more. During the season I tried (and failed) to get vision-based shooting working on our robot. One of the problems was that I couldn't find any good examples to look at. Before I start writing the actual code, I want to make sure I understand the math behind vision-based shooting. Both for my understanding and everyone else's, I made a graph in Desmos with a visual example of how it works.

I notice two problems. The first is that you are assuming the trajectory of the note will be completely straight, which it won't be.

The second is that nearly every FRC team just uses an interpolation table: they determine shooter angles that work at different increments of, say, 1 ft, and then linearly interpolate between them for any given distance from the goal. You can do this with something like an `InterpolatingDoubleTreeMap`, included in WPILib.
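To make the idea concrete, here is a minimal sketch of such a table in plain Java using a `TreeMap`. WPILib's `InterpolatingDoubleTreeMap` gives you this behavior out of the box; the distance/angle pairs below are made-up illustration values, not real calibration data.

```java
import java.util.TreeMap;

// Minimal sketch of a distance -> shooter-angle interpolation table.
// WPILib's InterpolatingDoubleTreeMap provides the same behavior;
// any numbers used with this class are illustrative only.
public class AngleTable {
    private final TreeMap<Double, Double> table = new TreeMap<>();

    public void put(double distanceMeters, double angleDegrees) {
        table.put(distanceMeters, angleDegrees);
    }

    public double get(double distance) {
        var lo = table.floorEntry(distance);
        var hi = table.ceilingEntry(distance);
        if (lo == null) return hi.getValue();   // closer than the nearest point: clamp
        if (hi == null) return lo.getValue();   // farther than the farthest point: clamp
        if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit
        // Linear interpolation between the two surrounding calibration points.
        double t = (distance - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}
```

You fill it with `put(distance, angle)` pairs measured on the practice field and then query `get(currentDistance)` every loop.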

The flight path was something I thought about, and I'm hoping I can add a bias term to account for any drop. If you think that wouldn't work, how would you calculate a parabola for the note?

Also, I'm assuming that an interpolation table will work better with more data, which isn't really an option right now. Some teams may have the resources to finish their robot early and have time for something like that, but mine usually doesn't. I will definitely try it in the off-season, when I have the testing time, but I might have to hold off on using it during the season.

If you know of any other techniques that don’t require as much testing time, I would love to hear them. Thanks for the tips!

This looks like it would bounce right back down, but there’s *a chance* the note gets deflected by the top.

How does it fare in the real world?

In my experience, an interpolation table with just a few decent data points can be just as good as a mathematical model.

I spent the better part of build season experimenting with Desmos trajectory sims (shoutout to Team 95's build blog for some helpful info) with the same idea as you, and I think I spent about as much time tuning the simulation to the robot as I did creating an interpolation table. With a defined procedure and a bit of practice, extremely accurate and reliable interpolation tables can be made in under an hour (and I bet I could make a serviceable one on a competition practice field in less than half an hour). I will say that making sims helped me speed up the interpolation data gathering, since I knew what the shooter angle/distance graph was supposed to look like and could "fill in the dots" between data points.

I'm not entirely sure how to do that, as I don't know a ton of physics, but I'm pretty sure it should just be fairly simple kinematics equations. You will need to know the exit velocity of the note, which you can assume to be a bit less than the surface speed of your shooter rollers, since the transfer isn't 100% efficient.
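A rough sketch of those kinematics, assuming no air drag: estimate the exit speed as roller surface speed times an efficiency factor (the 0.7 below is a made-up placeholder you would measure on your own robot), then compute how far the note falls below its straight-line aim path over a given horizontal distance.

```java
// Rough projectile sketch, no air drag: how far a note drops below the
// straight-line aim path after covering horizontal distance x.
// The exit speed is assumed to be roller surface speed scaled by a
// measured efficiency factor; all numbers here are illustrative.
public class DropEstimate {
    static final double G = 9.81; // gravitational acceleration, m/s^2

    // Drop (meters) below the straight-line path after horizontal distance x.
    static double drop(double exitSpeed, double launchAngleRad, double x) {
        double vx = exitSpeed * Math.cos(launchAngleRad); // horizontal speed
        double t = x / vx;            // time to cover horizontal distance x
        return 0.5 * G * t * t;       // gravity drop accumulated in that time
    }

    public static void main(String[] args) {
        double rollerSurfaceSpeed = 20.0; // m/s, from wheel RPM * wheel radius
        double efficiency = 0.7;          // placeholder; measure on your robot
        double v = rollerSurfaceSpeed * efficiency;
        System.out.printf("drop at 3 m: %.3f m%n", drop(v, Math.toRadians(45), 3.0));
    }
}
```

This is exactly the "bias term" idea: the drop grows roughly with the square of the flight time, which is why one fixed bias won't hold across all distances.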

Here's a Desmos calculator for ballistics with air drag. As complicated as this looks, it is still making a lot of simplifying assumptions, especially for a flying torus like a Note rather than a sphere. Even disregarding these simplifications that contribute to inaccuracy, you still have the problem of finding the actual exit velocities of the Notes based on shooter speed. Unless you are doing something unusual, actual exit velocity is less than the theoretical exit velocity based on shooter wheel speed.

An interpolation table IS the fast, low resource way to set up your shooter. If you have time to run your robot, you can shoot from a series of distances and dial in shooter speed and launch angle. We put in a new interpolation table at our first regional in a 15 minute practice field slot when we found that the new Notes being used at the event flew differently than the used Notes we calibrated with at home.

Trying to tune a theoretical model to give performance equal to or better than an interpolation table is going to take longer. You still need the on-field time to take a range of shots and modify the model parameters to get accurate results. That’s the best case scenario if you happen to start with a model that is appropriate enough to be tuned. If you have to modify the model form because no tuning gives you accurate results at all distances, you are looking at way more development time than just experimentally determining an interpolation table with 6-10 points.

As mentioned before, the problem with this graph is that it assumes the note will be shot in a straight line.

Another problem is that this won’t allow for shooting while moving.

The reason the note isn't shot in a straight line is gravity (mostly; we found other forces don't have much effect here). Here is a Desmos example of the physical calculation that takes the shooting wheel speeds (the tangential velocity of the wheels, maybe multiplied by some coefficient if you see you're losing some speed) and finds the optimal angle to shoot at, given that the note is affected by gravity.
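Their Desmos sheet isn't reproduced here, but the standard no-drag, gravity-only version of that calculation has a closed form. This is a generic textbook sketch, not necessarily the exact math in their calculator:

```java
// Standard no-drag solution for the launch angle that hits a target at
// horizontal distance x and height y above the shooter, given exit speed v.
// Generic projectile math, not necessarily the poster's exact Desmos sheet.
public class LaunchAngle {
    static final double G = 9.81; // m/s^2

    // Returns the lower of the two solutions (the flatter shot), in radians,
    // or NaN if the target is out of range at this exit speed.
    static double angleFor(double v, double x, double y) {
        double v2 = v * v;
        // Discriminant of tan(theta) = (v^2 +/- sqrt(v^4 - g(gx^2 + 2yv^2))) / (gx)
        double disc = v2 * v2 - G * (G * x * x + 2 * y * v2);
        if (disc < 0) return Double.NaN; // unreachable at this speed
        return Math.atan((v2 - Math.sqrt(disc)) / (G * x));
    }
}
```

The "minus" branch gives the flat, fast shot you usually want into the speaker; the "plus" branch is the high lob.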

Then we put all of this into a 3D vector, subtract the robot's velocity vector, and get a new vector that compensates for shooting while moving.
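That subtraction step can be sketched in a few lines. This is my reading of the description above, assuming field-relative velocities; the coordinate convention and any numbers are illustrative, not taken from their code.

```java
// Sketch of the shoot-on-the-move compensation described above: the shooter
// must impart (desired note velocity) minus (robot velocity), so that the
// sum seen by the field is the desired velocity. Field-relative coordinates
// are assumed; this is an illustration, not the poster's actual code.
public class ShootOnMove {
    record Vec3(double x, double y, double z) {
        Vec3 minus(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        double norm() { return Math.sqrt(x * x + y * y + z * z); }
    }

    // Velocity the shooter must produce so (shooter + robot) = desired.
    static Vec3 compensate(Vec3 desiredNoteVel, Vec3 robotVel) {
        return desiredNoteVel.minus(robotVel);
    }
}
```

From the compensated vector you can then recover a new turret yaw with `atan2(y, x)`, a new hood pitch from the vertical component, and a new exit speed from its norm.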

This is our code.

Note that our solution does require full-field localization, but either way, I do recommend having that.

Thanks for the advice! I had no idea that you needed so few data points to make an interpolation work accurately. The Desmos graph is also easy enough to understand because of how well documented it is.

In the interest of time, I only made the height of the speaker, the size/angle of the opening, and the height of the AprilTag to scale. The rest of the speaker I just eyeballed, so it is not dimensioned correctly. Sorry for the confusion.