Team 2056 OP Robotics Reveal

That’s cool. Re-writing probably gives good practice for students too. I’m always impressed with the lengths the top teams will go to for that extra competitive edge.

I also want to give 2056 and you credit for answering questions about your 2020 robot here. You’ve built an inspiring robot - I hope we get to see it on the field soon!


Completely copying large chunks of code is probably best avoided regardless. In 2018 we found some bugs from the original 2013 code we’ve been reusing; rewriting is a much better practice if you have the manpower.



The close shot is a ~30 degree release angle at ~5000 rpm. The far shot is a ~20 degree release angle at 6500 rpm. The Control Panel shot was also ~20 degrees at ~7200 rpm. The release angles were determined empirically; the exact angle the ball releases from the shooter is always a bit of black magic.
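As a rough sanity check on those numbers, exit speed scales with wheel surface speed times a slip/efficiency factor. The wheel diameter and ~50% transfer efficiency below are my own guesses, not values from this post:

```python
import math

WHEEL_DIAMETER_IN = 4.0  # assumed 4" shooter wheel, not stated in the post

def exit_velocity_fps(rpm, efficiency=0.5):
    """Rough ball exit speed: wheel surface speed times a slip/efficiency
    factor (~0.5 is a common starting guess for a single wheel + hood)."""
    surface_fps = rpm / 60.0 * math.pi * (WHEEL_DIAMETER_IN / 12.0)
    return surface_fps * efficiency

# (release angle deg, rpm) as described in the post
shots = {
    "close":         (30.0, 5000),
    "far":           (20.0, 6500),
    "control_panel": (20.0, 7200),
}

for name, (angle_deg, rpm) in shots.items():
    print(f"{name}: ~{exit_velocity_fps(rpm):.0f} ft/s at {angle_deg:.0f} deg")
```

Treat the efficiency factor as a tuning knob — it absorbs all the compression and contact-patch black magic mentioned above.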


What is 2056’s method for trajectory generation and path following?


How much compression did you put on the ball when shooting at these high speeds? Most teams I saw peaked at around 6000 RPM even behind the DJ booth, so I guess you went on the lower side (<1") of compression?

As a mentor for another team that built a turreted shooter that can shoot on the fly, I’m curious about some of the details of how you implemented tracking. I’m very familiar with using robot velocity and/or driver inputs to estimate feedforward voltages for the turret to help remain on-target during aggressive maneuvers (we did this in 2019, too).

First question: In the slides on Twitch it said you used motion magic for the turret. Did you really use MM for tracking on the fly? MM always tries to achieve the setpoint with zero velocity, so in cases where the “steady state” still requires turret motion (such as a “drive-by”), my experience has been that the tracking has been jerky. We had better results with plain old PID (and arbitrary feedforward).
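For context, a minimal sketch of the PID-plus-feedforward structure described above — all names and gains are illustrative, not anyone's actual code:

```python
class TurretTracker:
    """Plain PID plus a feedforward term: the feedforward predicts how fast
    the target bearing is changing due to the robot's own motion, so the PID
    only has to trim the residual error (no zero-velocity setpoint jerk)."""

    def __init__(self, kp=4.0, kd=0.2):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def update(self, bearing_error_rad, bearing_rate_ff_rad_s, dt):
        """bearing_error_rad: turret-to-target angle error (vision/odometry).
        bearing_rate_ff_rad_s: predicted target bearing rate computed from
        the chassis velocity (the 'arbitrary feedforward' term).
        Returns a turret velocity command in rad/s."""
        d_error = (bearing_error_rad - self.prev_error) / dt
        self.prev_error = bearing_error_rad
        # FF carries the steady motion; PID corrects what's left over.
        return bearing_rate_ff_rad_s + self.kp * bearing_error_rad + self.kd * d_error
```

During a steady drive-by, the feedforward term alone keeps the turret sweeping, which is why this avoids the jerkiness of repeatedly re-profiling to a moving setpoint.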

Second question: Do you attempt to “lead” the target at all (accounting for the robot velocity vector being added to the ball velocity vector)? Or did you find this wasn’t necessary? (Our implementation of this was a work-in-progress when we lost access to our robot…)


Amazing robot! How do you accomplish shooting while moving? Is it something along the lines of getting your pose relative to the target, and then adjusting based on odometry, or is there more to it than that?



We actually don’t do any trajectory generation or path following in the conventional sense. We use a pretty simplistic drive controller similar to a pure pursuit controller. 1114 has some very good sample code posted that works much the same way.
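For readers unfamiliar with the technique, the core of a pure pursuit controller is just a curvature command toward a lookahead point. This is a generic illustration, not 2056's or 1114's code:

```python
import math

def pure_pursuit_curvature(robot_x, robot_y, robot_heading, goal_x, goal_y):
    """Curvature command toward a lookahead point on the path:
    curvature = 2 * lateral_offset / lookahead_distance^2.
    Positive curvature turns toward the goal's side of the robot."""
    dx, dy = goal_x - robot_x, goal_y - robot_y
    # Lateral offset of the goal expressed in the robot frame
    lateral = -math.sin(robot_heading) * dx + math.cos(robot_heading) * dy
    lookahead_sq = dx * dx + dy * dy
    return 2.0 * lateral / lookahead_sq
```

The drivetrain then turns that curvature plus a desired speed into left/right wheel velocities; the "simplistic" part is that the goal point just slides along the path ahead of the robot.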


We had about 2" of compression. With this type of ball (2020, 2016, 2012, 2006) and a wheeled shooter, my rule of thumb is start at 25% compression and iterate from there.


We are in fact using Motion Magic on the turret. The accel is fairly aggressive, however. The relative turret rotational velocity is actually very low when doing an on-the-fly shot like what we demonstrated in our auto. In our experience, Motion Magic was more than fast enough to keep up (we also never actually tried anything else). Our control loop isn’t capable of maintaining on-target during really aggressive driving maneuvers, but in the use case of a turreted shooter, are you really going to be firing under those conditions?

We are leading or trailing the target based on the robot’s linear and rotational velocity. This is what makes the on-the-fly shot possible. The exact implementation we’ll keep to ourselves for now. Adjusting for the rotational velocity is not really necessary — how often do you actually spin in circles while shooting? Not very often, but it does look really cool when you do it and the balls actually go in. I wish we had video of that. We just never took any and, like many others, we’re currently locked out of our facility.
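Since the exact implementation is being kept private, here's one common first-order way to lead the target — definitely not claiming this is 2056's method. The idea is that the ball's ground velocity is the launch velocity plus the robot velocity, so you command a launch vector whose sum points at the target:

```python
import math

def aim_with_lead(target_bearing_rad, ball_speed_mps, robot_vx_mps, robot_vy_mps):
    """First-order shoot-on-the-move lead, in the horizontal plane of the
    robot frame. Assumes the desired ball GROUND velocity should point at
    the target with magnitude ball_speed_mps; returns the launch bearing
    and launch speed that achieve it after the robot velocity is added."""
    # Desired ball ground-velocity vector, pointing at the target
    want_x = ball_speed_mps * math.cos(target_bearing_rad)
    want_y = ball_speed_mps * math.sin(target_bearing_rad)
    # Launch velocity must make up the difference once robot motion is added
    launch_x = want_x - robot_vx_mps
    launch_y = want_y - robot_vy_mps
    return math.atan2(launch_y, launch_x), math.hypot(launch_x, launch_y)
```

Driving toward the goal lowers the required launch speed (so you'd trim flywheel RPM down); strafing rotates the turret opposite the strafe direction. A fuller solution would also iterate on time-of-flight, which this sketch ignores.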


We’ve tried both, and are using Position control for vision tracking and Motion Magic for all the other turret modes. Tracking also worked well in Motion Magic mode, but we found that Position control gave slightly better results.
In our eyes, the reason isn’t only the fact that the robot is moving, but a combination of that and the target info’s update rate (camera values update more slowly than internal encoders/gyro, for example).
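One common way to handle that camera-vs-encoder rate mismatch is latency compensation: buffer recent turret/gyro angles with timestamps, and when a vision frame arrives, reference it against the angle the robot had when the frame was actually captured. The structure and names here are illustrative, not this team's code:

```python
from collections import deque

class LatencyCompensator:
    """Timestamped angle history for reconciling slow camera updates
    with fast encoder/gyro readings."""

    def __init__(self, maxlen=100):
        self.history = deque(maxlen=maxlen)  # (timestamp_s, angle_rad) pairs

    def record(self, timestamp_s, angle_rad):
        """Call every control loop with the current turret/gyro angle."""
        self.history.append((timestamp_s, angle_rad))

    def angle_at(self, timestamp_s):
        """Closest buffered sample to the camera's capture time."""
        return min(self.history, key=lambda s: abs(s[0] - timestamp_s))[1]

    def corrected_error(self, vision_error_rad, capture_time_s, now_angle_rad):
        """Re-express a stale vision error against the current angle:
        the target bearing at capture time minus where we are now."""
        target = self.angle_at(capture_time_s) + vision_error_rad
        return target - now_angle_rad
```

This lets the fast Position loop run on fresh encoder data while the slow vision measurement only corrects the target estimate, rather than the loop chasing a stale error directly.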