The variable drop center tho…
Here’s the YouTube archive of the episode:
The 2056 segment starts at around the 1 hour mark.
Is team 2056’s code available somewhere, or are you planning to publicly release code soon?
In the turret portion you mentioned accounting for skew. Most teams I worked with this year aimed for the center of the outer port/vision target when shooting. Did 2056 use the angle relative to the goal to offset the target point and aim for the 3-point inner goal?
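For anyone curious what that offset looks like in practice, here's a minimal sketch of one way to do it (this is an illustration, not 2056's method): assuming the inner port sits some depth directly behind the center of the outer port, the lateral aim offset grows with the tangent of the skew angle.

```python
import math

# Hedged sketch: offsetting the outer-port target so the line of fire
# passes through the inner port. Assumes the inner port lies a fixed
# depth straight behind the outer-port center (depth value approximate).

def inner_port_aim_offset(skew_rad: float, depth_in: float = 29.25) -> float:
    """Lateral offset (inches) to apply to the outer-port aim point.

    skew_rad: angle between the robot's line of sight and the goal normal.
    depth_in: distance from outer port to inner port along the goal normal.
    """
    return depth_in * math.tan(skew_rad)
```

At zero skew the offset is zero, and it grows quickly as skew increases, which is why many teams only attempted inner-port shots from a narrow cone in front of the goal.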
They don’t typically release their code and on the stream, I believe one of their members said that they wouldn’t be releasing it this year.
At this time, we do not have plans to release our code or CAD. FRC has evolved over the last decade into much more of a software competition as readily available COTS components have become much more common. We feel our software development represents a significant portion of our competitive advantage on the field, and we wish to retain that.
If you have any specific questions about the machine or software, feel free to ask.
Wow! Amazing machine. Got a few questions - how much compression did you put on the balls? What are your release angles? And what are the 3 RPMs used for the shooter?
I don’t want to start a fight, but I think openness is important to the mission of FIRST. R15 says “Software and mechanical/electrical designs created before Kickoff are only permitted if the source files (complete information sufficient to produce the design) are available publicly prior to Kickoff.” With 2056’s no-release policy, can we assume you do not re-use any code year over year?
IMO, let’s try to keep this thread to the reveal of their robot, Legacy.
Follow the rules?
The answer is absolutely yes, unless you have a valid reason to think otherwise.
No problem at all. We often look to our previous years' robot code as a reference for examples, algorithms, calculations, and ways of structuring functionality.
R15 prevents copying large sections of unmodified code into the control software of the new ROBOT. It doesn’t mean you can’t look at previous years' code for reference.
The turret tracking is something we are very proud of and spent a great deal of time developing. When we have another shooting game in the future, we will be sure to look at this year's software for reference.
That’s cool. Re-writing probably gives good practice for students too. I’m always impressed with the lengths the top teams will go to get that extra competitive edge.
I also want to give 2056 and you credit for answering questions about your 2020 robot here. You’ve built an inspiring robot - I hope we get to see it on the field soon!
Completely copying large chunks of code is probably best avoided regardless. In 2018 we found some bugs from the original 2013 code we had been reusing; rewriting is a much better practice if you have the manpower.
The close shot is a ~30 degree release angle at ~5000 rpm. The far shot is a ~20 degree release angle at ~6500 rpm. The Control Panel shot was also ~20 degrees at ~7200 rpm. The release angles were determined empirically; the exact angle the ball releases from the shooter is always a bit of black magic.
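A rough back-of-the-envelope check on those RPMs (not 2056's numbers beyond the quoted RPM and angle): surface speed of the shooter wheel is omega times radius, and the actual ball exit speed is some fraction of that. The efficiency factor here is illustrative, exactly the kind of "black magic" term that has to be found empirically.

```python
import math

# Hedged sketch: estimating ball exit velocity from wheel RPM.
# Wheel diameter and efficiency are assumptions for illustration,
# not values from the post.

def exit_velocity_mps(rpm: float, wheel_diameter_in: float = 4.0,
                      efficiency: float = 0.5) -> float:
    radius_m = (wheel_diameter_in * 0.0254) / 2.0
    omega = rpm * 2.0 * math.pi / 60.0   # wheel speed in rad/s
    return efficiency * omega * radius_m  # ball speed < surface speed
```

With these assumed numbers, a 5000 rpm close shot works out to roughly 13 m/s of ball speed, which is in a plausible range for a close-range lob.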
What is 2056’s method for trajectory generation and path following?
How much compression did you put on the ball when shooting at these high speeds? Most teams I saw peaked at around 6000 RPM even behind the DJ booth, so I guess you went on the lower side (<1") of compression?
As a mentor for another team that built a turreted shooter that can shoot on the fly, I’m curious about some of the details of how you implemented tracking. I’m very familiar with using robot velocity and/or driver inputs to estimate feedforward voltages for the turret to help remain on-target during aggressive maneuvers (we did this in 2019, too).
First question: In the slides on Twitch it said you used motion magic for the turret. Did you really use MM for tracking on the fly? MM always tries to achieve the setpoint with zero velocity, so in cases where the “steady state” still requires turret motion (such as a “drive-by”), my experience has been that the tracking has been jerky. We had better results with plain old PID (and arbitrary feedforward).
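The "plain old PID and arbitrary feedforward" approach described above can be sketched roughly like this (names and gains are illustrative, not any team's actual code): the feedforward cancels the apparent target motion caused by chassis rotation and translation, so the PID terms only have to clean up residual error.

```python
# Hedged sketch of PID-plus-feedforward turret tracking. The
# feedforward turret velocity counters chassis yaw rate plus the
# angular rate induced by driving past the goal.

class TurretController:
    def __init__(self, kp: float, kd: float):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def update(self, target_angle: float, turret_angle: float,
               chassis_omega: float, tangential_speed: float,
               range_to_goal: float, dt: float) -> float:
        # Feedforward (rad/s): hold the line of sight fixed in the field
        # frame while the chassis rotates and translates.
        ff_velocity = -chassis_omega + tangential_speed / max(range_to_goal, 1e-6)
        error = target_angle - turret_angle
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        return ff_velocity + self.kp * error + self.kd * d_error
```

Because the steady-state turret velocity during a drive-by is carried by the feedforward term, the PID loop never has to chase a moving setpoint, which is why this tends to track more smoothly than a profile that targets zero end velocity.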
Second question: Do you attempt to “lead” the target at all (accounting for the robot velocity vector being added to the ball velocity vector)? Or did you find this wasn’t necessary? (Our implementation of this was a work-in-progress when we lost access to our robot…)
Amazing robot! How do you accomplish shooting while moving? Is it something along the lines of getting your pose relative to the target, and then adjusting based on odometry, or is there more to it than that?
We actually don’t do any trajectory generation or path following in the conventional sense. We use a pretty simplistic drive controller that’s similar to a pure pursuit controller. 1114 has posted some very good sample code that’s close to our approach.
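For readers unfamiliar with pure pursuit, the core of a controller like that is tiny (this is the textbook formulation, not 2056's or 1114's code): pick a lookahead point on the path, express it in the robot frame, and drive an arc through it.

```python
# Standard pure-pursuit steering sketch. Given a lookahead point in the
# robot frame (x forward, y left, distance L to the point), the arc
# through that point has curvature kappa = 2*y / L^2.

def pursuit_curvature(lookahead_x: float, lookahead_y: float) -> float:
    """Curvature (1/length units) to steer through a lookahead point."""
    l_squared = lookahead_x ** 2 + lookahead_y ** 2
    return 2.0 * lookahead_y / l_squared
```

A point straight ahead yields zero curvature (drive straight); points off to the side yield proportionally tighter arcs, which is what gives pure pursuit its smooth convergence onto the path.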
We had about 2" of compression. With this type of ball (2020, 2016, 2012, 2006) and a wheeled shooter, my rule of thumb is start at 25% compression and iterate from there.
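Running the numbers on that rule of thumb, assuming the 7" diameter of the 2020 ball: 25% compression gives the starting gap, and the 2" they ended up at works out to roughly 29%.

```python
# Quick arithmetic check of the compression rule of thumb above,
# assuming a 7 in ball diameter (2020 Power Cell).

BALL_DIAMETER = 7.0  # inches

def gap_for_compression(compression_in: float) -> float:
    """Wheel-to-backing gap that compresses the ball by compression_in."""
    return BALL_DIAMETER - compression_in

start_compression = 0.25 * BALL_DIAMETER       # 1.75 in starting point
print(gap_for_compression(start_compression))  # 5.25 in starting gap
print(2.0 / BALL_DIAMETER)                     # ~0.286 -> ~29% final
```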
We are in fact using Motion Magic on the turret; the acceleration is fairly aggressive, however. The relative turret rotational velocity is actually very low when doing an on-the-fly shot like what we demonstrated in our auto. In our experience, Motion Magic was more than fast enough to keep up (we also never actually tried anything else). Our control loop isn’t capable of staying on-target during really aggressive driving maneuvers, but in the use case of a turreted shooter, are you really going to be firing under those conditions?
We are leading or trailing the target based on the robot's linear and rotational velocity. This is what makes the on-the-fly shot possible. The exact implementation we’ll keep to ourselves for now. Adjusting for rotational velocity is not really necessary; how often do you actually spin in circles while shooting? Not very often, but it does look really cool when you do it and the balls actually go in. I wish we had video of that, but we never took any and, like many others, we’re currently locked out of our facility.
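Since the exact implementation is being kept private, here's a generic sketch of one common way to lead the goal while moving (an illustration of the idea, not 2056's method): treat the standstill shot as a velocity vector toward the goal, subtract the robot's field-relative velocity, and command the turret angle and flywheel speed for the resulting "virtual" shot.

```python
import math

# Hedged sketch of velocity compensation for shooting on the move.
# The ball inherits the robot's velocity, so the launcher must supply
# (desired ball velocity) minus (robot velocity).

def moving_shot(goal_bearing: float, shot_speed: float,
                robot_vx: float, robot_vy: float):
    """Return (adjusted_bearing, adjusted_speed) for a moving robot.

    goal_bearing: field-relative angle to the goal (rad)
    shot_speed:   horizontal ball speed of the standstill shot (m/s)
    robot_vx/vy:  field-relative robot velocity (m/s)
    """
    bx = shot_speed * math.cos(goal_bearing) - robot_vx
    by = shot_speed * math.sin(goal_bearing) - robot_vy
    return math.atan2(by, bx), math.hypot(bx, by)
```

With the robot stationary this degenerates to the normal shot; driving laterally shifts the aim point against the direction of travel, which is the "leading or trailing" behavior described above.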
We’ve tried both, and are using Position control for vision tracking and Motion Magic for all the other turret modes. Motion Magic also worked well for tracking, but we found that Position control gives slightly better results.
In our eyes, the reason isn’t only that the robot is moving, but a combination of that and the target info’s update rate (camera values update more slowly than internal encoders/the gyro, for example).
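One common way to handle that slow camera update rate, sketched here as an illustration (the buffer structure and names are hypothetical, not this team's code): keep a short history of gyro headings, and when a camera frame arrives, shift its target angle by however much the robot has turned since the image was actually taken.

```python
from collections import deque

# Hedged sketch of vision latency compensation using a heading history.
# Camera results describe the past; replaying the heading change since
# the image timestamp brings them up to date.

class HeadingHistory:
    def __init__(self, maxlen: int = 200):
        self.buf = deque(maxlen=maxlen)  # (timestamp, heading) samples

    def add(self, t: float, heading: float):
        self.buf.append((t, heading))

    def heading_at(self, t: float) -> float:
        # Nearest stored sample; linear interpolation would be the
        # natural refinement.
        return min(self.buf, key=lambda s: abs(s[0] - t))[1]

    def corrected_target_angle(self, target_angle_in_image: float,
                               image_time: float,
                               now_heading: float) -> float:
        # Subtract the turn accumulated since the image was captured.
        return target_angle_in_image - (now_heading - self.heading_at(image_time))
```

Between camera frames the same history lets the tracking run open-loop on the fast encoder/gyro data, which is exactly why the encoders and gyro matter even when a camera is doing the targeting.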