View Full Version : Team 254 Presents: FRC 2016 Code
maxwelly
09-10-2016, 20:32
Hi everyone, Team 254 is happy to share the code base for our 2016 robot, Dropshot.
This year, for the first time, we provide detailed code documentation in the README file and in each Java class. We explain each class and its interactions with other components. In this repository, we also include the code for the CheesyVision computer vision app for Android.
Robot Code: https://github.com/Team254/FRC-2016-Public
thatprogrammer
09-10-2016, 20:49
Excellent code!
Just one immediate question: Why did you choose to post this just now, despite it being public on your GitHub repo for quite some time? :]
dirtbikerxz
09-10-2016, 22:18
COOL! Thanks for the CODE guys! Definitely useful to look at. Now all we need are the CAD files. :D Feel free to PM it to me any time. :P
Anyway, good job on this guys.
Bkeeneykid
09-10-2016, 22:32
I can't wait to give this to our programmers to ogle at. Awesome work every time, 254!
Poseidon5817
09-10-2016, 23:35
Excellent code!
Just one immediate question: Why did you choose to post this just now, despite it being public on your GitHub repo for quite some time? :]
Same, I read all their code weeks ago :D
Cothron Theiss
10-10-2016, 00:08
Excellent code!
Just one immediate question: Why did you choose to post this just now, despite it being public on your GitHub repo for quite some time? :]
Same, I read all their code weeks ago :D
It looks like some of the CheezDroid code was updated recently. They may have wanted to do some final cleaning and get everything out there before doing the CD release.
Now all we need are the CAD files.
...
We have no plans to release the CAD of the robot but are happy to answer any questions you have.
I'd love to look through their CAD files, but I can understand why they don't release those.
kylelanman
10-10-2016, 00:15
The work you did around kinematics, poses, and latency compensation was truly remarkable and inspiring. Dropshot was my favorite robot to watch and the best designed robot for Stronghold in my opinion.
I do have one question. Why were JNI and the C++ OpenCV libs used as opposed to the Java OpenCV libs? If I recall correctly, Jared said at the world championship talk that the performance between the C++, Java, and Python OpenCV libs was essentially the same.
Jared Russell
10-10-2016, 01:40
I do have one question. Why were JNI and the C++ OpenCV libs used as opposed to the Java OpenCV libs? If I recall correctly, Jared said at the world championship talk that the performance between the C++, Java, and Python OpenCV libs was essentially the same.
3 reasons:
1) We found that of the various methods available for grabbing and decoding an image out of the camera buffer, this was the fastest.
2) It let us minimize memory allocation and buffer copying in the most performance-critical part of the code.
3) We already had a working C++ vision prototype for the Tegra, so all we had to do was copy and paste.
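For readers who haven't used JNI before, the general shape of such a binding is only a few lines on the Java side. The following is an illustrative sketch with made-up names, not the actual CheezDroid classes: the camera buffer is handed to a native method, and a C++ library linked against the native OpenCV libs does the decode and processing in place, with no extra copies or allocations in Java.

// Illustrative JNI binding sketch (hypothetical names, not from the 2016 repo).
public class NativeVision {
    static {
        // Loads libnative_vision.so, a C++ library linked against the OpenCV libs.
        System.loadLibrary("native_vision");
    }

    // The native side decodes the raw camera buffer (e.g. YUV) and runs the
    // OpenCV pipeline directly on it, avoiding per-frame allocation in Java.
    public static native boolean processFrame(byte[] cameraBuffer, int width, int height);
}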
Turing'sEgo
10-10-2016, 01:58
This is an extremely large project in the scope of FRC (but excellently done). How does the team handle development? Do you follow a development methodology? I assume you have multiple students working on various parts at the same time; how does version control work? Check out with merge requests?
Non-sequitur question: Where do your programmers ultimately end up beyond FRC? Year after year, quality code is released (and has been since at least 2011), and the knowledge your graduated programmers must have is well beyond that of your typical freshman CS student.
Jared Russell
10-10-2016, 14:27
This is an extremely large project in the scope of FRC (but excellently done). How does the team handle development? Do you follow a development methodology? I assume you have multiple students working on various parts at the same time; how does version control work? Check out with merge requests?
Non-sequitur question: Where do your programmers ultimately end up beyond FRC? Year after year, quality code is released (and has been since at least 2011), and the knowledge your graduated programmers must have is well beyond that of your typical freshman CS student.
It's pretty informal. We all worked on our various pieces and committed to head / emailed patches as we went (occasionally there was a long-running branch, but we tried to avoid that). Typically, in the lab or at competition, we had one person on a laptop push all of the changes at the end of the night (and we'd take another computer and find a quiet spot to iron out a problem in parallel if it took more time, then merge). We did occasionally have build breakages or regressions (most of our students are new to git), but nothing that was ever particularly difficult to iron out.
I can't really answer the second question very well - I have only worked with the team for 3 years so all of the programmers are either still on the team or in college. Perhaps someone else can speak to history before that. More generally, I can speak to the team's approach to programming the robot for the last 3 years. Programming an entire 254 robot to the level of performance the team demands is a challenge for experienced software engineers, let alone high school students (and on our team at least, it seems that the more capable and brilliant the student, the more other demands are made on their time by school, the team, or other activities). We try to divvy up tasks among the team based on interest, ability, and time commitment; both students and mentors make direct contributions to the code.
For younger students, it is expected that by and large their job is to learn, be self-starting/pursue additional learning opportunities, and make small contributions as they are able. The more experienced students should take ownership (or at least, co-ownership with a mentor) over some area of the code. For example, in 2014 we had a student (now in college) take ownership over our autonomous spline-following routine (deriving all the math and peer-programming the implementation with me). He definitely graduated high school knowing more about robot navigation than most college graduates. Similarly, a student last year made great contributions to our vision algorithm; he now knows more about computer vision than most college students.
Most of the programming students from last year are returning this season (and some of the mentors are stepping aside), so I'm looking forward to seeing what they do next year!
Thank you guys for your great contribution to the FRC programming community.
We can all learn from your team :)
Tom Bottiglieri
10-10-2016, 14:53
A quick note on the vision app and the motivation behind using Android.
We started the year under the mindset that we could build a protected-zone shooter, but quickly realized through some prototypes and strategy sessions that there was serious value to be had by building a small robot that could both go under the bar and shoot from anywhere near the tower. We knew this would require a very good vision system for our robot and got to work trying to make something run on the NVIDIA Jetson board. This board proved to be very capable of processing frames (in fact, the best performance we got all year was an early prototype running on this board), but had some issues with power-up/down reliability. We debated using a computer with a built-in battery, but settled on Android because it was cheaper and "cooler".
The app was designed to work well on the hardware we selected for the robot (Nexus 5), but we have seen weird bugs on other devices. For instance, the framerate is worse and the picture is upside down on my Nexus 5X. I'm sure there is a perfectly reasonable cause for this; we just haven't felt the need to fix bugs for platforms that weren't on our robot. If you find bugs in the app or make it work on a new platform, feel free to submit a pull request and our students will review it.
Eugene Fang
10-10-2016, 17:07
the picture is upside down on my Nexus 5X.
Apparently the camera module in the 5X is upside down due to packaging reasons. There's a software flag that apps are supposed to read to get camera orientation, but many (like the Augmented Reality feature in eDrawings) don't do it right...
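For anyone chasing the same bug on another device: with the camera2 API, the flag Eugene mentions is CameraCharacteristics.SENSOR_ORIENTATION. Here is a minimal sketch of reading it and working out how much to rotate the frame; this is the generic approach from the Android documentation, not necessarily what the app does.

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.view.Surface;

public class CameraOrientationHelper {
    // Degrees the camera frame must be rotated clockwise to appear upright,
    // using the back-facing camera convention from the Android documentation.
    public static int getRequiredRotation(Context context, String cameraId, int displayRotation)
            throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
        Integer sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        // Typically 90; reportedly 270 on devices like the Nexus 5X with a flipped module.
        int sensorDegrees = (sensorOrientation == null) ? 0 : sensorOrientation;

        int displayDegrees;
        switch (displayRotation) {
            case Surface.ROTATION_90:  displayDegrees = 90;  break;
            case Surface.ROTATION_180: displayDegrees = 180; break;
            case Surface.ROTATION_270: displayDegrees = 270; break;
            default:                   displayDegrees = 0;   break;
        }
        return (sensorDegrees - displayDegrees + 360) % 360;
    }
}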
How were you guys able to calculate the traction?
Jared Russell
20-10-2016, 00:12
How were you guys able to calculate the traction?
I'm not sure what you're referring to? We did have a "traction control" mode that used closed-loop speed feedback along with a gyro to cross defenses while remaining straight, but this didn't require calculating traction.
How do you check your position in auto after you crossed a defense like the moat, where your wheels might be turning more than you're actually moving? Or did you not run into that problem?
Jared Russell
20-10-2016, 11:10
How do you check your position in auto after you crossed a defense like the moat, where your wheels might be turning more than you're actually moving? Or did you not run into that problem?
1) we went slowly enough that worst case slip was limited.
2) we used closed-loop velocity control on the wheels to ensure that even if one side of the drive train momentarily lost traction, we didn't suddenly lose a ton of encoder ticks.
3) in the end, we didn't need to be that precise - our auto-aim could make the shot from anywhere in the courtyard, and on the way back we either just used a conservative distance (for our one-ball mode) or used a reflective sensor to ensure we didn't cross the center tape (for our two-ball modes).
Oh, OK, so your "traction control" was making sure that your robot remained straight. By any chance, what was the logic behind making the robot stay straight?
Jared Russell
20-10-2016, 15:26
Oh, OK, so your "traction control" was making sure that your robot remained straight. By any chance, what was the logic behind making the robot stay straight?
Yeah - the logic for staying straight is here (https://github.com/Team254/FRC-2016-Public/blob/master/src/com/team254/frc2016/subsystems/Drive.java#L385). There's a PID controller that compares our actual heading to the desired heading, and adjusts the desired velocities of the left and right sides of the drive accordingly.
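For anyone who doesn't want to dig through Drive.java, the idea boils down to a few lines. This is a simplified sketch with a made-up gain, not the actual implementation:

// Simplified heading-hold sketch; see Drive.java in the repo for the real version.
public class HeadingHoldSketch {
    private static final double kHeadingKp = 0.02; // made-up gain, tune on the robot

    // Returns {leftVelocitySetpoint, rightVelocitySetpoint} for the closed-loop
    // velocity controllers on each side of the drive. Sign convention assumes a
    // positive heading error means the robot should turn left.
    public static double[] driveStraight(double baseVelocity,
                                         double desiredHeadingDegrees,
                                         double currentHeadingDegrees) {
        double error = desiredHeadingDegrees - currentHeadingDegrees;
        // Wrap the error into (-180, 180] so we always correct the short way around.
        while (error > 180.0) error -= 360.0;
        while (error <= -180.0) error += 360.0;

        double correction = kHeadingKp * error * baseVelocity;
        return new double[] {baseVelocity - correction, baseVelocity + correction};
    }
}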
I'm not sure what you're referring to? We did have a "traction control" mode that used closed-loop speed feedback along with a gyro to cross defenses while remaining straight, but this didn't require calculating traction.
What gyro did you guys use? I saw the Spartan Board on your bot at Chezy Champs, so I assume that you used the ADXRS453 that is built into it, but I didn't have a chance to get a closer look.
Jared Russell
20-10-2016, 15:42
What gyro did you guys use? I saw the Spartan Board on your bot at Chezy Champs, so I assume that you used the ADXRS453 that is built into it, but I didn't have a chance to get a closer look.
Yes.
kiettyyyy
21-10-2016, 02:40
I can't really answer the second question very well - I have only worked with the team for 3 years so all of the programmers are either still on the team or in college. Perhaps someone else can speak to history before that.
I might not be the best person to answer this either; however, I did have one of their students end up as one of my interns within Qualcomm's Corporate R&D group back in 2013.
He ended up being one of my top interns, able to keep up with the PhD candidates in programming, control theory and hardware concepts... without taking a single college course.
gerthworm
21-10-2016, 08:55
Awesome job guys!
I saw that in previous years you had a robot-hosted website interface for status & tuning, although I didn't see that this year. I might just be missing it... Assuming I'm not, did you have a reason for not carrying that forward?
Jared Russell
21-10-2016, 11:12
Awesome job guys!
I saw that in previous years you had a robot-hosted website interface for status & tuning, although I didn't see that this year. I might just be missing it... Assuming I'm not, did you have a reason for not carrying that forward?
If we had infinite time, I'm sure we would have. Instead, we used a combination of SmartDashboard and our own web interface (https://github.com/Team254/FRC-2016-Public/tree/master/installation/logger) for Network Tables to monitor feedback, and used the Talon SRX configuration page for adjusting most of our gains.
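For teams that just want the monitoring half of that: the WPILib side is nothing exotic. Each subsystem publishes its interesting values to Network Tables once per loop, and any client (SmartDashboard, a custom web page, a logger) can read them. A trivial sketch with illustrative key names, not code from the repo:

import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Illustrative telemetry publisher (not a class from the repo): each subsystem
// pushes its interesting values to NetworkTables once per loop, and any client
// (SmartDashboard, a custom web page, etc.) can display or log them.
public class DriveTelemetry {
    public static void publish(double leftVelocityIps, double rightVelocityIps,
                               double gyroHeadingDegrees, boolean onTarget) {
        SmartDashboard.putNumber("drive/left_velocity_ips", leftVelocityIps);
        SmartDashboard.putNumber("drive/right_velocity_ips", rightVelocityIps);
        SmartDashboard.putNumber("drive/gyro_heading_deg", gyroHeadingDegrees);
        SmartDashboard.putBoolean("drive/on_target", onTarget);
    }
}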
ranlevinstein
21-10-2016, 15:22
Thank you guys for sharing your amazing code!
I have a couple of questions:
Why did you choose to follow a path instead of a trajectory during auto this year?
Why did you choose the adaptive pure pursuit controller instead of other controllers?
gerthworm
21-10-2016, 16:32
If we had infinite time, I'm sure we would have.
The struggle is real. Nice, thanks again for posting and answering questions! I always love going through your stuff, so many great and well-implemented ideas!
Jared Russell
22-10-2016, 01:05
Why did you choose to follow a path instead of a trajectory during auto this year?
Great question! I assume you are referring to (my) definition of path vs. trajectory from the motion profiling talk (these definitions are hardly universal).
Path: An ordered list of states (where we want to go, and in what order). Paths are speed-independent.
Trajectory: A time-indexed list of states (at each time, where we want to be). Because each state needs to be reached at a certain time, we also get a desired speed implicitly (or explicitly, depending on your representation).
In 2014 and 2015, our controllers followed trajectories. In 2016, our drive followed paths (the controller was free to determine its own speed). Why?
Time-indexed trajectories are planned assuming you have a pretty good model of how your robot will behave while executing the plan. This is useful because (if your model is good) your trajectory contains information about velocity, acceleration, etc., that you can feed to your controllers to help follow it closely. This is also nice because your trajectory always takes the same amount of time to execute. But if you stray really far from the trajectory, you can end up with weird stuff happening...
With a path only, your controller has more freedom to take a bit of extra time to cross a defense, straighten out the robot after it gets cocked sideways, etc. This helps if you don't have a good model of how your robot is going to move - and a pneumatic wheeled robot climbing over various obstacles is certainly hard to model.
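To make the distinction concrete, here is roughly what the two representations look like as data. These are illustrative types only, not classes from the repo:

import java.util.List;

// A path is an ordered list of states: where to go, and in what order.
// There is no timing information; the follower picks its own speed.
class Waypoint {
    double x, y;        // field position, e.g. inches
    double speedLimit;  // optional cap on speed near this point
}

// A trajectory is time-indexed: at time t, be at (x, y) moving this fast.
// Velocity and acceleration fall out of the time indexing.
class TrajectorySample {
    double t;           // seconds from the start of the trajectory
    double x, y;
    double velocity;
    double acceleration;
}

class MotionPlans {
    List<Waypoint> path;                // 2016-style: speed-independent
    List<TrajectorySample> trajectory;  // 2014/2015-style: timing baked in
}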
Why did you choose the adaptive pure pursuit controller instead of other controllers?
Simplicity. A pure pursuit controller is basically a P-only controller on cross track error, but somewhat easier to tune. Adaptive pure pursuit is sort of akin to a PD controller (the only difference is a fudge factor in how far you look ahead). If you only have a day to get auto mode working, and the robot is being repaired up on a table while you are coding, then pure pursuit requires very little time to get tuned once you are back on the floor :)
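For reference, the core of a pure pursuit step really is tiny. A generic sketch of the algorithm (not 254's adaptive controller): pick a lookahead point on the path a distance L ahead of the robot, express it in the robot's frame, and drive the arc through it, which has curvature 2*y / L^2.

// Generic pure pursuit sketch (not the controller from the repo).
public class PurePursuitSketch {
    // lookaheadX/lookaheadY: the lookahead point in the robot's frame
    // (+x forward, +y left). Returns {leftSpeed, rightSpeed} for a differential drive.
    public static double[] arcToLookahead(double lookaheadX, double lookaheadY,
                                          double speed, double trackWidth) {
        double lookaheadDistSq = lookaheadX * lookaheadX + lookaheadY * lookaheadY;
        // Curvature of the arc that starts along the robot's current heading
        // and passes through the lookahead point.
        double curvature = 2.0 * lookaheadY / lookaheadDistSq;
        // Convert (speed, curvature) to wheel speeds: omega = speed * curvature.
        double leftSpeed = speed * (1.0 - curvature * trackWidth / 2.0);
        double rightSpeed = speed * (1.0 + curvature * trackWidth / 2.0);
        return new double[] {leftSpeed, rightSpeed};
    }
}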
apache8080
26-10-2016, 22:15
Thanks for all of the great resources.
I had a few questions on your vision code:
Are you guys calculating distance from the goal to adjust the hood? If so, how?
If you guys would have used the Jetson TX1, would you have considered using the ZED stereocamera from Stereolabs (https://www.stereolabs.com/zed/specs/)?
Jared Russell
26-10-2016, 23:58
Are you guys calculating distance from the goal to adjust the hood? If so, how?
Yep. I'll point you to a few places in the code that help explain how.
First, in the Android app, we find the pixel coordinates corresponding to the center of the goal:
https://github.com/Team254/FRC-2016-Public/blob/master/vision_app/app/src/main/java/com/team254/cheezdroid/VisionTrackerGLSurfaceView.java#L131
...and then turn those pixel coordinates into a 3D vector representing the "ray" shooting out of the camera towards the target. The vector has an x component (+x is out towards the goal) that is always set to 1; a y component (+y is to the left in the camera image); and a z component (+z is up). This vector is unit-less, but the ratios between x, y, and z define angles relative to the back of the phone. The math behind how we create this vector is explained here (https://en.wikipedia.org/wiki/Pinhole_camera_model).
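In rough code, that conversion is just an inverted pinhole model. A simplified sketch (single focal length, signs depend on your conventions; not the exact code from the app):

// Simplified pinhole-model inversion: pixel (u, v) -> unit-less ray (x, y, z).
// focalLengthPixels, centerU, centerV are the camera intrinsics.
public class PixelToRay {
    public static double[] toRay(double u, double v,
                                 double focalLengthPixels,
                                 double centerU, double centerV) {
        // Frame convention from the post: +x out toward the goal, +y left, +z up.
        // Pixel u grows to the right and v grows downward, hence the minus signs.
        double x = 1.0;
        double y = -(u - centerU) / focalLengthPixels;
        double z = -(v - centerV) / focalLengthPixels;
        return new double[] {x, y, z};
    }
}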
The resulting vector is then sent over a network interface to the RoboRIO. The first interesting place it is used is here:
https://github.com/Team254/FRC-2016-Public/blob/master/src/com/team254/frc2016/RobotState.java#L187
In that function, we turn the unit-less 3D vector from the phone into real-world range and bearing. We can measure pitch (angle above the plane of the floor) by using our vector with some simple trig; same thing for yaw (angle left/right). Since we know where the phone is on the robot (from CAD, and from reading the sensors on our turret), we can compensate for the fact that the camera is not mounted level, and the turret may be turned. Finally, we know how tall the goal should be (and how high the camera should be), so we can use more trigonometry, along with our pitch and yaw angles, to determine distance. We feed these values into a tracker (which smooths out our measurements by averaging recent goal detections that seem to correspond to the same goal).
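The last step of that, turning pitch into range, is the usual known-target-height trick. A bare-bones sketch (the real code also folds in the camera's mounting angle and the turret reading):

// Horizontal range from camera pitch and known heights (illustrative sketch).
public class RangeFromPitch {
    // pitchRadians: angle of the goal above horizontal, after compensating for
    // how the camera is mounted. Heights in consistent units (e.g. inches).
    public static double horizontalRangeToGoal(double pitchRadians,
                                               double goalCenterHeight,
                                               double cameraHeight) {
        double rise = goalCenterHeight - cameraHeight;
        // tan(pitch) = rise / run  =>  run = rise / tan(pitch)
        return rise / Math.tan(pitchRadians);
    }
}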
The final part is to feed our distance measurement (and bearing) into our auto-aiming code. We do this here:
https://github.com/Team254/FRC-2016-Public/blob/master/src/com/team254/frc2016/subsystems/Superstructure.java#L718
Notice that we use a function to convert between distance and hood angle. This function was tuned (many times throughout the season) by putting the robot on the field, shooting a bunch of balls from a bunch of different spots, and manually adjusting hood angle until the shots were optimized for each range. We'd record the angles that worked best, and then interpolate between the two nearest recorded angles for any given distance we wanted to shoot from.
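That "interpolate between the two nearest recorded angles" step is essentially a sorted lookup table with linear interpolation. A minimal version of the idea (not necessarily how the repo does it; the numbers would come from calibration):

import java.util.Map;
import java.util.TreeMap;

// Minimal distance -> hood angle map with linear interpolation between
// calibrated points. Assumes at least one calibration point has been added.
public class HoodAngleMap {
    private final TreeMap<Double, Double> calibration = new TreeMap<>();

    public void addCalibrationPoint(double rangeInches, double hoodAngleDegrees) {
        calibration.put(rangeInches, hoodAngleDegrees);
    }

    public double getHoodAngle(double rangeInches) {
        Map.Entry<Double, Double> floor = calibration.floorEntry(rangeInches);
        Map.Entry<Double, Double> ceiling = calibration.ceilingEntry(rangeInches);
        if (floor == null) return ceiling.getValue();   // closer than the nearest calibrated shot
        if (ceiling == null) return floor.getValue();   // farther than the farthest calibrated shot
        if (floor.getKey().equals(ceiling.getKey())) return floor.getValue();
        double t = (rangeInches - floor.getKey()) / (ceiling.getKey() - floor.getKey());
        return floor.getValue() + t * (ceiling.getValue() - floor.getValue());
    }
}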
Jared Russell
27-10-2016, 00:00
If you guys would have used the Jetson TX1, would you have considered using the ZED stereocamera from Stereolabs (https://www.stereolabs.com/zed/specs/)?
Probably not. Stereo was totally unnecessary for estimating range to the goal last year; we were able to estimate our distance to within a few inches using only the method described in the previous post.
Mastonevich
27-10-2016, 09:38
Love the program you have created and the professionals you produce.
Great quotes.
Programming an entire 254 robot to the level of performance the team demands is a challenge for experienced software engineers, let alone high school students ......both students and mentors make direct contributions to the code.
I might not be the best person to answer this either; however, I did have one of their students end up as one of my interns within Qualcomm's Corporate R&D group back in 2013.
He ended up being one of my top interns, able to keep up with the PhD candidates in programming, control theory and hardware concepts... without taking a single college course.