#1
Re: Team 254 Presents: FRC 2016 Code
This is an extremely large project in the scope of FRC (but excellently done). How does the team handle development? Do you follow a development methodology? I assume you have multiple students working on various parts at the same time; how does version control work? Checkouts with merge requests?

Non-sequitur question: where do your programmers ultimately end up beyond FRC? Year after year, quality code is released (and has been since at least 2011), and the knowledge your graduated programmers must have is well beyond that of your typical freshman CS student.
#2
Re: Team 254 Presents: FRC 2016 Code
I can't really answer the second question very well - I have only worked with the team for three years, so all of the programmers are either still on the team or in college. Perhaps someone else can speak to the history before that.

More generally, I can speak to the team's approach to programming the robot over the last three years. Programming an entire 254 robot to the level of performance the team demands is a challenge for experienced software engineers, let alone high school students (and on our team at least, it seems that the more capable and brilliant the student, the more other demands are made on their time by school, the team, or other activities). We try to divvy up tasks among the team based on interest, ability, and time commitment; both students and mentors make direct contributions to the code.

For younger students, the expectation is that by and large their job is to learn, be self-starting, pursue additional learning opportunities, and make small contributions as they are able. The more experienced students are expected to take ownership (or at least co-ownership with a mentor) of some area of the code. For example, in 2014 we had a student (now in college) take ownership of our autonomous spline-following routine, deriving all the math and pair-programming the implementation with me. He definitely graduated high school knowing more about robot navigation than most college graduates. Similarly, a student last year made great contributions to our vision algorithm; he now knows more about computer vision than most college students.

Most of the programming students from last year are returning this season (and some of the mentors are stepping aside), so I'm looking forward to seeing what they do next year!
#3
Re: Team 254 Presents: FRC 2016 Code
Thank you guys for your great contribution to the FRC programming community.
We can all learn from your team.
#4
Re: Team 254 Presents: FRC 2016 Code
A quick note on the vision app and the motivation behind using Android.
We started the year under the mindset that we could build a protected-zone shooter, but quickly realized through some prototypes and strategy sessions that there was serious value to be had in building a small robot that could both go under the bar and shoot from anywhere near the tower. We knew this would require a very good vision system for our robot and got to work trying to make something to run on the NVIDIA Jetson board. This board proved to be very capable of processing frames (in fact, the best performance we got all year was an early prototype running on this board), but had some issues with power-up/power-down reliability.

We debated using a computer with a built-in battery, but settled on Android because it was cheaper and "cooler". The app was designed to work well on the hardware we selected for the robot (a Nexus 5), but we have seen weird bugs on other devices. For instance, the framerate is worse and the picture is upside down on my Nexus 5X. I'm sure there is a perfectly reasonable cause for this; we just haven't felt the need to fix bugs for platforms that weren't on our robot. If you find bugs in the app or make it work on a new platform, feel free to submit a pull request and our students will review it.
#5
Re: Team 254 Presents: FRC 2016 Code
Apparently the camera module in the 5X is upside down due to packaging reasons. There's a software flag that apps are supposed to read to get camera orientation, but many (like the Augmented Reality feature in eDrawings) don't do it right...
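For anyone hitting the same issue in their own app, here is a minimal sketch (not from the 254 app; the class name is mine) of reading that orientation flag with the legacy android.hardware.Camera API. The newer camera2 API exposes the same value as CameraCharacteristics.SENSOR_ORIENTATION.

```java
import android.hardware.Camera;

// Sketch only: query the orientation flag the post mentions. CameraInfo.orientation
// is the clockwise rotation (0/90/180/270 degrees) needed to display the sensor
// image upright; most rear cameras report 90, while the Nexus 5X reportedly
// reports 270 because its module is mounted "upside down".
public class CameraOrientationCheck {
    /** Returns the rear camera's orientation in degrees, or -1 if none is found. */
    public static int getRearCameraOrientation() {
        Camera.CameraInfo info = new Camera.CameraInfo();
        for (int id = 0; id < Camera.getNumberOfCameras(); id++) {
            Camera.getCameraInfo(id, info);
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
                return info.orientation;
            }
        }
        return -1;
    }
}
```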
#6
Re: Team 254 Presents: FRC 2016 Code
How were you guys able to calculate the traction?
#7
Re: Team 254 Presents: FRC 2016 Code
I'm not sure what you're referring to? We did have a "traction control" mode that used closed-loop speed feedback along with a gyro to cross defenses while remaining straight, but this didn't require calculating traction.
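To make that concrete, here is a minimal sketch of the general idea (not the team's actual implementation), where a gyro heading error biases the left/right velocity setpoints that are then tracked by closed-loop speed control on the wheels. The class name and gain are hypothetical placeholders.

```java
/**
 * Sketch (not 254's code): hold a target heading while crossing a defense by
 * adding a proportional gyro correction on top of a commanded forward velocity.
 * The resulting left/right setpoints are assumed to be tracked by velocity PID
 * loops running elsewhere (e.g. on the motor controllers).
 */
public class StraightDriveHelper {
    // (in/s of correction) per degree of heading error; placeholder, tune on the robot.
    private static final double kHeadingKp = 2.0;

    private final double targetHeadingDegrees;

    public StraightDriveHelper(double targetHeadingDegrees) {
        this.targetHeadingDegrees = targetHeadingDegrees;
    }

    /** Returns {leftVelocity, rightVelocity} setpoints in inches/sec. */
    public double[] update(double forwardVelocityInchesPerSec, double gyroHeadingDegrees) {
        double error = targetHeadingDegrees - gyroHeadingDegrees;
        double correction = kHeadingKp * error;
        // Speed up one side and slow the other to steer back toward the target
        // heading (flip the sign if your gyro's positive direction is reversed).
        return new double[] {
            forwardVelocityInchesPerSec + correction,
            forwardVelocityInchesPerSec - correction
        };
    }
}
```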
#8
Re: Team 254 Presents: FRC 2016 Code
How do you check your position in auto after you crossed a defense like the moat, where your wheels might be turning more than you're actually moving? Or did you not run into that problem?
#9
Re: Team 254 Presents: FRC 2016 Code
2) We used closed-loop velocity control on the wheels to ensure that even if one side of the drive train momentarily lost traction, we didn't suddenly lose a ton of encoder ticks.

3) In the end, we didn't need to be that precise - our auto-aim could make the shot from anywhere in the courtyard, and on the way back we either just used a conservative distance (for our one-ball mode) or used a reflective sensor to ensure we didn't cross the center tape (for our two-ball modes).
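As an illustration of that last point, here is a minimal sketch (my own, not from the 2016 repo) of gating the drive back with a reflective sensor and a conservative distance limit; the DIO port, sensor polarity, and distances are placeholder assumptions.

```java
import edu.wpi.first.wpilibj.DigitalInput;

/**
 * Sketch only: stop the return drive either when a floor-facing reflective
 * sensor sees the center-line tape or when a conservative distance has been
 * traveled, whichever comes first.
 */
public class ReturnDriveGuard {
    private final DigitalInput tapeSensor = new DigitalInput(0); // placeholder DIO port

    /** Returns true once the robot should stop backing up. */
    public boolean shouldStop(double distanceTraveledInches, double conservativeLimitInches) {
        boolean sawTape = !tapeSensor.get(); // assumption: sensor reads low over the tape
        return sawTape || distanceTraveledInches >= conservativeLimitInches;
    }
}
```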
#10
Re: Team 254 Presents: FRC 2016 Code
Thank you guys for sharing your amazing code!
I have a couple of questions: Why did you choose to follow a path instead of a trajectory during auto this year? Why did you choose the adaptive pure pursuit controller instead of other controllers?
#11
Re: Team 254 Presents: FRC 2016 Code
Path: An ordered list of states (where we want to go, and in what order). Paths are speed-independent.

Trajectory: A time-indexed list of states (at each time, where we want to be). Because each state needs to be reached at a certain time, we also get a desired speed implicitly (or explicitly, depending on your representation).

In 2014 and 2015, our controllers followed trajectories. In 2016, our drive followed paths (the controller was free to determine its own speed). Why? Time-indexed trajectories are planned assuming you have a pretty good model of how your robot will behave while executing the plan. This is useful because (if your model is good) your trajectory contains information about velocity, acceleration, etc., that you can feed to your controllers to help follow it closely. This is also nice because your trajectory always takes the same amount of time to execute. But if you end up really far off of the trajectory, you can end up with weird stuff happening...

With a path only, your controller has more freedom to take a bit of extra time to cross a defense, straighten out the robot after it gets cocked sideways, etc. This helps if you don't have a good model of how your robot is going to move - and a pneumatic-wheeled robot climbing over various obstacles is certainly hard to model.
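To illustrate the distinction in code (these types are mine, not from the 2016 repo): a path carries no timing information, while a trajectory pins each state to a timestamp.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Sketch of the path/trajectory distinction described above; names are placeholders. */
public class PathVsTrajectory {
    /** A single robot state: a field position, optionally with a speed attached. */
    public static class State {
        public final double x;      // field x, e.g. inches
        public final double y;      // field y, e.g. inches
        public final double speed;  // inches/sec; only meaningful for trajectory samples

        public State(double x, double y, double speed) {
            this.x = x;
            this.y = y;
            this.speed = speed;
        }
    }

    /** Path: an ordered list of states. Speed-independent; the follower decides how fast to move. */
    public static class Path {
        public final List<State> waypoints = new ArrayList<>();
    }

    /** Trajectory: states indexed by time. Reaching each sample at its timestamp
     *  implicitly fixes the velocity (and acceleration) profile. */
    public static class Trajectory {
        public final TreeMap<Double, State> samplesByTime = new TreeMap<>();
    }
}
```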
#12
Re: Team 254 Presents: FRC 2016 Code
Thanks for all of the great resources.
I had a few questions on your vision code: Are you guys calculating the distance from the goal to adjust the hood? If so, how? If you guys had used the Jetson TX1, would you have considered using the ZED stereo camera from Stereolabs?
#13
Re: Team 254 Presents: FRC 2016 Code
First, in the Android app, we find the pixel coordinates corresponding to the center of the goal: https://github.com/Team254/FRC-2016-...View.java#L131

...and then turn those pixel coordinates into a 3D vector representing the "ray" shooting out of the camera towards the target. The vector has an x component (+x is out towards the goal) that is always set to 1; a y component (+y is to the left in the camera image); and a z component (+z is up). This vector is unit-less, but the ratios between x, y, and z define angles relative to the back of the phone. The math behind how we create this vector is explained here.

The resulting vector is then sent over a network interface to the RoboRIO. The first interesting place it is used is here: https://github.com/Team254/FRC-2016-...tate.java#L187

In that function, we turn the unit-less 3D vector from the phone into real-world range and bearing. We can measure pitch (angle above the plane of the floor) by using our vector with some simple trig; the same goes for yaw (angle left/right). Since we know where the phone is on the robot (from CAD, and from reading the sensors on our turret), we can compensate for the fact that the camera is not mounted level and the turret may be turned. Finally, we know how tall the goal should be (and how high the camera should be), so we can use more trigonometry with our pitch and yaw angles to determine distance.

We feed these values into a tracker (which smooths out our measurements by averaging recent goal detections that seem to correspond to the same goal).

The final part is to feed our distance measurement (and bearing) into our auto-aiming code. We do this here: https://github.com/Team254/FRC-2016-...ture.java#L718

Notice that we use a function to convert between distance and hood angle. This function was tuned (many times throughout the season) by putting the robot on the field, shooting a bunch of balls from a bunch of different spots, and manually adjusting the hood angle until the shots were optimized for each range. We'd record the angles that worked best, and then interpolate between the two nearest recorded angles for any given distance we want to shoot from.
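To make the geometry and the hood-angle lookup concrete, here is a simplified sketch of the same idea. It is not the code from the linked files, and the class name, constants, and calibration numbers are all placeholder assumptions; it also ignores camera roll and turret-mounting offsets for brevity.

```java
/**
 * Sketch only: take the unit-less ray (x = 1, y, z) reported by the phone,
 * convert it to range and bearing using the camera's pitch and height, then
 * interpolate a hood angle from hand-tuned calibration points.
 */
public class GoalRangeSketch {
    // Placeholder robot geometry (inches / degrees).
    static final double kCameraHeightInches = 12.0;     // lens height above the floor
    static final double kCameraPitchDegrees = 35.0;     // camera tilt above horizontal
    static final double kGoalCenterHeightInches = 90.0; // height of the goal center

    // Hand-tuned (distance inches -> hood angle degrees) points, sorted by distance; made-up numbers.
    static final double[][] kHoodMap = {
        {50, 35.0}, {70, 42.0}, {90, 48.0}, {110, 52.0}
    };

    /** Returns {rangeInches, bearingDegrees} for a target ray (1, y, z). */
    static double[] rangeAndBearing(double y, double z, double turretAngleDegrees) {
        double yawToTarget = Math.toDegrees(Math.atan2(y, 1.0));
        double pitchInCamera = Math.toDegrees(Math.atan2(z, Math.hypot(1.0, y)));
        double pitchAboveFloor = pitchInCamera + kCameraPitchDegrees;
        // Known height difference plus pitch angle gives horizontal distance to the goal.
        double range = (kGoalCenterHeightInches - kCameraHeightInches)
                / Math.tan(Math.toRadians(pitchAboveFloor));
        double bearing = turretAngleDegrees + yawToTarget; // bearing of the goal relative to the robot
        return new double[] { range, bearing };
    }

    /** Linearly interpolate the hood angle between the two nearest calibration points. */
    static double hoodAngleForRange(double rangeInches) {
        if (rangeInches <= kHoodMap[0][0]) {
            return kHoodMap[0][1];
        }
        for (int i = 1; i < kHoodMap.length; i++) {
            if (rangeInches <= kHoodMap[i][0]) {
                double t = (rangeInches - kHoodMap[i - 1][0]) / (kHoodMap[i][0] - kHoodMap[i - 1][0]);
                return kHoodMap[i - 1][1] + t * (kHoodMap[i][1] - kHoodMap[i - 1][1]);
            }
        }
        return kHoodMap[kHoodMap.length - 1][1]; // beyond the last point, clamp to it
    }
}
```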
#14
Re: Team 254 Presents: FRC 2016 Code
#15
Re: Team 254 Presents: FRC 2016 Code
Oh, OK, so your "traction control" was making sure that your robot remained straight. By any chance, what was the logic behind making the robot stay straight?