Re: Team 254 Presents: FRC 2016 Code
How do you check your position in auto after you've crossed a defense like the moat, where your wheels might be turning more than you're actually moving? Or did you not run into that problem?
Re: Team 254 Presents: FRC 2016 Code
Quote:
How do you check your position in auto after you've crossed a defense like the moat, where your wheels might be turning more than you're actually moving?
2) We used closed-loop velocity control on the wheels to ensure that even if one side of the drive train momentarily lost traction, we didn't suddenly lose a ton of encoder ticks.
3) In the end, we didn't need to be that precise - our auto-aim could make the shot from anywhere in the courtyard, and on the way back we either just used a conservative distance (for our one-ball mode) or used a reflective sensor to ensure we didn't cross the center tape (for our two-ball modes).
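For readers who haven't built one of these before, here is a minimal sketch of a closed-loop wheel velocity controller in Java. The class name, gains, and units are hypothetical and this is not the code from the repo linked above; it simply illustrates how a feedback loop on measured wheel speed keeps a wheel that briefly loses traction from spinning far past its commanded speed, so the encoders stay close to the robot's true motion.

```java
// Minimal sketch of per-side closed-loop wheel velocity control (hypothetical
// class and gains, not Team 254's actual drive code). The target speed comes
// from whatever is driving the robot, e.g. a path follower; the feedback term
// keeps a momentarily unloaded wheel from running far past that target.
public class WheelVelocityLoop {
    private final double kP; // proportional gain on velocity error
    private final double kV; // feedforward output per unit of target speed

    public WheelVelocityLoop(double kP, double kV) {
        this.kP = kP;
        this.kV = kV;
    }

    /**
     * @param targetSpeed   desired wheel speed (e.g. inches/sec)
     * @param measuredSpeed wheel speed measured from the encoder
     * @return motor output clamped to [-1, 1]
     */
    public double update(double targetSpeed, double measuredSpeed) {
        double error = targetSpeed - measuredSpeed;
        double output = kV * targetSpeed + kP * error;
        return Math.max(-1.0, Math.min(1.0, output));
    }
}
```

In practice this kind of loop usually runs at a fixed rate on each side of the drive (or directly on the motor controllers), and the path follower simply commands target speeds.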
Re: Team 254 Presents: FRC 2016 Code
Oh, OK, so your "traction control" was making sure that your robot remained straight. By any chance, what was the logic behind making the robot stay straight?
Re: Team 254 Presents: FRC 2016 Code
Quote:
He ended up being one of my top interns, able to keep up with the PhD candidates in programming, control theory and hardware concepts... without taking a single college course.
Re: Team 254 Presents: FRC 2016 Code
Awesome job guys!
I saw in previous years you had a robot-hosted website interface for status & tuning, although I didn't see that this year. I might just be missing it... Assuming I'm not, did you have a reason for not carrying that forward?
Re: Team 254 Presents: FRC 2016 Code
Thank you guys for sharing your amazing code!
I have a couple of questions:
Why did you choose to follow a path instead of a trajectory during auto this year?
Why did you choose the adaptive pure pursuit controller instead of other controllers?
Re: Team 254 Presents: FRC 2016 Code
Quote:
Why did you choose to follow a path instead of a trajectory during auto this year?

Path: An ordered list of states (where we want to go, and in what order). Paths are speed-independent.

Trajectory: A time-indexed list of states (at each time, where we want to be). Because each state needs to be reached at a certain time, we also get a desired speed implicitly (or explicitly, depending on your representation).

In 2014 and 2015, our controllers followed trajectories. In 2016, our drive followed paths (the controller was free to determine its own speed).

Why? Time-indexed trajectories are planned assuming you have a pretty good model of how your robot will behave while executing the plan. This is useful because (if your model is good) your trajectory contains information about velocity, acceleration, etc., that you can feed to your controllers to help follow it closely. This is also nice because your trajectory always takes the same amount of time to execute. But if you end up really far off of the trajectory, you can end up with weird stuff happening... With a path only, your controller has more freedom to take a bit of extra time to cross a defense, straighten out the robot after it gets cocked sideways, etc. This helps if you don't have a good model of how your robot is going to move - and a pneumatic-wheeled robot climbing over various obstacles is certainly hard to model.
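To make the path/trajectory distinction concrete, here is a small Java sketch using hypothetical types (not the classes in the 2016 repo): a path is just an ordered list of positions, while a trajectory pins each state to a timestamp and therefore an implied speed. The last method shows the core geometric step of a pure pursuit follower, which consumes only the path geometry and is free to pick its own speed.

```java
/**
 * Hypothetical types illustrating the distinction above; these are not the
 * classes used in the 2016 repo.
 */
public class PathVsTrajectory {
    /** Speed-independent: just "where to go, and in what order". */
    static class PathPoint {
        final double x;
        final double y; // field position, e.g. inches
        PathPoint(double x, double y) { this.x = x; this.y = y; }
    }

    /** Time-indexed: "be here, at this speed, at this time". */
    static class TrajectoryState {
        final double t;        // seconds from the start of the move
        final double x;
        final double y;        // field position
        final double velocity; // desired speed at time t
        TrajectoryState(double t, double x, double y, double velocity) {
            this.t = t; this.x = x; this.y = y; this.velocity = velocity;
        }
    }

    /**
     * Core geometric step of a pure pursuit follower: given a lookahead point on the
     * path, expressed in the robot's frame (x forward, y left), return the curvature
     * of the arc that passes through it. The follower chooses its own speed along
     * that arc, which is why it only needs a path and not a trajectory.
     */
    static double curvatureToLookaheadPoint(double lookaheadX, double lookaheadY) {
        double lookaheadDistanceSquared = lookaheadX * lookaheadX + lookaheadY * lookaheadY;
        return 2.0 * lookaheadY / lookaheadDistanceSquared;
    }
}
```

Because the path carries no timestamps, a follower like this can take a little extra time crossing a defense or straightening out without ever being "behind schedule" relative to a plan.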
Re: Team 254 Presents: FRC 2016 Code
Thanks for all of the great resources.
I had a few questions on your vision code:
Are you guys calculating distance from the goal to adjust the hood? If so, how?
If you had used the Jetson TX1, would you have considered using the ZED stereo camera from Stereolabs?
Re: Team 254 Presents: FRC 2016 Code
Quote:
Are you guys calculating distance from the goal to adjust the hood? If so, how?

First, in the Android app, we find the pixel coordinates corresponding to the center of the goal: https://github.com/Team254/FRC-2016-...View.java#L131

...and then turn those pixel coordinates into a 3D vector representing the "ray" shooting out of the camera towards the target. The vector has an x component (+x is out towards the goal) that is always set to 1; a y component (+y is to the left in the camera image); and a z component (+z is up). This vector is unit-less, but the ratios between x, y, and z define angles relative to the back of the phone. The math behind how we create this vector is explained here. The resulting vector is then sent over a network interface to the RoboRIO.

The first interesting place it is used is here: https://github.com/Team254/FRC-2016-...tate.java#L187

In that function, we turn the unit-less 3D vector from the phone into real-world range and bearing. We can measure pitch (angle above the plane of the floor) using our vector and some simple trig; same thing for yaw (angle left/right). Since we know where the phone is on the robot (from CAD, and from reading the sensors on our turret), we can compensate for the fact that the camera is not mounted level, and the turret may be turned. Finally, we know how tall the goal should be (and how high the camera should be), so we can use more trigonometry with our pitch and yaw angles to determine distance.

We feed these values into a tracker, which smooths out our measurements by averaging recent goal detections that seem to correspond to the same goal.

The final part is to feed our distance measurement (and bearing) into our auto-aiming code. We do this here: https://github.com/Team254/FRC-2016-...ture.java#L718

Notice that we use a function to convert between distance and hood angle. This function was tuned (many times throughout the season) by putting the robot on the field, shooting a bunch of balls from a bunch of different spots, and manually adjusting hood angle until the shots were optimized for each range. We'd record the angles that worked best, and then interpolate between the two nearest recorded angles for any given distance we want to shoot from.
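Here is a compact, self-contained Java sketch of the two steps described above: converting the unit-less camera ray into range and bearing, and interpolating a tuned distance-to-hood-angle table. Every name, constant, and calibration point below is made up for illustration; this is not the code at the linked lines, and it ignores the turret-angle and camera-roll compensation the real robot performs.

```java
import java.util.Map;
import java.util.TreeMap;

public class GoalRangeExample {
    // Assumed geometry, for illustration only.
    static final double CAMERA_HEIGHT_INCHES = 20.0;
    static final double GOAL_CENTER_HEIGHT_INCHES = 90.0;
    static final double CAMERA_PITCH_RADIANS = Math.toRadians(35.0); // camera tilted up

    /**
     * Converts a unit-less camera ray (+x out of the lens and fixed at 1, +y left,
     * +z up) into a horizontal distance and bearing to the goal. Assumes the goal
     * center sits above the camera, so the corrected pitch is positive.
     * Returns {distanceInches, bearingRadians}.
     */
    static double[] rayToRangeAndBearing(double y, double z) {
        // Rotate the ray by the camera's mounting pitch so angles are measured
        // relative to the floor instead of the tilted phone.
        double xr = Math.cos(CAMERA_PITCH_RADIANS) - z * Math.sin(CAMERA_PITCH_RADIANS);
        double zr = Math.sin(CAMERA_PITCH_RADIANS) + z * Math.cos(CAMERA_PITCH_RADIANS);

        double bearing = Math.atan2(y, xr);               // left/right of straight ahead
        double pitch = Math.atan2(zr, Math.hypot(xr, y)); // angle above the floor plane

        // Known height difference plus measured pitch gives horizontal distance.
        double distance = (GOAL_CENTER_HEIGHT_INCHES - CAMERA_HEIGHT_INCHES) / Math.tan(pitch);
        return new double[] {distance, bearing};
    }

    // Empirically tuned (here: invented) distance -> hood angle calibration points.
    static final TreeMap<Double, Double> HOOD_ANGLE_DEGREES_BY_DISTANCE = new TreeMap<>();
    static {
        HOOD_ANGLE_DEGREES_BY_DISTANCE.put(60.0, 42.0);
        HOOD_ANGLE_DEGREES_BY_DISTANCE.put(100.0, 35.0);
        HOOD_ANGLE_DEGREES_BY_DISTANCE.put(140.0, 30.0);
    }

    /** Linearly interpolates hood angle between the two nearest recorded distances. */
    static double hoodAngleForDistance(double distance) {
        Map.Entry<Double, Double> below = HOOD_ANGLE_DEGREES_BY_DISTANCE.floorEntry(distance);
        Map.Entry<Double, Double> above = HOOD_ANGLE_DEGREES_BY_DISTANCE.ceilingEntry(distance);
        if (below == null) return above.getValue();
        if (above == null || below.getKey().equals(above.getKey())) return below.getValue();
        double t = (distance - below.getKey()) / (above.getKey() - below.getKey());
        return below.getValue() + t * (above.getValue() - below.getValue());
    }
}
```

Calling rayToRangeAndBearing with the y and z components of the phone's vector yields the range fed into something like hoodAngleForDistance; the real calibration table would be rebuilt from on-field shooting tests rather than the invented points above.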