#106
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
One of the things that makes 971 unique is that we are willing and able to move significant mechanical complexity into software. 2014 and 2015 were good examples of this. 2016 wasn't as crazy from a software point of view, but we have also gotten good at making robots work. We have strong software mentors and students with experience in controls and automation, along with strong mechanical mentors and students to build designs good enough to be accurately controlled. The mechanical bits make the software easy (or at least possible).

We tend to start with a list of requirements for the robot, quickly decide what we know is going to work from previous years (and actually start designing it in parallel), and then sketch out concept sketches of the rest. We do a lot of big-picture design with blocky mechanisms and sketches to work out geometry and the big questions. We then start sorting out all the big unknowns in that design, and work our way from there to a final solution. I think that a lot of what makes us different is that we impose interesting constraints on our robots, and then work really hard to satisfy them. We come up with most of our most out-of-the-box ideas while trying to figure out how to make the concept sketch have the CG, range of motion, tuck positions, and speeds that we want before we get to the detailed design.

In 2014, we wanted to build something inspired by 1114 in 2008. We put requirements on it: be able to quickly intake the ball, and also be able to drive around with the claw inside our frame perimeter, ready to grab the ball. We tend to get to the requirements pretty quickly, and then start figuring out the best way to achieve them. After a bunch of challenging packaging questions, someone proposed that rather than using a piston to open and close the claw, we could just individually drive the two claws with motors. That breakthrough made our packaging requirements much easier to satisfy, and ended up being a pretty unique design. That got us comfortable building robots with more and more software.

In 2015, we were pretty confident that we didn't need to cut into the frame perimeter. Unfortunately, by the time we had determined that that requirement was hurting us, we had already shipped the drive base. We spent a lot of time working through various alternatives to figure out how to stack on top of the drive base and then place the stack. In the end, the only way we could make it work was what you saw. We knew that we were very close on weight, and again used software to allow us to remove the mechanical coupling between the left and right sides to reduce weight. We were confident from our 2014 experience that we could make that work. I like to tell people that our 2015 robot was a very good execution of the wrong strategy... We wouldn't build it again, so maybe you guys are all smarter than us there.

2016 fell together much more easily than I expected. It had more iteration than we've ever done before (lots of V2s on mechanisms), which helps it look polished. Honestly, most of the hard work in 2016 was in the implementation, not the concept. We wanted a shooter that shot from high up, and the way to do that was to put it on an arm. We are getting to the point where we have a lot of knowledge built up around what fails in these robots and what to pay attention to. That part has just taken a lot of time and a lot of hard work. We don't spend much time debating where to do low-backlash gearboxes, or figuring out how to control or sense various joints.
Sometimes, I think we design the robots we design because we over-think problems and then come up with solutions to them. We work through a lot of math for gearbox calculations, power usage, etc., and do some basic simulations of some of the more critical subsystems. We also do a gut check to make sure that we think the subsystems will work when we build them, and we have good enough prototypes to prove out anything we are uncertain about.
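To give a flavor of the gearbox math mentioned above, here is a rough back-of-envelope sketch in Python. The motor constants are approximate CIM-class numbers, and the ratio and efficiency are made up for illustration; this is not 971's actual spreadsheet.

Code:
# Back-of-envelope gearbox check: reflect motor torque and speed through a
# reduction to gut-check a joint. All numbers are illustrative.

STALL_TORQUE_NM = 2.4    # approximate CIM-class stall torque
FREE_SPEED_RPM = 5300.0  # approximate CIM-class free speed
GEAR_RATIO = 100.0       # hypothetical total reduction
EFFICIENCY = 0.8         # rough gearbox efficiency guess

output_stall_torque = STALL_TORQUE_NM * GEAR_RATIO * EFFICIENCY
output_free_speed = FREE_SPEED_RPM / GEAR_RATIO

print(f"{output_stall_torque:.0f} N*m stall, {output_free_speed:.0f} rpm free")
# -> 192 N*m stall, 53 rpm free: enough to gut-check whether a joint has the
#    torque and speed sketched in the concept drawings.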
#107
Re: FRC971 Spartan Robotics 2016 Release Video
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?
#108
Re: FRC971 Spartan Robotics 2016 Release Video
Can you reveal the secret of how you dealt with the changing compression of the boulders?
#109
Re: FRC971 Spartan Robotics 2016 Release Video
We didn't really have a secret, other than that the double-wheeled shooter seemed to be not very sensitive to them (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.
#110
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
We do have two cameras. There was an early thought to use them to do stereo (do separate target recognition in both cameras, and get depth from the offset distance), and we had a bench prototype of this that had good preliminary results. We ended up not needing accurate depth info, so the two cameras were just used for finding the center of the goal. We could have done that with one camera mounted centered, but that is easier said than done. We used the Jetson for vision processing and were happy with its performance.
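For anyone curious how the depth-from-offset idea works in principle, here is a minimal sketch. This is not 971's code; the focal length and baseline below are hypothetical placeholders.

Code:
# Depth from stereo as described above: detect the same feature (e.g. the
# goal center) in both cameras, then convert the horizontal pixel offset
# (disparity) into depth. All numbers are made-up placeholders.

FOCAL_LENGTH_PX = 700.0  # camera focal length, in pixels (hypothetical)
BASELINE_M = 0.5         # distance between the two cameras, in meters (hypothetical)

def stereo_depth(u_left_px: float, u_right_px: float) -> float:
    """Depth to the target from the horizontal pixel coordinate of the
    same feature as seen in each (rectified) camera."""
    disparity = u_left_px - u_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive (feature shifts left in the right image)")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# e.g. a 35 px disparity puts the goal about 10 m away with these numbers:
print(stereo_depth(660.0, 625.0))  # -> 10.0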
#111
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
As Travis said, we wanted to do stereo, but didn't get around to verifying that it worked well enough to start using the distance that it reported. One of the side effects of stereo cameras was that we didn't need the transforms required to deal with the camera not being centered. Our shooter didn't have any space above or below the ball for a camera: the bottom of the shooter rested on the bellypan, and the top just cleared the low bar.

We did the shape detection on the Jetson TK1, and passed a list of the U shapes found back to the roboRIO over UDP in a protobuf, including the coordinates of the 4 corners for each camera. We didn't find that we needed to do color thresholding, just intensity thresholding and then shape detection. This ran at 20 Hz, 1280x1024 (I think), all on the CPU. The roboRIO then matched up the targets based on the angle of the bottom of the U.

We were very careful to record the timestamps through the system. We recorded the timestamp at which v4l2 reported that the image was received by the kernel, the timestamp at which it was received by userspace on the Jetson, the timestamp at which it was sent to the roboRIO, and the timestamp at which the processed image was received on the roboRIO. That let us back out the projected time that the image was captured on the Jetson, in the roboRIO clock, to within a couple ms. We then saved all the gyro headings over the last second and the times at which they were measured, and used those two pieces of data to interpolate the heading when the image was taken, and therefore the current heading of the target. This, along with our well-tuned drivetrain control loops, let us stabilize to the target very quickly.

Ask any follow-on questions that you need.
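To make the heading-interpolation idea concrete, here is a minimal sketch of the technique. The class and names are illustrative, not 971's actual code: keep a buffer of timestamped gyro readings and linearly interpolate at the projected image-capture time.

Code:
import bisect

# Latency compensation as described above: record (timestamp, heading)
# samples over the last second, then interpolate the heading at the time
# the image was actually captured (projected into the roboRIO clock).

class HeadingHistory:
    def __init__(self):
        self._times = []     # seconds, roboRIO clock, sorted
        self._headings = []  # radians

    def record(self, t: float, heading: float):
        self._times.append(t)
        self._headings.append(heading)
        # Keep roughly the last second of samples.
        while self._times and self._times[-1] - self._times[0] > 1.0:
            self._times.pop(0)
            self._headings.pop(0)

    def heading_at(self, t: float) -> float:
        """Linearly interpolate the heading at time t."""
        i = bisect.bisect_left(self._times, t)
        if i <= 0:
            return self._headings[0]
        if i >= len(self._times):
            return self._headings[-1]
        t0, t1 = self._times[i - 1], self._times[i]
        h0, h1 = self._headings[i - 1], self._headings[i]
        return h0 + (h1 - h0) * (t - t0) / (t1 - t0)

# The target's field-relative heading then follows from the camera angle to
# the target plus history.heading_at(t_capture), even if the image is stale.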
#113
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
I have a few questions about your controls.

1. I have read about your delta-u controller and I am not sure I understood it correctly; I would like to know if I got it right. You have 3 states in your state-space controller: position, velocity, and error voltage. You model it as dx/dt = Ax + Bu, where u is the rate of change of the voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from position using an observer. You use an integrator on u and command the motors with it. To the error voltage state you feed in the difference between the estimated voltage from the observer and the integrated u commands.

2. Will the delta-u controller work the same if I command the motors with u and not the integral of it, and instead use the integral of the voltage error as a state? Why did you choose this form for the controller and not another?

3. Is the delta-u controller in the end a linear combination of position error, velocity error, and voltage error?

4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?

5. How did you tune the Q and R matrices in the Kalman filter?

6. How do you tune the parameters that transform the motion profile into the feed-forward you can feed to your motors?

7. How did you create 2-dimensional trajectories for your robot during auto?

8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?

Thank you very much!
#114
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
Delta-U won't work if you command the motors with U, since your model doesn't match your plant (it's off by an integral). I've recently switched formulations to what we used this year and last year, and I think the new formulation is easier to understand. If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state":

d/dt [x; voltage_error] = [A, B; 0, 0] [x; voltage_error] + [B; 0] u

You then design the controller K to control the original dx/dt = Ax + Bu (u = K (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1]. We switched over last year because it was easier to think about the new controller. In the end, both it and the Delta U controller will do the same thing.
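Here is a minimal sketch of that augmentation in Python/numpy, using a hypothetical position/velocity plant. The plant constants and gains are placeholders, not 971's values.

Code:
import numpy as np

# Voltage-error augmentation as described above, continuous form:
# d/dt [x; v_err] = [A, B; 0, 0] [x; v_err] + [B; 0] u

A = np.array([[0.0, 1.0], [0.0, -4.0]])  # un-augmented plant (hypothetical)
B = np.array([[0.0], [10.0]])

A_aug = np.block([[A, B],
                  [np.zeros((1, 2)), np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# Design K for the *un-augmented* plant (pole placement / LQR on A, B)...
K = np.array([[12.0, 1.2]])              # hypothetical gains
# ...then control the augmented estimate with [K, 1]: the trailing 1
# subtracts the estimated voltage error from the command, cancelling it.
K_aug = np.hstack([K, np.ones((1, 1))])

def control(r, x_hat_aug):
    """u = K_aug (R_aug - X_aug), with zero desired voltage error."""
    r_aug = np.vstack([r, np.zeros((1, 1))])
    return K_aug @ (r_aug - x_hat_aug)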
Quote:
Honestly, I end up tuning it one way and then looking at the poles directly at the end to see how the tuning affected the results.
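For reference, here is roughly what "looking at the poles directly" can amount to in code. The predict/correct observer form below is one common textbook convention, not necessarily exactly what 971 runs.

Code:
import numpy as np

# Closed-loop controller poles are the eigenvalues of A - B K; for a
# predict/correct observer the estimation error decays with the eigenvalues
# of (I - L C) A. Retune Q and R, recompute, and see where the poles moved.

def controller_poles(A, B, K):
    return np.linalg.eigvals(A - B @ K)

def observer_poles(A, C, L):
    return np.linalg.eigvals((np.eye(A.shape[0]) - L @ C) @ A)

# For a discrete plant both sets should have magnitude < 1; closer to 0
# means faster convergence but more sensitivity to noise.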
Quote:
The equation to minimize is:

(B u - (R(n+1) - A R(n)))^T Q (B u - (R(n+1) - A R(n)))

This means that you have 3 goals running around: the un-profiled goal, the profiled goal, and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there. The end result was that our model defined the feed-forwards constants, so it was free. We were also able to gain-schedule the feed-forwards terms for free as well.

FYI, this was the first year that we did feed-forwards. Before, we just relied on the controllers compensating. You can see it in some of the moves of the 2015 robot, where it'll try to do a horizontal move but end up with a steady-state offset while moving due to the lack of feed-forwards.
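Setting the derivative of that expression with respect to u to zero gives a weighted least-squares solve. A minimal sketch, where A and B are the discrete plant matrices and Q_ff is a made-up weighting, not a 971 constant:

Code:
import numpy as np

def feed_forward(A, B, Q_ff, r_now, r_next):
    """Voltage that best moves the model from R(n) to R(n+1): the minimizer
    u = (B^T Q B)^-1 B^T Q (R(n+1) - A R(n))."""
    delta = r_next - A @ r_now
    return np.linalg.solve(B.T @ Q_ff @ B, B.T @ Q_ff @ delta)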
Quote:
Awesome questions, keep them coming! I love this stuff.
#115
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
Are the A and B matrices here the same as in this pdf? https://www.chiefdelphi.com/forums/a...1&d=1419983380 Quote:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error

Isn't that PI control plus integral control of the voltage? As far as I know, this controller should have integral windup. What am I missing?
Quote:
This is really smart! I want to make sure I got it: Q is a weight matrix, and you are looking for the u vector that minimizes the expression? In what way are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct? Did you get to this expression by claiming that R(n+1) = A R(n) + B u, where u is the correct feed-forward?

Can you explain how you observed the voltage?
Quote:
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it. Who teaches your students all this stuff? My team doesn't have any controls mentor, and we are not sure whether or not to move to model-based control. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge when we don't have a mentor who knows this. Do you have any advice?

Thank you very much!
#116
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that. You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate: I could think about it more easily.
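Here is roughly what that simulation can look like, using a one-state velocity plant (think flywheel). The plant constants and observer gains are made up; it's a sketch of the idea, not 971's code.

Code:
import numpy as np

# Inject a constant disturbance voltage the controller doesn't know about,
# run an observer on the voltage-error-augmented model, and watch the
# "unexplained" motion get squeezed into the voltage error state.

A = 0.98  # discrete velocity decay per step (hypothetical)
B = 0.05  # velocity gained per volt per step (hypothetical)

# Augmented model: x_aug = [velocity; voltage_error].
A_aug = np.array([[A, B], [0.0, 1.0]])
B_aug = np.array([[B], [0.0]])
C_aug = np.array([[1.0, 0.0]])     # we measure velocity only
L_gain = np.array([[0.5], [2.0]])  # hand-picked stable observer gains

disturbance = 1.5                  # volts the plant secretly sees
v = 0.0                            # true velocity
x_hat = np.zeros((2, 1))           # observer estimate

for _ in range(200):
    u = 0.0                                 # controller commands nothing
    v = A * v + B * (u + disturbance)       # ...but the plant still moves
    x_hat = A_aug @ x_hat + B_aug * u       # predict
    x_hat += L_gain * (v - C_aug @ x_hat)   # correct with the measurement

print(x_hat[1, 0])  # -> ~1.5: the observer "found" the disturbance voltage,
                    #    which the [K, 1] controller can then cancel.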
Quote:

That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.

You can mathematically prove that the observer can observe the voltage as long as you tune it correctly. This is called observability, and it can be calculated from some matrix products given A and C. For most controls people, that is enough of an explanation.

Intuitively, you can think of the observer estimating where the next sensor reading should be, measuring what it got, and then attributing the error to some amount of error in each state. So, if the position is always reading higher than expected, it will slowly squeeze the error into the voltage error term, where it will finally influence the model to not always read high anymore. You'll then have a pretty good estimate of the voltage required to do what you are currently doing.
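Those "matrix products" are the observability matrix. Here is a quick sketch of the check, in the standard textbook form, with a hypothetical position-only measurement:

Code:
import numpy as np

def is_observable(A, C):
    """Full rank of the observability matrix [C; CA; CA^2; ...] means every
    state, including the voltage error, can be reconstructed from the output."""
    n = A.shape[0]
    obs = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    return np.linalg.matrix_rank(obs) == n

# e.g. with the augmented plant sketched earlier and position-only sensing:
# is_observable(A_aug, np.array([[1.0, 0.0, 0.0]])) -> True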
Quote:

We have a small number of students who get really involved in the controls on 971. Some years are better than others, but that's how this type of thing goes. There is a lot of software on a robot that isn't controls.

I'm going to see if the paper actually gets good results, and then work to involve students to see if we can fix some of the shortcomings of the model that one of the examples in the paper uses and help make improvements. I think that'll let me simplify the concept somewhat for them and get them playing around with the algorithm. I've yet to try to involve students in something this mathematically challenging, so I'll know more once I've pulled it off... I mostly mention the paper as something fun that you can do with controls in this context.

When I get the proper amount of interest and commitment, I sit down with the student and spend a significant amount of time teaching them how and why state-space controllers work. I like to do it by rederiving some of the math to help demystify it, and having them work through examples. I've had students take that knowledge pretty far and do some pretty cool things with it. Teaching someone something this tricky is a lot of work. We tend to have about 1 student every year actually take the time to succeed. Sometimes more, sometimes less.

Doing model-based controls without good help can be tricky. Most of the time, I honestly recommend focusing on writing test cases with simulations for simpler controllers (PID, for example) before you start looking at model-based controls. This gets you ready for what you need to do for more complicated controllers, and if you were to stop there having learned dependency injection and testing, that would already be an enormous success. The issue is that most of this stuff is upper-division college-level material, and sometimes graduate-level material. Take a subsystem on your robot, and try to write a model-based controller for it over the off-season.
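As one concrete example of the "test cases with simulations" suggestion: simulate a simple model and assert the controller gets where it should. The flywheel constants and gain below are made up; the pattern is the point.

Code:
import numpy as np

def simulate(kp, goal, steps=500):
    """Run a P controller against a toy discrete flywheel model."""
    a, b = 0.98, 0.05  # hypothetical model: v' = a*v + b*volts
    v = 0.0
    for _ in range(steps):
        u = np.clip(kp * (goal - v), -12.0, 12.0)  # respect the 12 V battery
        v = a * v + b * u
    return v

def test_reaches_goal():
    # A pure P controller on this plant has steady-state droop, so allow
    # some tolerance; tightening this is a nice reason to add FF or I terms.
    assert abs(simulate(kp=8.0, goal=3.0) - 3.0) < 0.2

test_reaches_goal()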
#117
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.

I am a bit confused about why this is true, because the voltage error is in volts and u is in volts/second, so you are adding numbers with different units.
Quote:
u = constant * position error + constant * velocity error + constant * integral of voltage error

Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of position error in your controller. What am I missing?

Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?
Quote:
I was wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.

Thank you so much! Your answers helped my team and me a lot!
#118
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
Awesome robot - as always.
#119
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
u = (B^T Q B)^-1 (r(n+1)^T - r(n)^T A^T) Q B

Is that correct?

Last edited by ranlevinstein : 16-06-2016 at 16:16. Reason: Forgot to put the transpose at the beginning of the expression.
#120
Re: FRC971 Spartan Robotics 2016 Release Video
Quote:
Most of our backlash reduction comes from eliminating hex backlash. We also do the standard trick of running as large a chain reduction as you can on the last stage, and then keeping that chain well tensioned. Going forward we are going to be using #35 chain whenever we can for these reductions to avoid stiffness issues, which also helps with the controls.