Chief Delphi
FRC971 Spartan Robotics 2016 Release Video (http://www.chiefdelphi.com/forums/showthread.php?t=146113)

AustinSchuh 01-06-2016 02:06

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by mr.roboto2826 (Post 1590359)
How did you guys go about the development of your 2 (side by side) wheeled shooter? From the outside looking in it looks fairly simple, but I would assume hours of tweaking went into it. Could you enlighten us as to how the shooter went through its development stages?

We have a couple of pictures on our Picasa page, but the story helps tie them together.

We did a bunch of CAD sketches to try to determine packaging. We quickly discovered that to make it all package like we wanted, a side-by-side wheel shooter helped a lot. That info was then fed to the prototyping team, and they set to work.

The first job was to build a CIM + wheels module. By random chance, we picked 1 CIM and 2 wheels. Based on some old math, we picked a reasonably high surface speed, and machined the module on the router.

We then attached the module to a bunch of 8020, and started doing a parameter sweep.



The obvious parameters (for us) to try were compression and surface speed. Once the shooter was making shots into the goal reliably, we started collecting ball landing position data. The prototyping team used the pulse counter on a Fluke multimeter with a reflectance sensor to measure RPM, so they could dial the RPM in accurately for the tests.



As you increase compression, the spread shrinks down to a minimum at some compression level, and then starts to increase again. We picked that point.
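
In sketch form, that analysis amounted to something like this (the numbers here are made up for illustration, not the real data or scripts):

Code:

import numpy

# Landing distance (meters) of each test shot, keyed by compression (inches).
landing_positions = {
    0.25: [4.98, 5.10, 5.03, 4.91, 5.15],
    0.50: [5.02, 5.04, 5.01, 5.00, 5.03],
    0.75: [4.95, 5.12, 4.89, 5.08, 5.20],
}

# Pick the compression setting with the smallest spread.
best = min(landing_positions,
           key=lambda c: numpy.std(landing_positions[c]))
print(best)  # 0.5 for this fake data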

The prototyping team didn't think the standard deviation was quite good enough, so they kept working at it. Someone came up with the idea of offsetting the wheels vertically to put a little spin on the ball. That reduced the spread far enough that they were happy with it, and we shipped it with exactly those numbers. We haven't had to touch any of the shooter parameters all season, which is wonderful.

mr.roboto2826 01-06-2016 11:16

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1590553)
The prototyping team didn't think the standard deviation was quite good enough, so they kept working at it. Someone came up with the idea of offsetting the wheels vertically to put a little spin on the ball. That reduced the spread far enough that they were happy with it, and we shipped it with exactly those numbers. We haven't had to touch any of the shooter parameters all season, which is wonderful.

Would you be able to go into a little more depth on your prototyping process and iteration? I noticed in the photo you had a rail on the bottom of your shooter, but your final design had none. Also, could you explain the pneumatic setup on your feeder into the shooter?

Travis Schuh 01-06-2016 21:16

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by Greg Woelki (Post 1590241)
How do you go about manufacturing the oversized shafts? Also, how tight are the interference fits? Hand tight? Arbor press tight?

We shoot for what I think is described as a transitional fit (± a couple tenths off of nominal). We used to just sneak up on it and test fit, but this year we got access to an optical comparator at a sponsor's lab and measured the hex size on the 1/2" hex VP broach to be ~0.5045" and the 3/8" hex VP broach to be ~0.3765" (I would verify your own numbers if you choose a similar route). We ended up needing a light press to assemble, but it was easy to remove the parts when we did end up having to change a ratio. One trick we found is that it is important to put a radius on the edges of the hex to account for the radius on the broach. The numbers we cut to should be in our CAD.

We machined them using a 5C collet fixture on a HAAS mini mill (PN 3174A42 on McMaster or equivalent). This holds the shaft vertically. We machined them primarily with a 2" LOC, 1/2" diameter carbide endmill, and designed the shafts so they could fit in two setups of this. We started with 5/8" precision ground 7075. We would start with a blank of known length (so we could use collet stops), machine one side, flip it over and hold onto a section that was either previously machined or designed to be left un-machined, and then machine the back side. Last year we started using slitting saws to cut the snap ring grooves in the same operation, and that has been a nice time saver. Another trick we learned is that it is really worthwhile to have a finishing endmill that you use just for the finish pass, so you don't end up with the bottom of each profile being a little tight due to the end of the endmill being more worn. We haven't had any issues with runout, although it is a concern.

Overall, I estimate that we spend 1/2 to 3/4 of a day machining all the critical shafts for both robots. The setup and CAM changeover go relatively quickly once the first part is set up. We find the time savings in design and fiddle factor to be worth the effort.

Gregor 02-06-2016 01:28

Re: FRC971 Spartan Robotics 2016 Release Video
 
Your last 3 robots have been extremely ambitious, unique, and well done. Why are your robots so different? What are you doing that's different from other teams? During brainstorming, if someone proposed something like your 2015 robot, most teams would throw it out immediately. How do you encourage such unique thinking, and when you move into the design process, how do you know you'll be able to pull off your crazy robots?

AustinSchuh 02-06-2016 01:53

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by mr.roboto2826 (Post 1590574)
Would you be able to go into a little more depth on your prototyping process and iteration? I noticed in the photo you had a rail on the bottom of your shooter, but your final design had none.

The rail in our prototype was there to help feed the ball in consistently. It was added quickly after hand-feeding proved not to be reliable enough.

We started by controlling every variable possible. Once we had something that performed well enough, we started looking at the CAD and what the final solution was going to look like. We then measured the prototype and tried to make the model match. Whenever we found a place where we wanted the model to diverge from the prototype, we tweaked the prototype to match the proposed design and re-ran our tests. Our final design has a plate holding the ball until it is grabbed by the wheels, which serves the same purpose. We pulled the bar back until it was just about as long as the final design wanted, and verified that it still worked.

Quote:

Originally Posted by mr.roboto2826 (Post 1590574)
Also, could you explain the pneumatic setup on your feeder into the shooter?

That piston linkage alone took an entire weekend of work. Open up the model and take a look. The left-right spacing is killer, especially since you want a ~3 pound grabbing force.

This year was different in that you didn't need to shoot multiple balls. We chose to take advantage of that by designing a shooter to hold one ball very securely and load it very consistently. That pointed us towards something which grabbed the ball in a "cage" with pistons and linkages, plus some sort of piston loading mechanism. After a couple of conceptual iterations in discussions, someone proposed having 2 links, where the links were driven relative to each other to grab and release the ball, and the pair of links rotated together to feed the ball into the flywheels. We very carefully worked through all the geometry in SolidWorks sketches, figured out all the components, and then worked on finalizing the design. We tried at least 4 different piston models before we found one that would work.

I'm not sure I explained the pistons as well as I could have. Ask for clarification where I wasn't clear enough.

AustinSchuh 03-06-2016 00:47

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by Gregor (Post 1590761)
Your last 3 robots have been extremely ambitious, unique, and well done. Why are your robots so different? What are you doing that's different from other teams? During brainstorming, if someone proposed something like your 2015 robot, most teams would throw it out immediately. How do you encourage such unique thinking, and when you move into the design process, how do you know you'll be able to pull off your crazy robots?

That's a hard one to answer, kind of like trying to provide a recipe for innovation...

One of the things that makes 971 unique is that we are willing and able to move significant mechanical complexity into software. 2014 and 2015 were good examples of this. 2016 wasn't as crazy from a software point of view, but we also have gotten good at making robots work. We have strong software mentors and students with experience in controls and automation along with strong mechanical mentors and students to build designs good enough to be accurately controlled. The mechanical bits make the software easy (or at least possible).

We tend to start with a list of requirements for the robot, quickly decide what we know is going to work from previous years (and actually start designing it in parallel), and then sketch out concept sketches of the rest. We do a lot of big picture design with blocky mechanisms and sketches to work out geometry and the big questions. We then start sorting out all the big unknowns in that design, and working our way from there to a final solution. I think that a lot of what makes us different is that we impose interesting constraints on our robots, and then work really hard to satisfy them. We come up with most of our most out of the box ideas while trying to figure out how to make the concept sketch have the CG, range of motion, tuck positions, and speeds that we want before we get to the detailed design.

In 2014, we wanted to build something inspired by 1114 in 2008. We put requirements on that to be able to quickly intake the ball and also be able to drive around with the claw inside our frame perimeter, ready to grab the ball. We tend to get to the requirements pretty quickly, and then start figuring out the best way to achieve them. After a bunch of challenging packaging questions, someone proposed that rather than using a piston to open and close the claw, we could just individually drive the two claws with motors. That breakthrough let us satisfy our packaging requirements much more easily, and ended up being a pretty unique design. That got us comfortable building robots with more and more software.

In 2015, we were pretty confident that we didn't need to cut into the frame perimeter. Unfortunately, by the time we had determined that that requirement was hurting us, we had already shipped the drive base. We spent a lot of time working through various alternatives to figure out how to stack on top of the drive base and then place the stack. In the end, the only way we could make it work was what you saw. We knew that we were very close on weight, and again used software to let us remove the mechanical coupling between the left and right sides to reduce weight. We were confident from our 2014 experience that we could make that work. I like to tell people that our 2015 robot was a very good execution of the wrong strategy... We wouldn't build it again, so maybe you guys are all smarter than us there ;)

2016 fell together much more easily than I expected. It had more iteration than we've ever done before (lots of V2s on mechanisms), which helps it look polished. Honestly, most of the hard work in 2016 was in the implementation, not the concept. We wanted a shooter that shot from high up, and the way to do that was to put it on an arm.

We are getting to the point where we have a lot of knowledge built up around what fails in these robots and what to pay attention to. That part has just taken a lot of time and a lot of hard work. We don't spend much time debating where to use low backlash gearboxes, or figuring out how to control or sense various joints. Sometimes, I think we design the robots we design because we over-think problems and then come up with solutions to them. We work through a lot of math for gearbox calculations, power usage, etc., and do some basic simulations on some of the more critical subsystems. We also do a gut check to make sure that we think the subsystems will work when we build them, and we build good enough prototypes to prove out anything we are uncertain about.

AirplaneWins 07-06-2016 19:17

Re: FRC971 Spartan Robotics 2016 Release Video
 
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?

Schroedes23 09-06-2016 13:47

Re: FRC971 Spartan Robotics 2016 Release Video
 
Can you reveal the secret of how you dealt with the changing compression of the boulders?

Travis Schuh 10-06-2016 01:16

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by Schroedes23 (Post 1592116)
Can you reveal the secret of how you dealt with the changing compression of the boulders?

We didn't really have a secret, other than that the double wheeled shooter seemed to be not very sensitive to them (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.

Travis Schuh 10-06-2016 01:24

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AirplaneWins (Post 1591856)
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?

I can't quite speak for our vision team on all of the implementation, but I can fill in some high level details.

We do have two cameras. There was an early thought to use them for stereo (do separate target recognition in both cameras, and get depth from the offset distance), and we had a bench prototype of this with good preliminary results. We ended up not needing accurate depth info, so the two cameras were just used for finding the center of the goal. We could have done that with one camera mounted in the center, but that is easier said than done.

We were using the Jetson for vision processing and were happy with its performance.

AustinSchuh 12-06-2016 01:32

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AirplaneWins (Post 1591856)
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?

Sorry for the delayed response. Life got in the way of robots again :rolleyes:

As Travis said, we wanted to do stereo, but didn't get around to verifying that it worked well enough to start using the distance it reported. One side effect of the stereo cameras was that we didn't need to deal with the transforms required to handle the camera not being centered. Our shooter didn't have any space above or below the ball for a camera. The bottom of the shooter rested on the bellypan, and the top just cleared the low bar.

We did the shape detection on the Jetson TK1, and passed back a list of the U shapes found to the roboRIO over UDP in a protobuf, including the coordinates of the 4 corners from each camera. We found that we didn't need to do color thresholding, just intensity thresholding followed by shape detection. This ran at 20 Hz at 1280x1024 (I think), all on the CPU. The roboRIO then matched up the targets based on the angle of the bottom of the U.
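
To give a flavor of that kind of pipeline, here is a rough sketch (OpenCV 4 signatures; the threshold and area numbers are assumptions, not what actually ran on the Jetson):

Code:

import cv2
import numpy

# Synthetic frame: dark background with one bright rectangle standing in
# for the retroreflective tape.
frame = numpy.zeros((480, 640), dtype=numpy.uint8)
cv2.rectangle(frame, (300, 200), (400, 260), 255, thickness=-1)

# Intensity threshold, then find and approximate shapes.
_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
  if cv2.contourArea(contour) < 500:
    continue
  corners = cv2.approxPolyDP(contour, 5, True)
  print(len(corners), cv2.boundingRect(contour))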

We were very careful to record the timestamps through the system. We recorded the timestamp at which v4l2 reported the image was received by the kernel, the timestamp at which it was received by userspace on the Jetson, the timestamp at which it was sent to the roboRIO, and the timestamp at which the processed image was received on the roboRIO. That let us back out the projected time at which the image was captured on the Jetson, in the roboRIO clock, to within a couple ms. We then saved all the gyro headings over the last second and the times at which they were measured, and used those two pieces of data to interpolate the heading when the image was taken, and therefore the current heading of the target. This, along with our well tuned drivetrain control loops, let us stabilize on the target very quickly.
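
The interpolation itself is simple; a sketch of the idea (not our actual implementation):

Code:

import bisect

def heading_at(capture_time, times, headings):
  """Linearly interpolates the gyro heading at the image capture time.

  times/headings are the buffered gyro samples over the last second, and
  capture_time is the image time projected into the roboRIO clock.
  """
  i = bisect.bisect_left(times, capture_time)
  if i <= 0:
    return headings[0]
  if i >= len(times):
    return headings[-1]
  t0, t1 = times[i - 1], times[i]
  h0, h1 = headings[i - 1], headings[i]
  return h0 + (h1 - h0) * (capture_time - t0) / (t1 - t0)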

Ask any follow-on questions that you need.

AustinSchuh 12-06-2016 01:34

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by Travis Schuh (Post 1592210)
We didn't really have a secret, other than that the double wheeled shooter seemed to be not very sensitive to them (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.

This was also helped by our prototyping team spending significant time figuring out which compression seemed to have the least shot variation. They spent a lot of time shooting balls and measuring the spread.

ranlevinstein 14-06-2016 09:39

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1563453)
Model based control is required :) Once you get the hang of it, I find it lets us do cooler stuff than non-model based controls. We plot things and try to figure out which terms have errors in them to help debug it.

The states are:
[shoulder position; shoulder velocity; shooter position (relative to the base); shooter velocity (relative to the base); shoulder voltage error; shooter voltage error]

The shooter is connected to the superstructure, but there is a coordinate transformation to have the states be relative to the ground. This gives us better control over what we actually care about.

The voltage errors are what we use instead of integral control. This lets the Kalman filter learn the difference between what the motor is being asked to do and what is actually achieved, and lets us compensate for it. If you work the math out, volts -> force.

First of all, your robot is truly amazing!

I have a few questions about your control.

1. I have read about your delta-u controller, and I am not sure if I understood it correctly, so I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as dx/dt = Ax + Bu, where u is the rate of change of the voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from the position using an observer. You use an integrator on u and command the motors with it. Into the voltage error state you feed the difference between the estimated voltage from the observer and the integrated u commands.

2. Will the delta-u controller work the same if I command the motors with u rather than its integral, and instead use the integral of the voltage error as a state? Why did you choose this form for the controller and not another?

3. Is the delta-u controller, in the end, a linear combination of position error, velocity error, and voltage error?

4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?

5. How did you tune the Q and R matrices in the Kalman filter?

6. How do you tune the parameters that transform the motion profile into the feed-forward you can feed to your motors?

7. How did you create 2 dimensional trajectories for your robot during auto?

8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?

Thank you very much! :)

AustinSchuh 15-06-2016 01:54

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by ranlevinstein (Post 1592782)
First of all, your robot is truly amazing!

I have a few questions about your control.

1. I have read about your delta-u controller, and I am not sure if I understood it correctly, so I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as dx/dt = Ax + Bu, where u is the rate of change of the voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from the position using an observer. You use an integrator on u and command the motors with it. Into the voltage error state you feed the difference between the estimated voltage from the observer and the integrated u commands.

2. Will the delta-u controller work the same if I command the motors with u rather than its integral, and instead use the integral of the voltage error as a state? Why did you choose this form for the controller and not another?

You nailed it.

Delta-U won't work if you command the motors with U, since your model doesn't match your plant (off by an integral).

I've recently switched formulations to the one we used this year and last, and I think the new formulation is easier to understand.

If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
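
In numpy terms, building the augmented matrices looks roughly like this (a sketch of the formulation above, not our generated code):

Code:

import numpy

def augment_with_voltage_error(A, B):
  """Augments a continuous-time plant with a voltage error state.

  The state becomes [x; voltage_error].  The voltage error enters the
  dynamics through B and is modeled as constant (zero derivative).
  """
  num_states = A.shape[0]
  num_inputs = B.shape[1]
  A_aug = numpy.vstack((
      numpy.hstack((A, B)),
      numpy.zeros((num_inputs, num_states + num_inputs))))
  B_aug = numpy.vstack((B, numpy.zeros((num_inputs, num_inputs))))
  return A_aug, B_aug

The observer runs on the augmented matrices, while K is designed against the original (A, B); the full controller on the augmented state is then the [K, 1] described above.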

Quote:

Originally Posted by ranlevinstein (Post 1592782)
3. Is the delta-u controller, in the end, a linear combination of position error, velocity error, and voltage error?

Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.

Quote:

Originally Posted by ranlevinstein (Post 1592782)
4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?

It's just another way to tune a state space observer. If you check the math, assuming fixed gains, the Kalman gain converges to a fixed value as time evolves. You can solve for that Kalman gain and use it all the time, which results in the update step you find in a regular observer.

Honestly, I end up tuning it one way and then looking at the poles directly at the end to see how the tuning affected the results.
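
If you want to see the fixed-gain equivalence concretely, scipy can solve for the converged Kalman gain directly (a sketch, assuming a discrete-time model and numpy arrays):

Code:

import numpy
import scipy.linalg

def steady_state_kalman_gain(A, C, Q, R):
  """Converged Kalman gain, via the dual discrete algebraic Riccati equation.

  Q is the process noise covariance and R is the measurement noise
  covariance.
  """
  P = scipy.linalg.solve_discrete_are(A.T, C.T, Q, R)
  return P @ C.T @ numpy.linalg.inv(C @ P @ C.T + R)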

Quote:

Originally Posted by ranlevinstein (Post 1592782)
5. How did you tune the Q and R matrices in the Kalman filter?

The rule of thumb I've been using is to set the diagonal terms of Q to the square of a reasonable error quantity for each state, and to try to guess how much model uncertainty there is. I also like to look at the resulting Kalman gain to see how crazy it is, and then also plot the input vs the output of the filter and look at how well it performs during robot moves. I've found that if I look at things from enough angles, I get a better picture of what's going on.
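
In code, the diagonal rule of thumb is just (made-up numbers for a shooter wheel):

Code:

import numpy

typical_position_error = 0.01  # radians
typical_velocity_error = 1.0   # radians / sec
Q = numpy.diag([typical_position_error ** 2,
                typical_velocity_error ** 2])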

Quote:

Originally Posted by ranlevinstein (Post 1592782)
6. How do you tune the parameters that transform the motion profile into the feed-forward you can feed to your motors?

I didn't. I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasible), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

This means that you have 3 goals running around: the un-profiled goal, the profiled goal, and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.

The end result was that our model defined the feed-forwards constants, so it was free :) We also were able to gain schedule the feed-forwards terms for free as well.

FYI, this was the first year that we did feed-forwards. Before, we just relied on the controllers compensating. You can see it in some of the moves of the 2015 robot, where it'll try to do a horizontal move but end up with a steady state offset while moving, due to the lack of feed-forwards.

Quote:

Originally Posted by ranlevinstein (Post 1592782)
7. How did you create 2 dimensional trajectories for your robot during auto?

We cheated. We had a rotational trapezoidal motion profile and a linear trapezoidal motion profile. We just started them at different times/positions, added them together, and let them overlay on top of each other. It was a pain to tune, but worked well enough. We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.
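
A sketch of that overlay trick (the constants are invented, and it assumes each profile is long enough to reach its max velocity):

Code:

def trapezoid(t, start, distance, max_vel, max_accel):
  """Position along a trapezoidal motion profile at time t."""
  t_accel = max_vel / max_accel
  d_accel = 0.5 * max_accel * t_accel ** 2
  t_cruise = (distance - 2.0 * d_accel) / max_vel
  t = max(0.0, min(t, 2.0 * t_accel + t_cruise))
  if t < t_accel:
    return start + 0.5 * max_accel * t ** 2
  if t < t_accel + t_cruise:
    return start + d_accel + max_vel * (t - t_accel)
  t_left = 2.0 * t_accel + t_cruise - t
  return start + distance - 0.5 * max_accel * t_left ** 2

# Overlaying a linear and a rotational profile started at different times:
distance_m = trapezoid(1.0, 0.0, 3.0, 2.0, 4.0)
heading_rad = trapezoid(1.0 - 0.5, 0.0, 1.57, 3.0, 6.0)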

Quote:

Originally Posted by ranlevinstein (Post 1592782)
8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?

Thank you very much! :)

Our auto code was a lot of "kick off A, wait until condition, kick off B, wait until condition, kick off C, ...". So we'd start a motion profile in the drive, wait until we had moved X far, and then start the motion profile for the arm. The controllers would calculate the profiles as they went, so all auto actually did was coordinate when to ask which subsystem to go where. With enough motion profiles, and when you make sure they aren't saturated, you end up with a pretty deterministic result.
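
Structurally it looked something like this sketch (the subsystem names and interfaces here are invented stand-ins, not our actual code):

Code:

import time

def wait_until(condition, poll_s=0.005):
  """Spins until the condition function returns True."""
  while not condition():
    time.sleep(poll_s)

def run_auto(drive, arm, shooter):
  drive.start_profile(distance=4.5)            # kick off A
  wait_until(lambda: drive.position() > 1.2)   # past the defense
  arm.start_profile(angle=1.1)                 # kick off B
  wait_until(lambda: arm.done() and drive.done())
  shooter.fire()                               # kick off C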

Awesome questions, keep them coming! I love this stuff.

ranlevinstein 15-06-2016 10:21

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1592867)
I've recently switched formulations to the one we used this year and last, and I think the new formulation is easier to understand.

If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.

Thank you for your fast reply!

Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/a...1&d=1419983380

Quote:

Originally Posted by AustinSchuh (Post 1592867)
Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.

I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error

Isn't that a PI controller plus integral control of the voltage? This controller, as far as I know, should have integral windup. What am I missing?

Quote:

Originally Posted by AustinSchuh (Post 1592867)
I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasible), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

This means that you have 3 goals running around: the un-profiled goal, the profiled goal, and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.

The end result was that our model defined the feed-forwards constants, so it was free. We also were able to gain schedule the feed-forwards terms for free as well.

WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix, and you are looking for the u vector that minimizes the expression? In what way are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = A*R(n) + B*u, where u is the correct feed forward?

Can you explain how you observed the voltage?

Quote:

Originally Posted by AustinSchuh (Post 1592867)
We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.

This looks very interesting. Why did you choose this approach instead of all the other available methods?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor, and we are not sure whether to move to model based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge if we don't have a mentor who knows this. Do you have any advice?

Thank you very much!

AustinSchuh 16-06-2016 02:32

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by ranlevinstein (Post 1592898)
Thank you for your fast reply!

Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/a...1&d=1419983380

For this subsystem, yes. More generally, they may diverge, but that's a very good place to start.

Quote:

Originally Posted by ranlevinstein (Post 1592898)
I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error

Isn't that a PI controller plus integral control of the voltage? This controller, as far as I know, should have integral windup. What am I missing?

It's a little bit trickier to reason about the controller that way than you might think. The voltage error term is really the error between what you are telling the controller it should be commanding and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped; this should change if you have a profile), it will be the difference between the estimated plant u and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.

The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.

You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate. I could think about it more easily.
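
For example, here is a toy simulation of the idea (all plant constants and observer gains are made up): a one-state velocity plant with a hidden 1 volt disturbance. The observer's voltage error state walks right to it.

Code:

import numpy

dt = 0.005
a, b = -5.0, 10.0                 # v' = a * v + b * (u + u_error)
A = numpy.array([[1.0 + a * dt, b * dt],  # Euler-discretized augmented A
                 [0.0,          1.0]])
B = numpy.array([[b * dt], [0.0]])
C = numpy.array([[1.0, 0.0]])
L = numpy.array([[0.4], [0.8]])   # hand-picked observer gain

v_actual, u_error = 0.0, 1.0      # the real plant has a hidden 1 V offset
x_hat = numpy.zeros((2, 1))
for _ in range(2000):
  u = 0.0                         # controller commanding nothing
  y = numpy.array([[v_actual]])
  x_hat = A @ x_hat + B * u + L @ (y - C @ x_hat)
  v_actual += dt * (a * v_actual + b * (u + u_error))
print(x_hat[1, 0])                # converges to ~1.0, the hidden voltage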

Quote:

Originally Posted by ranlevinstein (Post 1592898)
WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix, and you are looking for the u vector that minimizes the expression? In what way are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = A*R(n) + B*u, where u is the correct feed forward?

Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs perfectly before trying it on a bot.

That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.

Quote:

Originally Posted by ranlevinstein (Post 1592898)
Can you explain how you observed the voltage?

You can mathematically prove that the observer can observe the voltage, as long as you tune it correctly. This is called observability, and it can be calculated from some matrix products, given A and C. For most controls people, that is enough of an explanation ;)

Intuitively, you can think of the observer estimating where the next sensor reading should be, measuring what it got, and then attributing the error to some amount of error in each state. So, if the position is always reading higher than expected, it will slowly squeeze the error into the voltage error term, where it will finally influence the model to not always read high anymore. You'll then have a pretty good estimate of the voltage required to do what you are currently doing.
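
Concretely, those matrix products form the observability matrix, and the check is a rank condition (a quick sketch):

Code:

import numpy

def is_observable(A, C):
  """Checks the rank of the observability matrix [C; C*A; C*A^2; ...]."""
  n = A.shape[0]
  blocks = [C]
  for _ in range(n - 1):
    blocks.append(blocks[-1] @ A)
  return numpy.linalg.matrix_rank(numpy.vstack(blocks)) == n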

Quote:

Originally Posted by ranlevinstein (Post 1592898)
This looks very interesting. Why did you choose this approach instead of all the other available methods?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor, and we are not sure whether to move to model based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge if we don't have a mentor who knows this. Do you have any advice?

Thank you very much!

The tricky part of the math is that robots can't move sideways. This type of system is known as a non-holonomic system. Doing good non-holonomic control is an open research topic, since the system is nonlinear. This paper was recommended to me by Jared Russell, and the results in its results section are actually really good. It generates provably stable paths that are feasible. A-Star and all the graph based path planning algorithms struggle to generate feasible paths.

We have a small number of students who get really involved in the controls on 971. Some years are better than others, but that's how this type of thing goes. There is a lot of software on a robot that isn't controls. I'm going to see if the paper actually gets good results, and then work to involve students to see if we can fix some of the shortcomings of the model that one of the examples in the paper uses, and help make improvements. I think that'll let me simplify the concept somewhat for them and get them playing around with the algorithm. I've yet to try to involve students in something this mathematically challenging, so I'll know more once I've pulled it off... I mostly mention the paper as something fun that you can do with controls in this context.

When I get the proper amount of interest and commitment, I sit down with the student and spend a significant amount of time teaching them how and why state space controllers work. I like to do it by rederiving some of the math to help demystify it, and having them work through examples. I've had students take that knowledge pretty far and do some pretty cool things with it. Teaching someone something this tricky is a lot of work. We tend to have about 1 student every year actually take the time to succeed. Sometimes more, sometimes less.

Doing model based controls without good help can be tricky. Most of the time, I honestly recommend focusing on writing test cases and simulations for simpler controllers (PID, for example) before you start looking at model based controls. This gets you ready for what you need to do for more complicated controllers, and if you were to stop there having learned dependency injection and testing, that would already be an enormous success. :) The issue is that most of this stuff is upper division college level material, and sometimes graduate level material. Take a subsystem on your robot, and try to write a model based controller for it over the off-season.

ranlevinstein 16-06-2016 04:58

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1593037)
If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.

I modeled it as you said, and I got:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.
I am a bit confused about why this is true, because the voltage error is in volts and u is in volts/second, so you are adding numbers with different units.

Quote:

Originally Posted by AustinSchuh (Post 1593039)
It's a little bit trickier to reason about the controller that way than you might think. The voltage error term is really the error between what you are telling the controller it should be commanding and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped; this should change if you have a profile), it will be the difference between the estimated plant u and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.

The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.

You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate. I could think about it more easily.

I am still having some problems understanding it. If the system is behaving just like it should, then the integral of the voltage error will be zero, and then there is just a PI controller. In my mind it makes a lot more sense to have:
u = constant * position error + constant * velocity error + constant * integral of voltage error
Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of position error in your controller.
What am I missing?

Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?

Quote:

Originally Posted by AustinSchuh (Post 1593039)
Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs perfectly before trying it on a bot.

That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.

I am having some trouble taking the derivative of the expression when I leave all the matrices as parameters. How did you do it? Did you get a parametric solution?

I was wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.

Thank you so much! Your answers helped my team and me a lot!:)

Mike Schreiber 16-06-2016 11:26

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1586746)
For the intake, we've gotten really good at timing belt reductions, and the single reduction from there would have been required anyway, since we needed to power the gearbox from the middle of the shaft. The VP wouldn't have actually made it much simpler.


.....


We've been running timing belt reductions as the first stage since 2013, and have really liked it. They are much quieter, and we don't see wear.

Is there anything special you're doing that I'm not seeing? Looks like math center-to-center distances - no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from over-sized hex, what else are you doing to remove lash from the system?

Awesome robot - as always.

ranlevinstein 16-06-2016 15:33

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1592867)
I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasabile), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

I managed to solve for u, assuming Q is symmetric and the trajectory is feasible. I got:
u = (B^T * Q * B)^-1 * (r(n+1)^T - r(n)^T * A^T) * Q * B
Is that correct?

Travis Schuh 16-06-2016 22:15

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by Mike Schreiber (Post 1593072)
Is there anything special you're doing that I'm not seeing? Looks like math center-to-center distances - no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from over-sized hex, what else are you doing to remove lash from the system?

Awesome robot - as always.

We don't do any tensioning on our first stage belts; it is what you have described. I don't think there are huge backlash savings from having a belt vs a gear drive on that stage, because the backlash at that stage is greatly reduced as you go through the reduction, and the tooth to tooth backlash is minimal. There is also the added benefit of belts being quieter than gears at these speeds, but that is more of a nice to have.

Most of our backlash reduction comes from eliminating hex backlash. We also do the standard trick of running as large a chain reduction as you can on the last stage, and then keeping that chain tensioned well. Going forward we are going to be using #35 whenever we can for these reductions to avoid stiffness issues, which also helps with the controls.

AustinSchuh 16-06-2016 23:57

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by ranlevinstein (Post 1593103)
I managed to solve for u, assuming Q is symmetric and the trajectory is feasible. I got:
u = (B^T * Q * B)^-1 * (r(n+1)^T - r(n)^T * A^T) * Q * B
Is that correct?

Code:

import numpy


def TwoStateFeedForwards(B, Q):
  """Computes the feed forwards constant for a 2 state controller.

  This will take the form U = Kff * (R(n + 1) - A * R(n)), where Kff is the
  feed-forwards constant.  It is important that Kff is *only* computed off
  the goal and not the feed back terms.

  Args:
    B: numpy.Matrix[num_states, num_inputs] The B matrix.
    Q: numpy.Matrix[num_states, num_states] The Q (cost) matrix.

  Returns:
    numpy.Matrix[num_inputs, num_states]
  """

  # We want to find the optimal U such that we minimize the tracking cost.
  # This means that we want to minimize
  #  (B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
  # TODO(austin): This doesn't take into account the cost of U

  # Note: this assumes Q is symmetric, so Q.T == Q.
  return numpy.linalg.inv(B.T * Q * B) * B.T * Q.T

:)

kylestach1678 17-06-2016 03:48

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by AustinSchuh (Post 1592867)
The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

This cost function is just the state error part of LQR, correct?
Quote:

Originally Posted by AustinSchuh (Post 1593165)
Code:

  return numpy.linalg.inv(B.T * Q * B) * B.T * Q.T

I noticed that this solution ends up evaluating to the (pseudo)inverse of B when Q is a constant multiple of the identity matrix, which is the solution to R(n+1)=A*R(n)+B*u when u=Kff*(R(n+1)-A*R(n)). What is the reasoning behind using the LQR weighted solution instead of the simpler version?

thatprogrammer 17-06-2016 20:17

Re: FRC971 Spartan Robotics 2016 Release Video
 
You set your wheel velocity to 640 in your shooter code. I can't figure out what unit of measure this 640 is in; RPM would be too slow, while RPS would be too fast. What unit do you use, and is it related to your model based calculation of everything? (Do you have any tips for starting to learn how to run everything using models?)

kylestach1678 17-06-2016 20:49

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by thatprogrammer (Post 1593240)
You set your wheel velocity to 640 in your shooter code. I can't figure out what unit of measure this 640 is in; RPM would be too slow, while RPS would be too fast. What unit do you use, and is it related to your model based calculation of everything? (Do you have any tips for starting to learn how to run everything using models?)

Radians per second, I would assume. Everything in standard units :D. Using radians makes the derivation of the models simpler, especially when the rotation eventually gets transformed into linear motion, and it is nice to be consistent across the board.
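
A quick sanity check on that assumption:

Code:

import math

omega = 640.0                        # rad/s
rpm = omega * 60.0 / (2.0 * math.pi)
print(round(rpm))                    # ~6112 RPM, plausible for a shooter wheel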

AustinSchuh 27-06-2016 22:56

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by ranlevinstein (Post 1593043)
I modeled it as you said, and I got:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.
I am a bit confused about why this is true, because the voltage error is in volts and u is in volts/second, so you are adding numbers with different units.

Nice catch. Maybe I wasn't clear, but U changed units from volts/sec to volts, and the integrator on the output of the plant disappeared.

Quote:

Originally Posted by ranlevinstein (Post 1593043)
I am still having some problems understanding it. If the system is behaving just like it should, then the integral of the voltage error will be zero, and then there is just a PI controller. In my mind it makes a lot more sense to have:
u = constant * position error + constant * velocity error + constant * integral of voltage error
Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of position error in your controller.
What am I missing?

I *think* you are off by an integrator again.

u = Kp * x + Kv * v + voltage_error

So, if voltage_error = 0 (the model is behaving as expected), then you don't add anything. Ask again if I read too fast.

Quote:

Originally Posted by ranlevinstein (Post 1593043)
Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?

kg * m^2

Quote:

Originally Posted by ranlevinstein (Post 1593043)
I was wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.

Thank you so much! Your answers helped my team and me a lot!:)

You have the same problem with a normal controller; the linearity assumption breaks down there too. We just cap the accumulator to ±12 volts.
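
The cap itself is nothing fancy; a sketch (the 12 volt limit is the real number, the function shape is illustrative):

Code:

def cap_voltage(accumulated_u, limit=12.0):
  """Clamps the delta-u accumulator to what the battery can deliver."""
  return max(-limit, min(limit, accumulated_u))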

To solve that all correctly, you'll want to use a Model Predictive Controller. They are able to actually take saturation into account correctly. Unfortunately, they aren't easy to work with. We haven't deployed one to our robot yet. (Go read up on them a bit. They are super cool :) That was one of my favorite classes in college, if not my favorite.)

It's been another busy week. Sorry for taking so long. I started a reply a week ago and couldn't find time to finish it.

AustinSchuh 27-06-2016 23:00

Re: FRC971 Spartan Robotics 2016 Release Video
 
Quote:

Originally Posted by kylestach1678 (Post 1593174)
This cost function is just the state error part of LQR, correct?

Yes. Since it doesn't have the U part of the LQR controller, it isn't provably stable, and I've seen ones which aren't...

Quote:

Originally Posted by kylestach1678 (Post 1593174)
I noticed that this solution ends up evaluating to the (pseudo)inverse of B when Q is a constant multiple of the identity matrix, which is the solution to R(n+1)=A*R(n)+B*u when u=Kff*(R(n+1)-A*R(n)). What is the reasoning behind using the LQR weighted solution instead of the simpler version?

Good catch. It worked, so we stopped? ;) I'd like to revisit that math this summer. It doesn't always return an answer which converges, which suggests to me that something is wrong with it.
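
For the record, the observation is easy to check numerically (made-up B):

Code:

import numpy

B = numpy.array([[0.5], [2.0]])
Q = 3.0 * numpy.eye(2)  # a constant multiple of the identity
Kff = numpy.linalg.inv(B.T @ Q @ B) @ B.T @ Q
print(numpy.allclose(Kff, numpy.linalg.pinv(B)))  # True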

