Chief Delphi > FIRST > Robot Showcase > FRC971 Spartan Robotics 2016 Release Video
#106 | 03-06-2016, 00:47 | AustinSchuh (FRC #0971 Spartan Robotics / #254 The Cheesy Poofs, Engineer)
Re: FRC971 Spartan Robotics 2016 Release Video

Quote:
Originally Posted by Gregor View Post
Your last 3 robots have been extremely ambitious, unique, and well done. Why are your robots so different? What are you doing that's different from other teams? During brainstorming, when someone proposes something like your 2015 robot, everyone would throw it out immediately. How do you encourage such unique thinking, and when you move into the design process, how do you know you'll be able to pull off your crazy robots?
That's a hard one to answer, kind of like trying to provide a recipe for innovation...

One of the things that makes 971 unique is that we are willing and able to move significant mechanical complexity into software. 2014 and 2015 were good examples of this. 2016 wasn't as crazy from a software point of view, but we also have gotten good at making robots work. We have strong software mentors and students with experience in controls and automation along with strong mechanical mentors and students to build designs good enough to be accurately controlled. The mechanical bits make the software easy (or at least possible).

We tend to start with a list of requirements for the robot, quickly decide what we know is going to work from previous years (and actually start designing it in parallel), and then sketch out concepts for the rest. We do a lot of big picture design with blocky mechanisms and sketches to work out geometry and the big questions. We then start sorting out all the big unknowns in that design, and work our way from there to a final solution. I think that a lot of what makes us different is that we impose interesting constraints on our robots, and then work really hard to satisfy them. We come up with most of our out-of-the-box ideas while trying to figure out how to make the concept sketch have the CG, range of motion, tuck positions, and speeds that we want before we get to the detailed design.

In 2014, we wanted to build something inspired by 1114 in 2008. We put requirements on that to be able to quickly intake the ball and also be able to drive around with the claw inside our frame perimeter, ready to grab the ball. We tend to get to the requirements pretty quickly, and then start figuring out the best way to achieve them. After a bunch of challenging packaging questions, someone proposed that rather than using a piston to open and close the claw, we could just individually drive the two claws with motors. That breakthrough let us satisfy our packaging requirements much more easily, and ended up being a pretty unique design. That got us comfortable building robots with more and more software.

In 2015, we were pretty confident that we didn't need to cut into the frame perimeter. Unfortunately, by the time we had determined that that requirement was hurting us, we had already shipped the drive base. We spent a lot of time working through various alternatives to figure out how to stack on top of the drive base and then place the stack. In the end, the only way we could make it work was what you saw. We knew that we were very close on weight, and again used software to allow us to remove the mechanical coupling between the left and right sides to reduce weight. We were confident from our 2014 experience that we could make that work. I like to tell people that our 2015 robot was a very good execution of the wrong strategy... We wouldn't build it again, so maybe you guys are all smarter than us there.

2016 came together much more easily than I expected. It had more iteration than we've ever done before (lots of V2 on mechanisms), which helps it look polished. Honestly, most of the hard work in 2016 was in the implementation, not the concept. We wanted a shooter that shot from high up, and the way to do that was to put it on an arm.

We are getting to the point where we have a lot of knowledge built up around what fails in these robots and what to pay attention to. That part has just taken a lot of time and a lot of hard work. We don't spend much time debating where to use low-backlash gearboxes, or figuring out how to control or sense various joints. Sometimes, I think we design the robots we design because we over-think problems and then come up with solutions to them. We work through a lot of math for gearbox calculations, power usage, etc., and do some basic simulations on the more critical subsystems. We also do a gut check to make sure that we think the subsystems will work when we build them, and we have good enough prototypes to prove out anything we are uncertain about.
#107 | 07-06-2016, 19:17 | AirplaneWins (FRC #2848)

Could you explain your vision tracking process this year? I heard you guys used 2 cameras. Also, what coprocessor did you use, if any?
#108 | 09-06-2016, 13:47 | Schroedes23 / Noah Schroeder (FRC #1625 Winnovation, Alumni)

Can you reveal the secret of how you dealt with the changing compression of the boulders?
#109 | 10-06-2016, 01:16 | Travis Schuh (FRC #0971 Spartan Robotics, Engineer)

Quote:
Originally Posted by Schroedes23 View Post
Can you reveal the secret of how you dealt with the changing compression of the boulders?
We didn't really have a secret, other than that the double wheeled shooter seemed not to be very sensitive to it (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.
#110 | 10-06-2016, 01:24 | Travis Schuh (FRC #0971 Spartan Robotics, Engineer)

Quote:
Originally Posted by AirplaneWins View Post
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. Also, what coprocessor did you use, if any?
I can't quite speak for our vision team on all of the implementation, but I can fill in some high level details.

We do have two cameras. There was an early thought to use them to do stereo (do separate target recognition in both cameras, and get depth from the offset distance), and we had a bench prototype of this that had good preliminary results. We ended up not needing accurate depth info, so the two cameras were just used for finding the center of the goal. We could have done that with one camera mounted centered, but that is easier said than done.

We were using the Jetson for vision processing and were happy with its performance.
#111 | 12-06-2016, 01:32 | AustinSchuh (FRC #0971 Spartan Robotics / #254 The Cheesy Poofs, Engineer)

Quote:
Originally Posted by AirplaneWins View Post
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. Also, what coprocessor did you use, if any?
Sorry for the delayed response. Life got in the way of robots again.

As Travis said, we wanted to do stereo, but didn't get around to verifying that it worked well enough to start using the distance that it reported. One of the side effects of stereo cameras was that we didn't need to deal with the transforms required to deal with the camera not being centered. Our shooter didn't have any space above or below the ball for a camera. The bottom of the shooter rested on the bellypan, and the top just cleared the low bar.

We did the shape detection on the Jetson TK1, and passed a list of detected U shapes back to the roboRIO over UDP in a protobuf, including the coordinates of the 4 corners for each camera. We found that we didn't need to do color thresholding, just intensity thresholding followed by shape detection. This ran at 20 Hz at 1280x1024 (I think), all on the CPU. The roboRIO then matched up the targets based on the angle of the bottom of the U.
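Since the post doesn't give the actual message schema, here is a hedged sketch of the idea of shipping per-camera corner lists over UDP. The real system used protobufs; this stand-in uses Python's struct module instead, and the hostname and port in the comment are invented:

```python
import socket
import struct

# Hypothetical stand-in for the protobuf message described above: one target,
# as a capture timestamp (double) plus 4 corners * (x, y) as float32s.
TARGET_FMT = "<d8f"

def pack_target(capture_time, corners):
    """Flatten 4 (x, y) corners into a small UDP payload."""
    flat = [coord for xy in corners for coord in xy]
    return struct.pack(TARGET_FMT, capture_time, *flat)

def unpack_target(payload):
    """Inverse of pack_target."""
    vals = struct.unpack(TARGET_FMT, payload)
    corners = [(vals[1 + 2 * i], vals[2 + 2 * i]) for i in range(4)]
    return vals[0], corners

# On the Jetson side this would be sent with a plain UDP socket, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(pack_target(t, corners), ("roborio-hostname.local", 9710))
```

The real message would also carry which camera saw the target and a list of targets rather than exactly one; this only illustrates the pack/send/unpack shape of the pipeline.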

We were very careful to record the timestamps through the system: the timestamp that v4l2 reported the image was received by the kernel, the timestamp at which it was received by userspace on the Jetson, the timestamp it was sent to the roboRIO, and the timestamp that the processed image was received on the roboRIO. That let us back out the projected time that the image was captured on the Jetson, in the roboRIO clock, to within a couple ms. We then saved all the gyro headings over the last second and the times at which they were measured, and used those two pieces of data to interpolate the heading when the image was taken, and therefore the current heading of the target. This, along with our well tuned drivetrain control loops, let us stabilize to the target very quickly.
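The gyro-history bookkeeping described above can be sketched like this; a minimal illustration of the idea, not 971's code:

```python
from bisect import bisect_left
from collections import deque

class HeadingHistory:
    """Keep recent (time, heading) gyro samples and linearly interpolate the
    heading at an image's projected capture time."""

    def __init__(self, horizon=1.0):
        self.horizon = horizon          # seconds of history to retain
        self.samples = deque()          # (time, heading), time increasing

    def add(self, t, heading):
        self.samples.append((t, heading))
        # Drop samples older than the horizon (the post keeps ~1 second).
        while self.samples and self.samples[0][0] < t - self.horizon:
            self.samples.popleft()

    def heading_at(self, t):
        """Heading at time t, interpolated between the bracketing samples."""
        times = [s[0] for s in self.samples]
        i = bisect_left(times, t)
        if i == 0:
            return self.samples[0][1]
        if i == len(self.samples):
            return self.samples[-1][1]
        t0, h0 = self.samples[i - 1]
        t1, h1 = self.samples[i]
        return h0 + (h1 - h0) * (t - t0) / (t1 - t0)
```

With the image's capture time backed out from the recorded timestamps, `heading_at(capture_time)` gives the robot heading when the frame was actually taken, so the target angle from vision can be referenced to the current gyro heading.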

Ask any follow-on questions that you need.
#112 | 12-06-2016, 01:34 | AustinSchuh (FRC #0971 Spartan Robotics / #254 The Cheesy Poofs, Engineer)

Quote:
Originally Posted by Travis Schuh View Post
We didn't really have a secret, other than that the double wheeled shooter seemed not to be very sensitive to it (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.
This was also helped by our prototyping team spending significant time figuring out which compression seemed to have the least shot variation. They spent a lot of time shooting balls and measuring the spread.
#113 | 14-06-2016, 09:39 | ranlevinstein (FRC #2230 General Angels, Programmer)

Quote:
Originally Posted by AustinSchuh View Post
Model-based control is required. Once you get the hang of it, I find it lets us do cooler stuff than non-model-based controls. We plot things and try to figure out which terms have errors in them to help debug it.

The states are:
[shoulder position; shoulder velocity; shooter position (relative to the base); shooter velocity (relative to the base); shoulder voltage error; shooter voltage error]

The shooter is connected to the superstructure, but there is a coordinate transformation to have the states be relative to the ground. This gives us better control over what we actually care about.

The voltage errors are what we use instead of integral control. This lets the Kalman filter learn the difference between what the motor is being asked to do and what is actually achieved, and lets us compensate for it. If you work the math out, volts -> force.
First of all your robot is truly amazing!

I have a few questions about your control.

1. I have read about your delta-u controller and I am not sure if I understood it correctly; I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as (dx/dt) = Ax + Bu, where u is the rate of change of voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from position using an observer. You integrate u and command the motors with it. To the voltage error state you feed in the difference between the estimated voltage from the observer and the integrated u commands.

2. Will the delta-u controller work the same if I command the motors with u rather than its integral, and instead use the integral voltage error as a state? Why did you choose this form for the controller and not another?

3. Is the delta-u controller in the end a linear combination of position error, velocity error, and voltage error?

4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?

5. How did you tune the Q and R matrices in the Kalman filter?

6. How do you tune the parameters that transform the motion profile into the feed-forward you feed to your motors?

7. How did you create 2-dimensional trajectories for your robot during auto?

8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?

Thank you very much!
#114 | 15-06-2016, 01:54 | AustinSchuh (FRC #0971 Spartan Robotics / #254 The Cheesy Poofs, Engineer)

Quote:
Originally Posted by ranlevinstein View Post
First of all your robot is truly amazing!

I have a few questions about your control.

1. I have read about your delta-u controller and I am not sure if I understood it correctly; I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as (dx/dt) = Ax + Bu, where u is the rate of change of voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from position using an observer. You integrate u and command the motors with it. To the voltage error state you feed in the difference between the estimated voltage from the observer and the integrated u commands.

2. Will the delta-u controller work the same if I command the motors with u rather than its integral, and instead use the integral voltage error as a state? Why did you choose this form for the controller and not another?
You nailed it.

Delta-U won't work if you command the motors with U, since your model doesn't match your plant (off by an integral).

I've recently switched formulations to what we used this and last year, and I think the new formulation is easier to understand.

If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] [x; voltage_error] + [B; 0] u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
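The augmentation above can be written out concretely. This is a minimal numpy sketch with made-up plant numbers and placeholder gains, not a real design:

```python
import numpy as np

# Illustrative un-augmented plant dx/dt = A x + B u (numbers invented).
A = np.array([[0.0, 1.0],
              [0.0, -10.0]])
B = np.array([[0.0],
              [100.0]])

n, m = A.shape[0], B.shape[1]

# Augment with a "voltage error" state:
#   d[x; v_err]/dt = [A, B; 0, 0] [x; v_err] + [B; 0] u
A_aug = np.block([[A, B],
                  [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([B, np.zeros((m, m))])

# K is designed against the original plant (placeholder gains here, not a
# real pole placement). The augmented controller appends a 1 so the
# estimated voltage error gets subtracted back out of the command:
#   u = K_aug @ (R_aug - x_hat_aug)
K = np.array([[2.0, 0.1]])
K_aug = np.hstack([K, np.ones((m, m))])
```

The observer runs against (A_aug, B_aug), so the voltage error state is estimated alongside position and velocity, and the trailing 1 in K_aug cancels it out of the applied voltage.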

Quote:
Originally Posted by ranlevinstein View Post
3. Is the delta-u controller in the end a linear combination of position error, velocity error, and voltage error?
Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.

Quote:
Originally Posted by ranlevinstein View Post
4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?
It's just another way to tune a state space observer. If you check the math, assuming fixed gains, the Kalman gain converges to a fixed number as time evolves. You can solve for that steady-state Kalman gain and use it all the time, which results in the update step you find in a plain fixed-gain observer.

Honestly, I end up tuning it one way and then looking at the poles directly at the end to see how the tuning affected the results.

Quote:
Originally Posted by ranlevinstein View Post
5. How did you tune the Q and R matrices in the Kalman filter?
The rule of thumb I've been using is to set the diagonal terms of Q to the square of a reasonable error quantity for each state, and to try to guess how much model uncertainty there is. I also like to look at the resulting Kalman gain to see how crazy it is, and then plot the input vs the output of the filter to see how well it performs while the robot moves. I've found that if I look at things from enough angles, I get a better picture of what's going on.
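Both the rule-of-thumb Q/R construction and the gain converging to a fixed value can be seen in a small numpy sketch. The plant, sensor, and noise numbers here are illustrative guesses, not 971's:

```python
import numpy as np

# Illustrative discrete-time position/velocity model, measuring position only.
A = np.array([[1.0, 0.005],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])

# Rule of thumb: diagonal entries are squares of "reasonable" errors.
Q = np.diag([0.001**2, 0.05**2])   # guessed per-step state uncertainty
R = np.array([[0.0005**2]])        # guessed squared sensor noise

def steady_state_kalman_gain(A, C, Q, R, iters=2000):
    """Iterate the filter Riccati recursion; the gain converges to the
    fixed value you'd plug into a plain fixed-gain observer."""
    n = A.shape[0]
    P = np.eye(n)
    K = np.zeros((n, C.shape[0]))
    for _ in range(iters):
        P = A @ P @ A.T + Q                              # predict covariance
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # Kalman gain
        P = (np.eye(n) - K @ C) @ P                      # measurement update
    return K
```

Running this with different iteration counts shows the gain stops changing, which is the sense in which a tuned Kalman filter and a fixed-gain observer end up being the same update step.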

Quote:
Originally Posted by ranlevinstein View Post
6. How do you tune the parameters that transform the motion profile into the feed-forward you feed to your motors?
I didn't. I defined a cost function to minimize the error between the trajectory and a feed-forward based goal every cycle (this made the goal feasible), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

This means that you have 3 goals running around. The un-profiled goal, the profiled goal and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.

The end result was that our model defined the feed-forwards constants, so it was free. We were also able to gain schedule the feed-forwards terms for free.

FYI, this was the first year that we did feed-forwards. Before, we just relied on the controllers compensating. You can see it in some of the moves in the 2015 robot where it'll try to do a horizontal move, but end up with a steady state offset while moving due to the lack of feed-forwards.
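Setting the derivative of that quadratic in U to zero gives a closed-form Kff = (B^T Q B)^(-1) B^T Q. Here is a small numpy sketch with placeholder matrices, showing that for a feasible profile step the feed-forward recovers the exact input:

```python
import numpy as np

# Illustrative discrete-time plant R(n+1) = A R(n) + B u (numbers invented).
A = np.array([[1.0, 0.005],
              [0.0, 0.98]])
B = np.array([[0.0],
              [0.5]])
Q = np.diag([1.0, 1.0])   # weight on matching each state of the profile

# Minimizer of (B u - d)^T Q (B u - d) with d = R(n+1) - A R(n):
Kff = np.linalg.inv(B.T @ Q @ B) @ B.T @ Q

def feed_forward(R_now, R_next):
    """Voltage that best moves the model from this profile point to the next."""
    return Kff @ (R_next - A @ R_now)
```

Because Kff falls straight out of A, B, and Q, re-deriving it whenever the model changes (gain scheduling) costs nothing, which matches the "free" comment above.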

Quote:
Originally Posted by ranlevinstein View Post
7. How did you create 2-dimensional trajectories for your robot during auto?
We cheated. We had a rotational trapezoidal motion profile and a linear trapezoidal motion profile. We just started them at different times/positions and added them together, letting them overlay on top of each other. It was a pain to tune, but worked well enough. We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.
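A rough sketch of the overlay trick. The profile numbers, the start offset, and the way the angular profile is folded into per-side distances are all hypothetical, just to show the structure:

```python
def trapezoid(t, distance, max_vel, max_acc):
    """Position along a trapezoidal (or triangular) profile at time t."""
    t_acc = max_vel / max_acc
    d_acc = 0.5 * max_acc * t_acc**2
    if distance < 2 * d_acc:               # never reaches max_vel: triangle
        t_acc = (distance / max_acc) ** 0.5
        d_acc = 0.5 * max_acc * t_acc**2
        max_vel = max_acc * t_acc
    t_cruise = (distance - 2 * d_acc) / max_vel
    t_total = 2 * t_acc + t_cruise
    if t <= 0:
        return 0.0
    if t < t_acc:                          # accelerating
        return 0.5 * max_acc * t**2
    if t < t_acc + t_cruise:               # cruising
        return d_acc + max_vel * (t - t_acc)
    if t < t_total:                        # decelerating
        dt = t_total - t
        return distance - 0.5 * max_acc * dt**2
    return distance

def drive_goal(t, turn_start=0.5):
    """Overlay a linear profile with a delayed angular profile: each side of
    the drive just adds or subtracts the (scaled) arc length."""
    lin = trapezoid(t, 3.0, 2.0, 4.0)               # meters
    ang = trapezoid(t - turn_start, 1.0, 2.0, 4.0)  # radians * half-track width
    return lin + ang, lin - ang                     # left, right side goals
```

Starting the angular profile partway through the linear one produces a smooth arc mid-path without any 2D planning, which is the "cheat": it works, but tuning the start times and distances by hand is the painful part.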

Quote:
Originally Posted by ranlevinstein View Post
8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?

Thank you very much!
Our auto code was a lot of "kick off A, wait until condition, kick off B, wait until condition, kick off C, ...". So we'd start a motion profile in the drive, wait until we had moved X far, and then start the motion profile for the arm. The controllers calculated the profiles as they went, so all auto actually did was coordinate when to ask which subsystem to go where. With enough motion profiles, and when you make sure they aren't saturated, you end up with a pretty deterministic result.
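The "kick off A, wait until condition, kick off B" structure can be sketched with a generator that yields once per control cycle. The subsystem classes here are fake stand-ins, not 971's code, and the distances are invented:

```python
class FakeDrive:
    """Stand-in drivetrain: pretend the profile moves us 0.1 m per cycle."""
    def __init__(self):
        self.position = 0.0
        self.goal = 0.0
    def start_profile(self, goal):
        self.goal = goal
    def update(self):
        self.position = min(self.position + 0.1, self.goal)

class FakeArm:
    """Stand-in arm: just records the goal it was asked to go to."""
    def __init__(self):
        self.goal = None
    def start_profile(self, goal):
        self.goal = goal

def auto_mode(drive, arm):
    """One step per yield; the controllers do the real profile math."""
    drive.start_profile(2.0)          # kick off the drive across the defense
    while drive.position < 1.5:       # wait until we've moved far enough
        yield
    arm.start_profile(1.0)            # then kick off the arm
    while drive.position < 2.0:       # wait for the drive to finish
        yield

drive, arm = FakeDrive(), FakeArm()
auto = auto_mode(drive, arm)
for _ in range(100):                  # the "main loop" at some fixed rate
    drive.update()
    try:
        next(auto)
    except StopIteration:
        break
```

Because the profiles themselves live in the controllers, the auto sequence is nothing but this kind of scheduling logic, which is what makes the result repeatable as long as nothing saturates.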

Awesome questions, keep them coming! I love this stuff.
#115 | 15-06-2016, 10:21 | ranlevinstein (FRC #2230 General Angels, Programmer)

Quote:
Originally Posted by AustinSchuh View Post
I've recently switched formulations to what we used this and last year, and I think the new formulation is easier to understand.

If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] [x; voltage_error] + [B; 0] u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
Thank you for your fast reply!

Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/a...1&d=1419983380

Quote:
Originally Posted by AustinSchuh View Post
Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.
I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error

Isn't that PI control plus integral control of the voltage? As far as I know, this controller should have integral windup. What am I missing?

Quote:
Originally Posted by AustinSchuh View Post
I defined a cost function to minimize the error between the trajectory and a feed-forward based goal every cycle (this made the goal feasible), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))

This means that you have 3 goals running around. The un-profiled goal, the profiled goal and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.

The end result was that our model defined the feed-forwards constants, so it was free. We were also able to gain schedule the feed-forwards terms for free.
WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix and you are looking for the u vector that minimizes the expression? How are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = AR(n) + Bu where u is the correct feed forward?

Can you explain how you observed the voltage?

Quote:
Originally Posted by AustinSchuh View Post
We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.
This looks very interesting; why did you choose this method instead of all the other available ones?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor and we are not sure whether to move to model-based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge without a mentor who knows this. Do you have any advice?

Thank you very much!
#116 | 16-06-2016, 02:32 | AustinSchuh (FRC #0971 Spartan Robotics / #254 The Cheesy Poofs, Engineer)

Quote:
Originally Posted by ranlevinstein View Post
Thank you for your fast reply!

Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/a...1&d=1419983380
For this subsystem, yes. More generally, they may diverge, but that's a very good place to start.

Quote:
Originally Posted by ranlevinstein View Post
I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error

Isn't that PI control plus integral control of the voltage? As far as I know, this controller should have integral windup. What am I missing?
It's a little bit trickier to reason about the controller that way than you'd think. The voltage error term is really the error between what you are telling the controller it should be commanding and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped; this should change if you have a profile), it will be the difference between the estimated plant u and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.

The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.

You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate: I could think about it more easily.

Quote:
Originally Posted by ranlevinstein View Post
WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix and you are looking for the u vector that minimizes the expression? How are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = AR(n) + Bu where u is the correct feed forward?
Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs perfectly before trying it on a bot.

That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.

Quote:
Originally Posted by ranlevinstein View Post
Can you explain how did you observe the voltage?
You can mathematically prove that the observer can observe the voltage as long as you tune it correctly. This is called observability, and can be calculated from some matrix products given A and C. For most controls people, that is enough of an explanation.

Intuitively, you can think of the observer estimating where the next sensor reading should be, measuring what it got, and then attributing the error to some amount of error in each state. So, if the position is always reading higher than expected, it will slowly squeeze the error into the voltage error term, where it will finally influence the model to not always read high anymore. You'll then have a pretty good estimate of the voltage required to do what you are currently doing.
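The observability check can be done numerically: stack [C; CA; CA^2; ...] and look at its rank. The plant numbers below are illustrative, with the voltage error as the last state:

```python
import numpy as np

# Illustrative discrete-time plant; the third state is the voltage error.
A = np.array([[1.0, 0.005, 0.0],
              [0.0, 0.95,  0.5],
              [0.0, 0.0,   1.0]])
C = np.array([[1.0, 0.0, 0.0]])   # only position is measured

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1)."""
    rows = [C]
    for _ in range(A.shape[0] - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

O = observability_matrix(A, C)
rank = np.linalg.matrix_rank(O)
# Full rank (3 here) means every state, including the voltage error,
# can be reconstructed from the position measurements alone.
```

This is the formal version of the intuition above: position errors propagate through A into the voltage error row of the observability matrix, so the observer can squeeze persistent measurement residuals into that state.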

Quote:
Originally Posted by ranlevinstein View Post
This looks very interesting; why did you choose this method instead of all the other available ones?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor and we are not sure whether to move to model-based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge without a mentor who knows this. Do you have any advice?

Thank you very much!
The tricky part of the math is that robots can't move sideways. This type of system is known as a non-holonomic system. Good non-holonomic control is an open research topic, since the system is nonlinear. This paper was recommended to me by Jared Russell, and the results in its results section are actually really good. It generates provably stable paths that are feasible. A-Star and the other graph based path planning algorithms struggle to generate feasible paths.

We have a small number of students who get really involved in the controls on 971. Some years are better than others, but that's how this type of thing goes. There is a lot of software on a robot that isn't controls. I'm going to see if the paper actually gets good results, and then work to involve students to see if we can fix some of the shortcomings in the model that one of the paper's examples uses. I think that will let me simplify the concept somewhat for them and get them playing around with the algorithm. I've yet to try to involve students in something this mathematically challenging, so I'll know more once I've pulled it off... I mostly mention the paper as something fun that you can do with controls in this context.

When I get the proper amount of interest and commitment, I sit down with the student and spend a significant amount of time teaching them how and why state space controllers work. I like to do it by rederiving some of the math to help demystify it, and having them work through examples. I've had students take that knowledge pretty far and do some pretty cool things with it. Teaching someone something this tricky is a lot of work. We tend to have about 1 student every year actually take the time to succeed. Sometimes more, sometimes less.

Doing model-based controls without good help can be tricky. Most of the time, I honestly recommend focusing on writing test cases that run simpler controllers (PID, for example) against simulations before you start looking at model-based controls. This gets you ready for what you need to do for more complicated controllers, and if you were to stop there having learned dependency injection and testing, that would already be an enormous success. The issue is that most of this material is upper-division college level, and sometimes graduate level. Take a subsystem on your robot and try to write a model-based controller for it over the off-season.
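The advice above can be made concrete with a very small simulation test. Everything here (plant constants, gains) is hypothetical; the point is the pattern of testing a simple controller against a model:

```python
# Minimal sketch: unit-test a simple P controller against a simulated
# flywheel before attempting model-based control. All constants are
# hypothetical.

class SimFlywheel:
    """First-order velocity model: dv/dt = -a*v + b*voltage."""
    def __init__(self, a=2.0, b=20.0, dt=0.005):
        self.a, self.b, self.dt = a, b, dt
        self.velocity = 0.0

    def step(self, voltage):
        voltage = max(-12.0, min(12.0, voltage))   # battery limit
        dv = (-self.a * self.velocity + self.b * voltage) * self.dt
        self.velocity += dv

def run_p_controller(goal, kp=0.5, cycles=2000):
    plant = SimFlywheel()
    for _ in range(cycles):
        u = kp * (goal - plant.velocity)
        plant.step(u)
    return plant.velocity

# A "test case": does the controller settle near the goal?
final = run_p_controller(goal=50.0)
assert abs(final - 50.0) < 10.0, final
print(round(final, 1))  # ~41.7: settles, but with steady-state error
```

Note the steady-state error the pure proportional controller leaves behind; that is exactly the kind of behavior a simulation test catches before it shows up on the real robot.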
  #117
Unread 16-06-2016, 04:58
ranlevinstein
Registered User
FRC #2230 (General Angels)
Team Role: Programmer
 
Join Date: Oct 2015
Rookie Year: 2014
Location: Israel
Posts: 9
Re: FRC971 Spartan Robotics 2016 Release Video

Quote:
Originally Posted by AustinSchuh View Post
If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".

d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u

You then design the controller to control the original Ax + Bu (u = K * (R - X)), design an observer to observe the augmented X, and then use a controller which is really [K, 1].

We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
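A small numpy sketch of the augmentation described in the quote above. The plant here is a hypothetical one-state placeholder, and the gain K is made up:

```python
# Sketch of the voltage-error augmentation: start from a hypothetical
# one-state plant dx/dt = A x + B u and build the augmented matrices.
import numpy as np

A = np.array([[-3.0]])   # hypothetical plant
B = np.array([[15.0]])

n, m = A.shape[0], B.shape[1]

# Augmented plant:
#   d[x; voltage_error]/dt = [A, B; 0, 0] [x; voltage_error] + [B; 0] u
A_aug = np.block([[A, B],
                  [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([B, np.zeros((m, m))])

# Design K against the original (A, B), observe the augmented state,
# then feed the estimated voltage error straight through: K_aug = [K, 1].
K = np.array([[0.4]])                 # hypothetical gain
K_aug = np.hstack([K, np.eye(m)])

print(A_aug)
print(K_aug)
```

Note that in this formulation u stays in volts; the voltage-error state simply rides along in the model and gets added to the command.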
I modeled it as you said and I got that:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.
I am a bit confused about why this is true, because the voltage error is in volts and u is in volts/second, so you are adding numbers with different units.

Quote:
Originally Posted by AustinSchuh View Post
It's a little bit more tricky to reason about the controller that way than you think. The voltage error term is really the error between what you are telling the controller it should be commanding, and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped, this should change if you have a profile), it will be the difference between the estimated plant u, and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.

The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.

You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate: I could think about it more easily.
I am still having some problems understanding it. If the system is behaving just like it should, then the integral of the voltage error will be zero, and then there is just a PI controller. In my mind it makes a lot more sense to have:
u = constant * position error + constant * velocity error + constant * integral of voltage error
Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of the position error in your controller.
What am I missing?

Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?

Quote:
Originally Posted by AustinSchuh View Post
Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs well before trying it on a bot.

That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.
I am having some problems taking the derivative of the expression when I leave all the matrices as parameters. How did you do it? Did you get a parametric solution?

I was also wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.

Thank you so much! Your answers helped my team and me a lot!
  #118
Unread 16-06-2016, 11:26
Mike Schreiber
Registered User
FRC #0067 (The HOT Team)
Team Role: Mentor
 
Join Date: Dec 2006
Rookie Year: 2006
Location: Milford, Michigan
Posts: 482
Re: FRC971 Spartan Robotics 2016 Release Video

Quote:
Originally Posted by AustinSchuh View Post
For the intake, we've gotten really good at timing belt reductions, and the single reduction from there would have been required anyway, since we needed to power the gearbox from the middle of the shaft. The VP wouldn't have actually made it much simpler.


.....


We've been running timing belt reductions as the first stage since 2013, and have really liked it. They are much quieter, and we don't see wear.
Is there anything special you're doing that I'm not seeing? It looks like calculated center-to-center distances with no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from over-sized hex, what else are you doing to remove lash from the system?

Awesome robot - as always.
__________________
Mike Schreiber

Kettering University ('09-'13) University of Michigan ('14-'18?)
FLL ('01-'02), FRC Team 27 ('06-'09), Team 397 ('10), Team 3450/314 ('11), Team 67 ('14-'??)
  #119
Unread 16-06-2016, 15:33
ranlevinstein
Registered User
FRC #2230 (General Angels)
Team Role: Programmer
 
Join Date: Oct 2015
Rookie Year: 2014
Location: Israel
Posts: 9
Re: FRC971 Spartan Robotics 2016 Release Video

Quote:
Originally Posted by AustinSchuh View Post
I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasible), and used that to define a Kff.

The equation to minimize is:

(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
I managed to solve for u assuming Q is symmetric and the trajectory is feasible. I got:
u = (B^T *Q*B)^-1 * (r(n+1)^T - r(n)^T * A^T)*Q*B
Is that correct?

Last edited by ranlevinstein : 16-06-2016 at 16:16. Reason: Forgot to put transpose in the beginning of the expression.
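For reference, minimizing a quadratic of the form quoted above is a weighted least-squares problem with the closed-form solution U = (B^T Q B)^-1 B^T Q (R(n+1) - A R(n)), which is easy to check numerically. The matrices below are random placeholders:

```python
# Sketch: check the minimizer of (B U - d)^T Q (B U - d),
# with d = R(n+1) - A R(n). All matrices are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 1))
Q = np.diag([2.0, 5.0])          # symmetric positive definite weight
r_now = rng.normal(size=(2, 1))
r_next = rng.normal(size=(2, 1))

d = r_next - A @ r_now
# Setting the gradient 2 B^T Q (B U - d) to zero gives:
U = np.linalg.solve(B.T @ Q @ B, B.T @ Q @ d)

def cost(u):
    e = B @ u - d
    return float(e.T @ Q @ e)

# Nudging U in either direction only increases the cost.
assert cost(U) <= cost(U + 0.01) and cost(U) <= cost(U - 0.01)
print(U.shape)  # (1, 1)
```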
  #120
Unread 16-06-2016, 22:15
Travis Schuh
Registered User
FRC #0971 (Spartan Robotics)
Team Role: Engineer
 
Join Date: Dec 2006
Rookie Year: 1999
Location: Los Altos, CA
Posts: 123
Re: FRC971 Spartan Robotics 2016 Release Video

Quote:
Originally Posted by Mike Schreiber View Post
Is there anything special you're doing that I'm not seeing? Looks like math center-to-center distances - no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from over-sized hex, what else are you doing to remove lash from the system?

Awesome robot - as always.
We don't do any tensioning on our first-stage belts; it is what you have described. I don't think there are huge backlash savings from having a belt vs. a gear drive on that stage, because the backlash at that stage is greatly reduced as you go through the reduction, and the tooth-to-tooth backlash is minimal. There is also the added benefit of belts being quieter than gears at these speeds, but that is more of a nice-to-have.

Most of our backlash reduction comes from eliminating hex backlash. We also do the standard trick of running as large a chain reduction as you can on the last stage, and then keeping that chain well tensioned. Going forward, we are going to use #35 chain whenever we can for these reductions to avoid stiffness issues, which also helps with the controls.