FRC971 Spartan Robotics 2016 Release Video
https://lh3.googleusercontent.com/-85wAm5lN2Y0/VvFsueyQBCI/AAAAAAAAeio/l0xaJQ_JGOUVERnMPdtEdJmny09UnWI5gCCo/s576-Ic42/robot%2Bposed%2B%25283%2529.jpg
Team 971 proudly presents our 2016 robot, which will be competing at the Sacramento and Silicon Valley regionals in California. Special thanks to everyone who made this robot possible!
And without further ado, here is a link to our release video: https://youtu.be/CMX4ynSQsyI
See you at the competition!
jajabinx124
22-03-2016, 22:07
I like the steering wheel! Did your driver just prefer to control the robot that way? Quite interesting.
Looks impressive as always, 971. Good luck at your regionals, and I hope to see you guys at champs.
Cory Walters
22-03-2016, 22:10
Wow! That shooter arm springing out is amazing!
BrendanB
22-03-2016, 22:14
I love this robot!
Those linkages did not disappoint. You guys always have some of the coolest machines on the field!
For some reason, when this unfolds, I get somewhat of a "King Cobra" vibe from it.
Anupam Goli
22-03-2016, 22:21
This has to be one of the craziest robots I've seen. I really can't wait to see it in action.
For some reason, when this unfolds, I get somewhat of a "King Cobra" vibe from it.
That reminds me: what's this robot's name?
"King Cobra" would be great.:D
JohnFogarty
22-03-2016, 22:38
Is that ITX motherboard a camera processing computer? What's it running?
thatprogrammer
22-03-2016, 22:44
WOW... I am simply speechless!
Congrats to all of 971 for making such an AMAZING robot.
I like the steering wheel! Did your driver just prefer to control the robot that way? Quite interesting.
They have used a steering wheel since at least 2014.
Wow, watching that robot move is mesmerizing.
jajabinx124
22-03-2016, 22:57
They have used a steering wheel since at least 2014.
Ohh okay. I didn't know that. Thanks for answering.
Brandon Holley
22-03-2016, 23:01
Absolutely one of the most absurd robots I've ever seen - and I mean that in the absolute BEST way possible.
Way to go 971 - this thing is a work of art.
-Brando
pmangels17
22-03-2016, 23:07
971 is nothing if not elegant, and this year is no exception. Y'all make it look easy with designs this good.
If I may ask, does the shooter "wrist" rotate independently of the arm, or are they mechanically linked?
waialua359
22-03-2016, 23:09
Absolutely one of the most absurd robots I've ever seen - and I mean that in the absolute BEST way possible.
Way to go 971 - this thing is a work of art.
-Brando
Just like last season, with the way they unloaded the capped six-stacks onto the scoring platform.
Always impressively different from the rest.
the.miler
22-03-2016, 23:19
Can we get a name for this monster? So much pivot.
If I may ask, does the shooter "wrist" rotate independently of the arm, or are they mechanically linked?
The wrist and the arm rotate independently. They each have their own gearboxes which I believe you can see in the picture from the original post.
kevincrispie
22-03-2016, 23:32
I like the steering wheel! Did your driver just prefer to control the robot that way? Quite interesting.
Looks impressive as always, 971. Good luck at your regionals, and I hope to see you guys at champs.
Many of the team members are likely busy practicing for Davis, so I'll try to answer a couple of the other questions; some students, or Austin and Travis, can correct me where appropriate.
The team has been using the steering wheel setup since at least 2009. The team has found that our drivers like this setup and it provides excellent maneuverability and is very intuitive. Having a steering wheel with good software allows a driver to very easily make constant radius arced turns. Many drivers who use two-joystick tank drive, especially beginners, tend to have a jerky and less efficient path to certain points around the field. It can be pretty easy to tell what system people use just by watching them drive. With practice, we think that the driver station setup gives us great handling on the robot and many advantages.
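To illustrate the constant radius idea, here is a rough sketch with made-up gains (not the team's actual code). The key is that the wheel commands curvature rather than turn rate, so a given wheel position traces the same arc at any speed:

# Sketch of wheel -> constant-radius driving (illustrative gains only,
# not the team's actual code). throttle and wheel are both in [-1, 1].
def arc_drive(throttle, wheel, max_speed=3.5, track_width=0.66):
    velocity = throttle * max_speed      # m/s along the arc
    curvature = wheel * 2.0              # 1/m; the wheel commands curvature
    omega = velocity * curvature         # rad/s; yaw rate scales with speed
    left = velocity - omega * track_width / 2.0
    right = velocity + omega * track_width / 2.0
    return left, right

# Same wheel position at two different speeds -> the same radius arc:
print(arc_drive(0.5, 0.3))   # (~1.40, ~2.10) m/s, radius 1/0.6 m
print(arc_drive(1.0, 0.3))   # (~2.81, ~4.19) m/s, same radius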
Great job Comran making a fantastic reveal/promo video!
Can we get a name for this monster? So much pivot.
The name is still a work in progress...
Chris Endres
22-03-2016, 23:35
Dang, truly another beautiful machine from 971.
s_forbes
22-03-2016, 23:35
You guys are absolutely insane! By far my favorite design this season. Just like last season... and the one before that...
The team has found that our drivers like this setup and it provides excellent maneuverability and is very intuitive.
I was operator last year for the team, and I believe one of the biggest advantages is that it's very similar to driving a car, so people with driving experience can pick up the controls easily. Sort of ironic, because our driver for the past few years ran everywhere instead of driving, but he was a great robot driver.
AustinSchuh
22-03-2016, 23:51
The team has been using the steering wheel setup since at least 2009.
If I remember right, we started using a wheel in 2006, and haven't looked back since.
Our tentative name for the robot is Geoffrey. That may change, but it seems to be the consensus.
The board is a Jetson TK1.
The shooter wrist and shoulder joints are mechanically independent, but the software ends up more complicated that way. Motions in one now affect the other, which causes no end of headaches... It wouldn't be a proper 971 robot if we didn't fix it in software :)
We are looking forward to seeing everyone at the Sacramento Regional, Silicon Valley Regional, and hopefully champs!
Looks sick! It's really come along since I saw the frame sitting on your guys' table being put together. :)
aphelps231
23-03-2016, 02:46
Undoubtedly my new favorite robot. That thing is on a new level of cool.
May I ask what kind of mechanism you're using to push the ball into the shooter wheels?
AdamHeard
23-03-2016, 02:47
Amazing as always.
I count 5 control loops minimum to put a ball in the goal?
Jean Tenca
23-03-2016, 02:55
Wow that's an awesome design! One of the most unique robots this season.
Absolutely one of the most absurd robots I've ever seen - and I mean that in the absolute BEST way possible.
Way to go 971 - this thing is a work of art.
-Brando
Thought the same exact thing. This is a truly ridiculous robot and I love it.
Sperkowsky
23-03-2016, 07:22
Is that a radio mounted to your moving arm with electrical tape?
This is the most ingenious and inspiring design I've seen yet for this year's competition. Absolutely stunning. This is definitely a machine I'm gonna seek out for an up close look in St. Louis. Amazing job, guys!
Is that a radio mounted to your moving arm with electrical tape?
Haha, yes! (Gaffers tape though)
It was really the only place we found that was not covered by metal on 5/6 of the sides.
AustinSchuh
23-03-2016, 11:15
I count 5 control loops minimum to put a ball in the goal?
Depends on how you count them :P
The same loop runs on the left and right side of the shooter. They are only coupled in software. The control loop to run the shoulder and shooter angle is a 6 state MIMO (multiple input, multiple output) controller. The intake loop is separate. Then, there is the MIMO drivetrain loop being fed with angles by the camera.
So, yea, 5.
May I ask what kind of mechanism you're using to push the ball into the shooter wheels?
A pair of connected linkages (driven by miniature pistons).
Amit3339
23-03-2016, 11:34
Amazing robot as always!
A few questions about your auto-aiming: are you guys using two cameras to aim? Can you explain how it works? How much time (on average) does it take to lock onto the target?
Thanks in advance!
Ok so I thought last year's 971 robot was unique and yet very competitive... This year continues that trend... Can't wait to see this up and running tomorrow at Sac/Davis!
They just make the most robotic robots. :p
Depends on how you count them :P
The same loop runs on the left and right side of the shooter. They are only coupled in software. The control loop to run the shoulder and shooter angle is a 6 state MIMO (multiple input, multiple output) controller. The intake loop is separate. Then, there is the MIMO drivetrain loop being fed with angles by the camera.
So, yea, 5.
A pair of connected linkages (driven by miniature pistons).
The first 4 states are obvious: main arm position and speed, shooter mini-arm position and speed. What are the other two? How'd you go about tuning that? With that level of complexity, I imagine you had to go to model-based control?
josesantos
23-03-2016, 18:07
The first 4 states are obvious: main arm position and speed, shooter mini-arm position and speed. What are the other two? How'd you go about tuning that? With that level of complexity, I imagine you had to go to model-based control?
I'd guess they're also controlling acceleration for both arms. In 2014, they described their control system hardware and software in this thread (http://www.chiefdelphi.com/forums/showthread.php?t=129574&highlight=971+control+system).
971's made a really cool robot as usual, I'm hoping to see it in person soon!
How much of this robot was made on your in-house CNC router? Great-looking robot, by the way.
I'd guess they're also controlling acceleration for both arms. In 2014, they described their control system hardware and software in this thread (http://www.chiefdelphi.com/forums/showthread.php?t=129574&highlight=971+control+system).
I remember reading that thread now, totally forgot it existed. Thanks for the reminder. Would be fun to do this stuff on an FRC robot if we ever found the time...
AustinSchuh
27-03-2016, 01:23
The first 4 states are obvious: main arm position and speed, shooter mini-arm position and speed. What are the other two? How'd you go about tuning that? With that level of complexity, I imagine you had to go to model-based control?
Model-based control is required :) Once you get the hang of it, it lets us do cooler stuff than non-model-based controls. We plot things and try to figure out which terms have errors in them to help debug.
The states are:
[shoulder position, shoulder velocity, shooter position (relative to the base), shooter velocity (relative to the base), shoulder voltage error, shooter voltage error]
The shooter is connected to the superstructure, but there is a coordinate transformation to have the states be relative to the ground. This gives us better control over what we actually care about.
The voltage errors are what we use instead of integral control. This lets the Kalman filter learn the difference between what the motor is being asked to do and what is actually achieved, and lets us compensate for it. If you work the math out, volts -> force.
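For anyone who wants to play with the idea, here is a minimal sketch of augmenting a plant with a voltage error state (illustrative numbers, not our production models):

import numpy as np

# Plant: x = [position; velocity], input u in volts (made-up discrete model).
dt = 0.005
A = np.array([[1.0, dt],
              [0.0, 0.98]])
B = np.array([[0.0],
              [0.02]])

# Augment with a constant "voltage error" state e that enters like u does:
#   x' = A x + B (u + e),  e' = e
A_aug = np.block([[A, B],
                  [np.zeros((1, 2)), np.eye(1)]])
B_aug = np.vstack((B, [[0.0]]))
C_aug = np.array([[1.0, 0.0, 0.0]])   # we measure position only

def kalman_step(x_hat, u, y, L):
    # One predict/correct cycle; L is a steady-state Kalman gain (3x1).
    x_hat = A_aug @ x_hat + B_aug * u            # predict
    return x_hat + L @ (y - C_aug @ x_hat)       # correct

# x_hat[2] converges to the voltage the mechanism "eats" (gravity, friction,
# battery sag); subtract it from the controller output to cancel it.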
Easily one of the coolest robots I have seen yet. Could you expand some on closed loop driving? Also, what is the reasoning behind using two cameras?
AustinSchuh
27-03-2016, 14:58
Could you expand some on closed loop driving? Also, what is the reasoning behind using two cameras?
We are using the gyro and encoders to do what essentially boils down to a proportional velocity controller on top of feed-forwards. In a year where the tire dynamics play such a big part in how the robot responds, having a little bit of feedback to help the driver deal with the fast dynamics of a bouncing robot helps a lot.
We are currently just averaging the angles from the two cameras, but when we get a bit more time, we are going to work on using the pair of cameras to do stereo distance measurement. We have a proof of concept working on a laptop, but haven't made it work reliably yet on a robot.
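If you want the flavor of it, the teleop side is conceptually something like this (made-up constants; the real logic knows the full robot physics):

# Feed-forwards plus a small proportional velocity correction, per side.
def wheel_voltage(v_goal, v_measured, kv=2.0, kp=1.5):
    feed_forward = kv * v_goal               # voltage to hold v_goal steady
    correction = kp * (v_goal - v_measured)  # fights bounce/disturbances
    return max(-12.0, min(12.0, feed_forward + correction))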
thatprogrammer
27-03-2016, 15:02
We are using the gyro and encoders to do what essentially boils down to a proportional velocity controller on top of feed-forwards. In a year where the tire dynamics play such a big part in how the robot responds, having a little bit of feedback to help the driver deal with the fast dynamics of a bouncing robot helps a lot.
Are you using this controller to help drive straight on top of obstacles only, or whenever you drive?
Also, what RPM and ball compression are you running? You shoot the balls out like bullets!
AustinSchuh
27-03-2016, 17:56
Are you using this controller to help drive straight on top of obstacles only, or whenever you drive?
We use that controller for all teleop control. We use a more complicated controller for autonomous trajectory following. There are many corner cases in teleop driving that are hard to design for. A simple velocity controller has proved to work really well without really having any corner cases.
Also, what RPM and ball compression are you running? You shoot the balls out like bullets!
Sorry, you'll have to wait until after the season to get an answer on that one. We are happy to share, but some things took a lot of prototyping and design to figure out and are too easy to reproduce.
nuclearnerd
27-03-2016, 18:35
It looks like you have a 775 pro on the "elbow" gearbox, and maybe the shoulder too? No spring to balance the load either. If so, how are you holding position without burning out the motor? Is there a brake I can't see?
(It's my understanding that the 775 pros will melt in just a few seconds when stalled at more than 6V)
AustinSchuh
27-03-2016, 20:41
It looks like you have a 775 pro on the "elbow" gearbox, and maybe the shoulder too? No spring to balance the load either. If so, how are you holding position without burning out the motor? Is there a brake I can't see?
We did some pretty extensive calculations on heat dissipation. The shoulder is designed to hold the entire weight of the arm for a couple minutes per Vex Pro's charts. We designed for somewhere around 4 volts. We also run a fan on the motor to pull heat away, though that helps with longer runs and only pushes the limit out a little ways.
We don't leave the arm up for long periods, and land it on the bellypan when we are done shooting. This helps keep it cooler.
The rest of the joints run cool and aren't a concern due to the low loads.
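The rough math is easy to reproduce. Back-of-envelope, using the published 775pro stall spec (not the team's actual spreadsheet):

# A stalled motor turns all input power into heat: P = V^2 / R.
STALL_CURRENT = 134.0                  # amps at 12 V (775pro spec sheet)
R_MOTOR = 12.0 / STALL_CURRENT         # ~0.09 ohms

for v_hold in (3.0, 4.0, 6.0):
    current = v_hold / R_MOTOR
    watts = v_hold * current
    print(f"{v_hold:.0f} V hold: {current:.0f} A, {watts:.0f} W")
# 3 V -> ~100 W, 4 V -> ~180 W, 6 V -> ~400 W. You can see why a fan helps,
# and why landing the arm on the bellypan between shots matters.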
Tom Bottiglieri
28-03-2016, 21:10
This robot is so fun to watch. Great job!
This is one of the most thoroughly engineered robots I have seen yet.
Jus_McG-3193
29-03-2016, 10:14
Every year that I see a new 971 robot, it impresses me tremendously. In particular, there always seems to be a mechanism that blows me away. This year it's the pivoting of the shooter and arm assembly specifically, although I'm positive there's more to amaze.
Congratulations on your impressive win at Sacramento and best of luck to you all at SVR as well as Champs!
rvgrossman
12-05-2016, 14:15
Are you guys going to make the CAD available anytime soon? We're dying out here.
markmcgary
12-05-2016, 14:28
Are you guys going to make the CAD available anytime soon? We're dying out here.
And software would be nice as well. We are very inspired by your 'closed-loop driving' and would love to learn more.
kevincrispie
12-05-2016, 17:08
There are a number of resources in the works that the team plans on releasing. We need a bit of time to get everything together. However, we have released pictures from the 2016 build season. This includes competition photos, prototyping/fabrication photos, and close ups of the robot. Hopefully this can provide some guidance while the team works to get our software and mechanical documentation ready for public viewing.
The pictures can be viewed on the team Picasa page, in folders marked "2016".
https://picasaweb.google.com/117769834305511597729/
Landonh12
12-05-2016, 20:22
There are a number of resources in the works that the team plans on releasing. We need a bit of time to get everything together. However, we have released pictures from the 2016 build season. This includes competition photos, prototyping/fabrication photos, and close ups of the robot. Hopefully this can provide some guidance while the team works to get our software and mechanical documentation ready for public viewing.
The pictures can be viewed on the team Picasa page, in folders marked "2016".
https://picasaweb.google.com/117769834305511597729/
Is there a resource showing how you guys manage your code and how you deploy it to the robot? Also what editor do you use? I've been trying to learn how to code our robot in Java/C++ and don't like using Windows/Eclipse. IIRC I saw a page on your website saying you use Debian and SVN.
AustinSchuh
12-05-2016, 21:37
Is there a resource showing how you guys manage your code and how you deploy it to the robot? Also what editor do you use? I've been trying to learn how to code our robot in Java/C++ and don't like using Windows/Eclipse. IIRC I saw a page on your website saying you use Debian and SVN.
There's a README in the top level folder of the code we'll release shortly that explains all that. We haven't met since CMP, so we haven't finished reviewing and merging the last couple changes made there, which means we can't release said code yet...
We use Bazel (http://bazel.io/) to build code, Debian Linux as our development platform, Git, and Gerrit. Bazel lets us cross compile easily and provides a hermetic environment so we are very certain that everyone on the team is building the same bits. Officially, we don't take a position in the editor wars, but most of the students use what the mentors use, which is VIM and bash.
thatprogrammer
12-05-2016, 21:48
It was awesome getting to see this robot in person and getting to talk to team members about it! Unfortunately you were all quite busy the few times I visited your pit, so I couldn't ask too many questions.
Sorry, you'll have to wait until after the season to get an answer on that one. We are happy to share, but some things took a lot of prototyping and design to figure out and are too easy to reproduce.
Now that the season is over, would you be willing to provide some more details on your shooter? :]
Landonh12
12-05-2016, 22:14
Officially, we don't take a position in the editor wars, but most of the students use what the mentors use, which is VIM and bash.
I've been using VIM and bash on my Mac. I like using it, but I usually go between that and making sure that Eclipse likes what I'm writing. Thanks for that bit, I'm looking forward to the full code release!
Travis Schuh
14-05-2016, 14:03
Our CAD is now available at http://frc971.org/cad
It takes the server a little over 10 seconds to start the download once you click the link so please be patient and don't hit the link twice.
Sorocan534
14-05-2016, 14:26
Our CAD is now available at http://frc971.org/cad
It takes the server a little over 10 seconds to start the download once you click the link so please be patient and don't hit the link twice.
You guys are the best for providing this, can't wait to take a look at it!
Our CAD is now available at http://frc971.org/cad
It takes the server a little over 10 seconds to start the download once you click the link so please be patient and don't hit the link twice.
We're having trouble importing the .iges file to both Solidworks and Inventor. Is there a step file or something similar we could use?
Travis Schuh
14-05-2016, 17:54
We're having trouble importing the .iges file to both Solidworks and Inventor. Is there a step file or something similar we could use?
We had a mixup when putting the file on the server and got the wrong extension on it. The file was saved as STEP; you should be able to open it fine if you rename the extension to .step. We are working on getting the version on the server fixed. Sorry for the inconvenience.
We had a mixup when putting the file on the server and got the wrong extension on it. The file was saved as STEP; you should be able to open it fine if you rename the extension to .step. We are working on getting the version on the server fixed. Sorry for the inconvenience.
We just got it working, thanks. :P
MichaelSchuh
14-05-2016, 19:57
We had a mixup when putting the file on the server and got the wrong extension on it. The file was saved as STEP; you should be able to open it fine if you rename the extension to .step. We are working on getting the version on the server fixed. Sorry for the inconvenience.
The CAD now downloads as a STEP file. Sorry about the mistake.
Michael
thatprogrammer
14-05-2016, 21:02
Wow, there are a lot of details about this robot that I hadn't noted previously! I have a lot of questions based on the CAD I looked at, but I'll post the main ones here:
What is the purpose of the sideways-mounted 775 pro gearbox near the gearbox that actuates the intake up and down? I see it appears to drive some sort of bar near the drivetrain.
How did you achieve such smooth motion on the up and down movement of the intake despite it only being powered on one side?
Thanks for releasing your CAD! The level of detail and compactness found on this robot has left me speechless.
Travis Schuh
14-05-2016, 21:43
Wow, there are a lot of details about this robot that I hadn't noted previously! I have a lot of questions based on the CAD I looked at, but I'll post the main ones here:
What is the purpose of the sideways-mounted 775 pro gearbox near the gearbox that actuates the intake up and down? I see it appears to drive some sort of bar near the drivetrain.
How did you achieve such smooth motion on the up and down movement of the intake despite it only being powered on one side?
Thanks for releasing your CAD! The level of detail and compactness found on this robot has left me speechless.
That gearbox is our hanger winch. One end of the string went there; the other was tied off on the other side. That way we didn't have to have a winch gearbox/line centered on the robot, and we got a 2:1 reduction out of the pulley.
The 2" tube welded solidly to the 1x2 in the intake adds a lot of stiffness. Also, we motion profile the intake, which reduces the torsional load across it.
thatprogrammer
14-05-2016, 21:58
That gearbox is our hanger winch. One end of the string went there; the other was tied off on the other side. That way we didn't have to have a winch gearbox/line centered on the robot, and we got a 2:1 reduction out of the pulley.
The 2" tube welded solidly to the 1x2 in the intake adds a lot of stiffness. Also, we motion profile the intake, which reduces the torsional load across it.
Thanks for the answers!
One additional question: What prompted you to switch to chain drive this year?
Travis Schuh
14-05-2016, 22:14
Thanks for the answers!
One additional question: What prompted you to switch to chain drive this year?
The 5mm GT2 belts and #25 chain have comparable load ratings. To use a belt in the same application would require a huge sprocket, which would be heavy, wide, and require a lot of custom work. Much easier to buy stuff that is designed to bolt together.
Michael Hill
15-05-2016, 00:31
I love the use of an actual ratcheting wrench... and the billion 775pros.
Also, I'm curious...what is the Encoder Shielding Chassis Ground Assembly for?
Finally, what went into the design decisions of making your own gearboxes versus using something like the VersaPlanetary on some subsystems (like your intake)?
Thank you guys for releasing everything! Hopefully we can take a look at all of it and learn from the amazing things you guys do.
The pictures can be viewed on the team Picasa page, in folders marked "2016".
https://picasaweb.google.com/117769834305511597729/
Also, happy to see you guys got some photos of when our team visited (https://picasaweb.google.com/117769834305511597729/6251736619347043105#6251736889606003410)!
AustinSchuh
15-05-2016, 00:53
Also, I'm curious...what is the Encoder Shielding Chassis Ground Assembly for?
In 2014, we had a lot of trouble with EMI and cross-talk on the robot. Since then, we've spent a lot of time and energy worrying about EMI. We run a 6 pin shielded cable for our encoder and pot combo (ground, power, index pin, A, B, and pot value). To properly terminate the shield, you need to ground it to your frame in such a way that the EMI actually makes it to the frame. That assembly does that.
We had a number of pinched wires this year, so we'll be looking at revisiting that design. You'll see something like that on our robot next year though.
Finally, what went into the design decisions of making your own gearboxes versus using something like the VersaPlanetary on some subsystems (like your intake)?
Aren keeps trying to get us to run VP's in places.
We've designed enough custom gearboxes that they aren't risky, and we can generally get exactly what we want when we design our own.
For the intake, we've gotten really good at timing belt reductions, and the single reduction from there would have been required anyway, since we needed to power the gearbox from the middle of the shaft. The VP wouldn't have actually made it much simpler.
For the arm (intake up/down, shooter angle, and shoulder angle), we wanted to control the backlash. The backlash at the end of the arm is about 1/16th of an inch, which is phenomenal.
We were actually really trying to use a VP for our climber, but we couldn't get the packaging to work :(
We've been running timing belt reductions as the first stage since 2013, and have really liked it. They are much quieter, and we don't see wear.
aphelps231
16-05-2016, 11:00
Would someone be willing to speak a bit on how power is managed and brownouts are prevented on this robot? Maybe it's a non-issue, but with the amount of motors on the robot and the large forces they (seemingly) must endure, it seems some kind of smart power management would be necessary.
Maybe the answer to this is related to that of the above, but what drove the decision behind which reductions and how many motors to put on each gearbox?
This bot has truly gotten me excited to play with big machines and improve my CAD skills in college. I'm shamelessly jealous of how much 971's students must get to learn about machining and design before they even graduate high school. :yikes:
Max Boord
16-05-2016, 11:49
Any plans to release the CAD of the first-generation intake? It had some cool sheet metal features I would like to look at.
Also, what performance gains (if any) were found by switching the drive and intake wheels for worlds? I had heard that they were switched for weight, but the intake appeared to be a little faster at centering the ball as well.
I too am very interested in the brownout management.
I noticed looking through the pictures that you started out with a significantly different collector. Judging by the complexity and completeness of the robot, it looks like a pretty late switch to the mecanum-style collector. If you could share your reasons and methods, I would love to hear them, and I think a lot of teams could learn from it. I know I watched videos of your 2012 collector several dozen times a few years ago. Still one of the faster collector/sorter/columnizer mechanisms I have watched.
AustinSchuh
17-05-2016, 00:04
I noticed looking through the pictures that you started out with a significantly different collector. Judging by the complexity and completeness of the robot, it looks like a pretty late switch to the mecanum-style collector. If you could share your reasons and methods, I would love to hear them, and I think a lot of teams could learn from it. I know I watched videos of your 2012 collector several dozen times a few years ago. Still one of the faster collector/sorter/columnizer mechanisms I have watched.
I'll answer your easy question first and come back to the other ones when I'm not working.
Our initial intake worked well on the bench when prototyped. We tried lifting/lowering the rollers by an inch on the bench, and it seemed to work "ok". When we finally built it, we learned that pneumatic tires bounce even more than we thought, and we would drive up on balls too easily. That was compounded with it being heavy (~16 pounds), and a bit slower than we wanted. The weight was causing the chain to stretch way too fast and even start to yield when we were going over the bumps. (We learned this year to run 35 chain in more spots). It all added up to an intake which wasn't what we wanted.
There was a video floating around somewhere on CD in like week 4 of a mecanum intake which worked, but wasn't as fast as we knew that it could be. That, the sheer simplicity of it, and 118 shipping with one made us switch over immediately after ship. We slapped together a prototype on our practice robot, worked out the ideal roller placement, and then built it. It took us a little bit over a week to pull the whole thing together from design to full implementation. It was during that time that we learned that a traction wheel in the middle would cause it to jam, but an omni wheel in the middle let it center nicely as the ball was coming into the robot. We figured that out by pulling dozens of balls into our prototype and high-speed videoing it while trying to jam it. Unfortunately, this meant that we needed a separate CDF/Portcullis mechanism, since our old intake could open them, which added more mechanisms and complexity.
I think it was very effective and we will consider doing something similar in the future. I liked our 2012 intake slightly more than this one, but I really can't complain. 2012 (which I saw a number of teams do variations of) would have required some crazy folding geometry for going under defenses. There were very few times when we contacted a ball but didn't grab it.
MichaelBick
17-05-2016, 15:33
What's the purpose of having two different encoders in this gearbox:
https://lh3.googleusercontent.com/-RCWSVAGnFkU/VtPsX9hINrI/AAAAAAAAeL4/VIsjIrRR6ykTHJPwWrVfeSuIxVwQQptJgCCo/s800/arm%2Bgearbox%2B-%2Bassembly%2B%252820%2529.jpg
Greg Woelki
17-05-2016, 15:56
Will a 2016 code snapshot be released as in years past?
What's the purpose of having two different encoders in this gearbox
Possibly for some sort of clever backlash management in code, but you should obviously wait to hear from a 971 member.
AdamHeard
17-05-2016, 15:58
What's the purpose of having two different encoders in this gearbox:
https://lh3.googleusercontent.com/-RCWSVAGnFkU/VtPsX9hINrI/AAAAAAAAeL4/VIsjIrRR6ykTHJPwWrVfeSuIxVwQQptJgCCo/s800/arm%2Bgearbox%2B-%2Bassembly%2B%252820%2529.jpg
The one on the left is a potentiometer (and due to its limited number of turns it has to be near the output).
The one on the right is an encoder. I know 971 goes out of their way to eliminate backlash as much as possible (like custom sized hex shafts), so putting it earlier in the reduction yields higher resolution (traded off against backlash, which in this case is minimized).
AustinSchuh
18-05-2016, 01:28
The one on the left is a potentiometer (and due to its limited number of turns it has to be near the output).
The one on the right is an encoder. I know 971 goes out of their way to eliminate backlash as much as possible (like custom sized hex shafts), so putting it earlier in the reduction yields higher resolution (traded off against backlash, which in this case is minimized).
Correct. The backlash on each of the joints is on the order of 1/16" at the end of the long arm. We have found more value in having high resolution for the encoder than having low backlash. Ask me again in a couple years, and we may have a different opinion...
We hook up the index pulse on the encoder so we get a very accurate pulse once per revolution. That pulse triggers DMA to capture the encoder value at that point in time. This lets us figure out the encoder value very accurately. Unfortunately, there will be something like 30 of these pulses through the range of motion on the arm. A slow moving filter estimates which one of these 30 pulses we saw, and uses that to zero within 1 encoder tick.
We started doing this in 2015, and haven't looked back. This takes all the hard work out of zeroing joints with hall effect sensors on our robot. We wrote the class to support this in 2015, and have been steadily improving it and adding features. All we need to do is to have the software exercise the joint until it passes an index pulse, or have a human move it past an index pulse while disabled, and we are then fully calibrated.
mman1506
18-05-2016, 02:03
Correct. The backlash on each of the joints is on the order of 1/16" at the end of the long arm. We have found more value in having high resolution for the encoder than having low backlash. Ask me again in a couple years, and we may have a different opinion...
We hook up the index pulse on the encoder so we get a very accurate pulse once per revolution. That pulse triggers DMA to capture the encoder value at that point in time. This lets us figure out the encoder value very accurately. Unfortunately, there will be something like 30 of these pulses through the range of motion on the arm. A slow moving filter estimates which one of these 30 pulses we saw, and uses that to zero within 1 encoder tick.
We started doing this in 2015, and haven't looked back. This takes all the hard work out of zeroing joints with hall effect sensors on our robot. We wrote the class to support this in 2015, and have been steadily improving it and adding features. All we need to do is to have the software exercise the joint until it passes an index pulse, or have a human move it past an index pulse while disabled, and we are then fully calibrated.
Would you mind ELI5ing? Are you saying that you are able to zero the joints without the use of a limit switch (hall effect switch, etc.), a potentiometer, or a hardstop?
AustinSchuh
18-05-2016, 02:06
Also, what performance gains (if any) were found by switching the drive and intake wheels for worlds? I had heard that they were switched for weight, but the intake appeared to be a little faster at centering the ball as well.
It was all for weight. Any other improvements were purely accidental. Thanks to 254 and 118 for their suggestions! There may have been a software update to speed the intake up (can't remember). We geared the intake and bottom kicker roller for 25+ FPS and then started out by running them at 8 volts (8/12 duty cycle) to take the nicer motor and make a less powerful motor out of it.
Would someone be willing to speak a bit on how power is managed and brownouts are prevented on this robot? Maybe it's a non-issue, but with the amount of motors on the robot and the large forces they (seemingly) must endure, it seems some kind of smart power management would be necessary.
It turned out to be a non-issue. We don't drive with the superstructure up, so most of the motors aren't in use most of the time. The intake is motion profiled, so that effectively current limits it in 99% of the cases, and none of the other motors are really loaded.
If that didn't turn out to be correct, we were ready to fix it in software. We would have current limited each control loop by looking at the velocity of the motor, backing out the BEMF voltage, and using that to limit the current. I think that was on the original plan, but wasn't an issue and got forgotten about.
4 CIMs in the drivetrain helps a lot as well.
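For reference, the current limit we had on the original plan boils down to something like this (a sketch with illustrative constants, not code that shipped on the robot):

# DC motor: i = (u - kv * omega) / R, so bounding u bounds the current.
def limit_voltage(u_requested, omega, kv=0.02, r_motor=0.09, i_max=60.0):
    bemf = kv * omega                    # volts the spinning motor generates
    u_min = bemf - i_max * r_motor       # voltage that would hit -i_max
    u_max = bemf + i_max * r_motor       # voltage that would hit +i_max
    return max(u_min, min(u_max, u_requested))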
Maybe the answer to this is related to that of the above, but what drove the decision behind which reductions and how many motors to put on each gearbox?
Here's our arm calculation spreadsheet. It is the most interesting one in terms of pushing the limits. We really should have either slowed it down or put a second motor on the joint for power dissipation. It was plenty fast and torquey with just 1 motor. We had to motion profile it to prevent the superstructure from setting up an ugly torsional mode when accelerating. 2015 taught us a lot about motion profiling, such that we no longer let any of our superstructure joints run without motion profiles.
https://docs.google.com/spreadsheets/d/1rKUOE-1gmGu8HZ3BIuj7aLaCpsABW5_u3HtRLLSV6LU/edit?usp=sharing
We look at time to make a characteristic move (assuming that the motor is working against gravity but at steady state), and the holding voltage/power required to hold the arm at that set point. VP released awesome charts about motor life at various holding voltages this year. We targeted a peak of 4 volts holding voltage on all our joints. I'm thinking next year we should drop that down to closer to 3 volts since we had to add fans to the shoulder motor, and replace it a couple times. We like to target ~1/2 second max for motions. Too much longer and you end up waiting on the robot too much.
We started out by putting 1 775 pro on each joint and looking at what that was going to mean in terms of power dissipation and speed. The analysis showed that everything was fast enough that way, and we didn't have to look any further.
Over the past couple years (really starting in 2014), we've been working on iterating our designs to remove common and known failure modes and weaknesses. We've focused on trying to not burn out motors, rate belts, chains and gears for the required load, and put weight into places where we see consistent failures. Our goal is to go through a season where we perform to our fullest potential without failure on the field.
One fascinating thing I learned this year is that for some subsystems, you should gain schedule your controller based on whether you are sourcing power from your motor and using that to drive your load, or pulling power out of your load with your motor. This flips the efficiency. If you assume that the efficiency reduces the torque of your motor by ~5% per reduction and hard-code that into your model, it essentially means that when the motor decelerates the load, the torque is reduced. The physics contradicts this. When you decelerate the load, the load is putting power into your motor, and the gearbox has losses during that transaction. The result is that accelerations reduce effective motor torque, and decelerations actually increase effective motor torque.
I bring this up because we had a lot of trouble tuning the arm controller. The only way we could get smooth behavior when both lifting and lowering the arm was to design an "accelerating" controller and a "decelerating" controller and switch between the two depending on whether we were accelerating or decelerating. It was really cool to finally figure that out. I've seen this for probably close to a decade (I remember struggling in 2005 to tune a controller to go up and down nicely), but had never fully gotten to the bottom of the issue or had a good physics explanation for what was wrong.
I'm not sure our switching logic is right, though it was better than no switching. This summer, I want to have someone analyze how we switch between the two controllers and make sure the transition is continuous. I've got summer project ideas to keep the students and myself busy all summer :)
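To make the flip concrete, a toy example (made-up numbers; losses always oppose the direction power is flowing):

def torque_at_load(motor_torque, load_speed, efficiency=0.95):
    driving = motor_torque * load_speed > 0   # power flowing motor -> load?
    if driving:
        return motor_torque * efficiency      # losses shrink your torque
    return motor_torque / efficiency          # losses add to your braking

print(torque_at_load(1.0, 1.0))    # accelerating: 0.95
print(torque_at_load(-1.0, 1.0))   # decelerating: ~-1.05, *more* torque
# Hard-coding a single 0.95 into the model gets the second case backwards,
# which is why one set of gains couldn't be right both up and down.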
(yea, yea, we are working on releasing our code. We just finished the last code reviews, and are working on hosting it correctly. Code quality is important!)
AustinSchuh
18-05-2016, 02:25
Would you mind ELI5ing? Are you saying that you are able to zero the joints without the use of a limit switch (hall effect switch, etc.), a potentiometer, or a hardstop?
I had to go look ELI5 up :P
We use a potentiometer and index pulse to zero each joint. We do not have any limit switches, and we've been known to not put in hard stops. In 2014, one of the hard stops was the cRIO...
Let me try to write out an example with a pretend elevator.
The encoder moves 0.1 meters per revolution. Someone went and calibrated the elevator and told you that at 0.0971 meters, they found an index pulse.
This means that as you lift and lower the elevator, you will see index pulses at 0.0971 meters, 0.1971 meters, 0.2971 meters, 0.3971 meters, 0.4971 meters (I think you see the pattern).
They also calibrate the potentiometer so that it reads out the approximate height. Also, pretend that it has like 0.02 meters of noise in the reading. So, if you are at 0.1 meters, you might see readings of 0.09, 0.1, 0.11, 0.12, 0.08. Welcome to real life. It sucks at times.
So, we initiate a homing procedure by telling the elevator to move 0.2 meters towards the center of the range of travel. The procedure needs to be designed to not break your robot, but move at least 0.1 meters to find an index pulse. As we are moving, we see a pulse. We then go immediately look at the pot, and it reads 0.3100 meters. The closest index pulse is 0.2971 meters, so we now know that whatever the encoder value was at the index pulse, it really should have read 0.2971 meters. So, compute that offset, and you are homed!
DMA is a really cool feature on the FPGA of the roboRIO where you can set up a trigger and cause sensors to be captured. We have configured it to trigger when an index pulse rises, and save the encoders, digital inputs and analog inputs. The FPGA does this within 25 nanoseconds. This lets us record the encoder and pot value at the index pulse.
The fun part comes when the noise on your potentiometer is ~0.05 meters. We see this on our subsystems. If you get unlucky, you might pick the wrong index pulse, and be off by 0.1 meters (!). We can fix this by filtering. The encoder should read what the pot reads, with an offset depending on where the system booted. You can take (pot - encoder) as the "offset" quantity and average that over a long period (2 seconds is what we use). Add that filtered value back to the current encoder value, and, assuming Gaussian noise and all that jazz, you will have removed enough noise to make everything work again.
More concretely, say we are sitting at 0.05 meters. We get the following encoder, pot readings.
Encoder, pot
0.0, 0.00
0.0, 0.10
0.0, 0.06
0.0, 0.02
0.0, 0.08
(I'm a horrible random number generator, FYI)
From averaging it all together, it looks like the pot started out at a bit above 0.05, and the encoder is at 0. Then, we get the following measurement in: encoder, 0.05, pot, 0.16. Before, we would say that this is closer to the 0.1971 value, so we would round there. Now, we would say that since the encoder moved by 0.05, and we think the pot was around 0.05, we are likely at about 0.1. The nearest pulse is 0.0971, so that's the zero we actually saw.
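Putting the whole pretend-elevator example into code (illustrative only, not our actual zeroing class):

INDEX_SPACING = 0.1      # meters of travel per encoder revolution
FIRST_INDEX = 0.0971     # calibrated position of one index pulse

class Zeroer:
    def __init__(self):
        self.offsets = []                 # (pot - encoder) samples

    def update(self, encoder, pot):
        # Average the noisy pot against the clean (but unreferenced)
        # encoder over a long window.
        self.offsets.append(pot - encoder)

    def offset_at_index(self, encoder_at_index):
        # Called with the DMA-latched encoder value at an index pulse.
        avg = sum(self.offsets) / len(self.offsets)
        approx = encoder_at_index + avg   # filtered position estimate
        # Snap to the nearest of the evenly spaced index positions:
        n = round((approx - FIRST_INDEX) / INDEX_SPACING)
        return FIRST_INDEX + n * INDEX_SPACING - encoder_at_index

z = Zeroer()
for enc, pot in [(0.0, 0.0), (0.0, 0.1), (0.0, 0.06),
                 (0.0, 0.02), (0.0, 0.08)]:
    z.update(enc, pot)
# Index pulse seen at encoder 0.05; true position must be 0.0971 m:
print(z.offset_at_index(0.05))   # ~0.0471; add this to all encoder readings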
Brian Selle
18-05-2016, 09:28
We use a potentiometer and index pulse to zero each joint. We do not have any limit switches, and we've been known to not put in hard stops. In 2014, one of the hard stops was the cRIO...
Wish we had done this in 2015. :) Do you do the calibration during disabledPeriodic (drive team cycles the arm), upon first motion, other? Is it done just once, at specified intervals, or continuously?
FarmerJohn
18-05-2016, 14:22
I had to go look ELI5 up :P
I must have been a dumb 5 year old. :yikes:
AustinSchuh
18-05-2016, 14:44
Wish we had done this in 2015. :) Do you do the calibration during disabledPeriodic (drive team cycles the arm), upon first motion, other? Is it done just once, at specified intervals, or continuously?
We've got our own pub-sub framework, so it's hard to map to WPILib concepts. The "human zeroing", where the robot is moved through the range of motion by hand, happens while disabled. The automated zeroing (robot has a sequence of actions that find index pulses) happens if the robot enters enabled mode but is not all zeroed. We rarely use the automated zeroing, but if the robot reboots without warning or someone forgets to zero at startup, we can continue the match.
Chris Mounts
18-05-2016, 20:04
Your CAD model is amazing!
A couple questions:
How many students / mentors are doing CAD?
At what point in build do you start fabrication?
Are your students learning CAD outside of robotics?
AustinSchuh
19-05-2016, 17:12
And software would be nice as well. We are very inspired by your 'closed-loop driving' and would love to learn more.
Have fun! We are happy to answer questions. We've been going with a larger git repo and keeping old robots alive, so you'll see 2014, 2015 and 2016 code. All 3 should work, though there may be some small bits of code rot in the two older robots.
http://frc971.org/content/2016-software
Any plans to release the CAD of the first-generation intake? It had some cool sheet metal features I would like to look at.
Also, what performance gains (if any) were found by switching the drive and intake wheels for worlds? I had heard that they were switched for weight, but the intake appeared to be a little faster at centering the ball as well.
You can check out the first version of the intake on our CAD download page (http://frc971.org/cad); we just posted it.
We originally switched to the VEX EDR mecanums because we needed the weight for the hanger; they ended up being somewhere around a pound or so lighter. The EDRs were a bit grippier; I didn't notice much of a difference in intaking performance between SVR and Champs, and we didn't change our intake speeds. We also switched the drive wheels for weight purposes. We had a few issues with the drive wheels rubbing on our bellypan due to dents in the aluminum from the defenses, but we were also having some of those same problems with the first drive wheels.
Your CAD model is amazing!
A couple questions:
How many students / mentors are doing CAD?
At what point in build do you start fabrication?
Are your students learning CAD outside of robotics?
We have about 6-7 students contributing to CAD (varying greatly by year), about 3 mentors very involved, and about 3 more helping with CAD. For context, we have about 35-40 students total, but only 20 or so coming consistently.
We send out our drivebase for sponsor manufacturing around week 2 and usually get a two-week turnaround on those parts. The rest of the in-house stuff we start after the completion of superstructure CAD, which is ideally the beginning of week 3, but often gets pushed back...
Some of our students choose to participate in our summer third robot project, where a lot of the focus is on developing CAD skills (not sure if that counts as outside robotics, but it isn't during the season); other students have taken intern positions where they use their CAD skills; some are enrolled in our school's engineering class, where they learn basic CAD; and some do projects on their own for fun or specifically to learn SolidWorks.
thatprogrammer
21-05-2016, 00:34
I had some questions about your closed-loop driving as it seems very interesting!
Reading your code, it seems like you dynamically generate motion profiles to your goal (Calculated off of joystick input?)
How do you account for drift? We used a Spartan Board this year and saw something like 2 degrees of drift per match. Knowing Austin, he wouldn't tolerate misalignment by 2 degrees :P
In what file are you actually running your overall robot code? I can't seem to find where you use your 2016 drive classes to actually run the closed-loop drive.
If you are generating a motion profile based on every joystick input (or every 5ms cycle?), how are you not overloading the roboRIO?
Thanks for the help!
Sorry if any of the questions are strange or ignorant, I'm not super familiar with C++ and some of the more advanced syntax and features of it. Following your code has been somewhat difficult because of that. (That said, it is written very well and is pretty well commented.)
You stated earlier that you get 1/16" backlash at the end of the long arm. My calculations put that at around 0.1 degrees of backlash at your shaft. How are you able to achieve such stellar precision using only Vex gears and chain?
thatprogrammer
21-05-2016, 00:47
You stated earlier that you get 1/16" backlash at the end of the long arm. My calculations put that at around 0.1 degrees of backlash at your shaft. How are you able to achieve such stellar precision using only Vex gears and chain?
I have heard a few whispers that you run custom sized shaft in order to help achieve such precision. Is this true?
AustinSchuh
21-05-2016, 01:27
I had some questions about your closed-loop driving as it seems very interesting!
Reading your code, it seems like you dynamically generate motion profiles to your goal (Calculated off of joystick input?)
There are 2 controllers in our drivetrain code (SSDrivetrain and PolyDrivetrain).
SSDrivetrain is used for more traditional driving (go 1 meter forwards). It has motion profiles on the inputs, which can be disabled if you want. This lets us send it a goal of +1 meter, and go work on other things while it does it. Or, it lets us feed the goal in directly when we are doing vision alignment and want the controller dynamics without any sort of profile.
PolyDrivetrain runs the teleop driving. It is mostly using feed forwards, but has a proportional loop to do small corrections. It understands the robot's physics, and uses that knowledge to do constant radius turns. The combination of the feed forwards and the feed back makes the driving experience pretty connected.
Both of these are fed by a 7-state Kalman filter which estimates the positions, velocities, and disturbance forces of each side of the drivetrain. Controls can be split into two worlds: estimation and control. You need good sensors or algorithms to figure out what your system is doing, and then you can apply the control algorithms. Once you've split the world this way, you can take a nice estimator and use it to feed multiple controllers. Generally speaking, the controller ends up being a matrix to multiply into the error to get an output, which is stateless.
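In sketch form (shapes only, stand-in gains):

import numpy as np

x_hat = np.zeros((7, 1))         # the one shared drivetrain estimate
K_teleop = np.zeros((2, 7))      # gains designed offline; stand-ins here
K_vision = np.zeros((2, 7))

def control(K, r, x_hat):
    # The whole controller: a gain matrix times the estimated error.
    return K @ (r - x_hat)

# Switching from teleop driving to vision alignment is just switching
# which K you multiply by; the estimator underneath doesn't change.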
How do you account for drift? We used a Spartan Board this year and saw something like 2 degrees of drift per match. Knowing Austin, he wouldn't tolerate misalignment by 2 degrees :P
Sorry to let you down, but we don't do anything special. 2 degrees/match is better than we need. Drift is only one of the metrics that matters. We are more concerned with bandwidth and linearity, and we have not been let down by the board on those fronts. (We've been using that gyro since 2012)
In what file are you actually running your overall robot code? I can't seem to find where you use your 2016 drive classes to actually run the closed-loop drive.
It's deceptively simple. //y2016/control_loops/drivetrain:drivetrain_main.cc
We have 4 robots (assuming I can count...) all driving with the same code. There is a configuration structure passed into the drivetrain controller class for each robot which contains the physics model for each robot and other configuration bits. This means that the year-specific code is 100 lines, all of it boilerplate.
Take a look in //y2016/control_loops/python/ for the models of each of our subsystems.
We have our own framework for designing robot code. Our code is broken up into somewhere around 10 processes, each responsible for one part of the robot. 1 process is Autonomous mode, 1 process is the joystick code, 1 process is the hardware interface, 1 process is the drivetrain, 1 process is the shooter, 1 process is the vision UDP listener, 1 process is the superstructure code, etc. Those processes communicate with each other via our own PubSub message-passing code. That means, for example, that the drivetrain will listen for Goal and Position messages, and publish Output and Status messages. (look in //frc971/control_loops/drivetrain:drivetrain.q for the actual definitions). This lets us be resilient to crashes, and keeps the interfaces between modules very well defined. For example, with 0 changes outside the hardware interface layer, we switched from running all our code on a BBB in 2014 to it all running on the roboRIO. We also are able to generate simulated Position messages and listen for simulated Output messages in our unit tests so that we can exercise the code without risking the real robot.
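A toy version of the pattern, nothing like the real .q framework, just the shape of it:

import queue, threading

goal_q, position_q, output_q = queue.Queue(), queue.Queue(), queue.Queue()

def drivetrain_process():
    goal = 0.0
    while True:
        position = position_q.get()            # one Position per cycle
        while not goal_q.empty():
            goal = goal_q.get()                # latest Goal wins
        output_q.put(0.5 * (goal - position))  # stand-in controller

threading.Thread(target=drivetrain_process, daemon=True).start()
# A test (or the real hardware interface) publishes Positions and
# consumes Outputs; the drivetrain process never knows which it is:
goal_q.put(1.0)
position_q.put(0.0)
print(output_q.get())   # 0.5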
If you are generating a motion profile based on every joystick input (or every 5ms cycle?), how are you not overloading the roboRIO?
There are a bunch of ways to do motion profiles. One way is to pre-compute the entire profile and then just execute it. This is common in well-defined environments, or where calculating the profile dynamically is expensive. The other is to re-calculate the profile each time you want to make some progress.
We like the flexibility of re-calculating every time we want to move. To do this, we need to advance the requested position each cycle of the control loop. The only way to guarantee that is to do that in the control loop. (Last year, for our drivetrain, we had another process which was calculating the profiles. The added coordination overhead of interacting with that process was enough that we decided to push the profile down into the controller process.) This also means that the joystick code really just sends out goals when buttons are hit, and lets the underlying code do all the rest. //y2016/joystick_reader.cc has all the joystick code, and it's pretty easy to read.
The motion profiles are cheap to calculate. The 7x7 matrix multiplies do add up though :P
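Here is roughly what a recalculate-every-cycle profile looks like (a 1-D trapezoidal sketch, not our actual implementation). There is no stored trajectory, just the current state and the current goal, so accepting a brand new goal mid-move costs nothing:

def profile_step(pos, vel, goal, v_max, a_max, dt):
    to_go = goal - pos
    sign = 1.0 if to_go >= 0 else -1.0
    # Fastest speed from which we can still stop exactly at the goal:
    v_stop = sign * (2.0 * a_max * abs(to_go)) ** 0.5
    v_des = max(-v_max, min(v_max, v_stop))
    # Slew toward that speed, limited to a_max:
    vel += max(-a_max * dt, min(a_max * dt, v_des - vel))
    return pos + vel * dt, vel

pos, vel = 0.0, 0.0
for _ in range(200):                      # 1 second of 5 ms cycles
    pos, vel = profile_step(pos, vel, 1.0, v_max=2.0, a_max=4.0, dt=0.005)
print(round(pos, 2), round(vel, 2))       # ~1.0, ~0.0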
It turns out a large chunk of CPU on our robot goes into context switches between the various tasks. Our drivetrain code uses like 6% CPU, and our shooter uses like 3%. Most of the CPU on our robot either goes into the logger (20% ish), or into WPILib (50%).
Thanks for the help!
Sorry if any of the questions are strange or ignorant, I'm not super familiar with C++ and some of the more advanced syntax and features of it. Following your code has been somewhat difficult because of that. (That said, it is written very well and is pretty well commented.)
Ask more questions as you have them. There is a lot going on in our code which isn't obvious on the first read. Hopefully the background I've given so far will help you understand what is going on.
If you can get a linux box set up, do try to run the tests. It's pretty cool to be able to exercise our collision detection code, zeroing code, and other test cases. (bazel test //y2016/control_loops/...)
We are huge believers of code review and test driven development. I don't think we could do what we do without both of those, and it helps us involve students at all skill levels in the process while maintaining the quality and reliability we require of our code.
AustinSchuh
21-05-2016, 01:31
I have heard a few whispers that you run custom sized shaft in order to help achieve such precision. Is this true?
Yes. Every hex shaft in the 3 gearboxes that matter is custom sized to be about 4 thou oversized. The gears are a light press fit on the shaft.
We also run the first reduction as a well tensioned chain run. The chain has negligible backlash when well tensioned. You do need to tension it enough such that it doesn't go slack under acceleration and deceleration, or you add nonlinearities back into the control loops again...
Apparently you can do similarly good things with a soda can as a shim, or some other thin metal. We haven't tried that ourselves.
thatprogrammer
21-05-2016, 13:47
We have our own framework for designing robot code. Our code is broken up into somewhere around 10 processes, each responsible for one part of the robot. 1 process is Autonomous mode, 1 process is the joystick code, 1 process is the hardware interface, 1 process is the drivetrain, 1 process is the shooter, 1 process is the vision UDP listener, 1 process is the superstructure code, etc. Those processes communicate with each other via our own PubSub message-passing code. That means, for example, that the drivetrain will listen for Goal and Position messages, and publish Output and Status messages. (look in //frc971/control_loops/drivetrain:drivetrain.q for the actual definitions). This lets us be resilient to crashes, and keeps the interfaces between modules very well defined. For example, with 0 changes outside the hardware interface layer, we switched from running all our code on a BBB in 2014 to it all running on the roboRIO. We also are able to generate simulated Position messages and listen for simulated Output messages in our unit tests so that we can exercise the code without risking the real robot.
I'm curious about how this is being done. What file or class is used to actually join everything together and work as the "main" class that feeds all the others to the roboRIO itself? How are the WPILib classes incorporated into this? Thanks!
kylestach1678
21-05-2016, 16:03
I'm curious about how this is being done. What file or class is used to actually join everything together and work as the "main" class that feeds all the others to the roboRIO itself? How are the WPILib classes incorporated into this? Thanks!
I'm obviously not on 971, but from what I can tell, the file you are looking for is y2016/wpilib/wpilib_interface.cc. It contains a bunch of helper classes to handle reading from the necessary message queues and writing to the WPILib objects as well as the equivalent of the WPILib Robot class.
kylestach1678
22-05-2016, 18:44
Austin, are there any plans for getting the changes made in your bazel distribution merged back upstream? The code builds correctly using the build of bazel from your custom package repos but fails when using bazel 0.2.3.
AustinSchuh
22-05-2016, 23:26
Austin, are there any plans for getting the changes made in your bazel distribution merged back upstream? The code builds correctly using the build of bazel from your custom package repos but fails when using bazel 0.2.3.
We are continually ahead and behind... We upstream our changes as fast as we can, but each upgrade seems to find more issues. Can you email/PM me the failure to keep this thread cleaner?
0.2.3 changed how runfiles work, and we haven't upgraded and fixed the issues. You probably need to go back to 0.2.2, and we'll work on upgrading soon. What we really should do is provide a //tools/bazel which pulls down the blessed bazel version. I think I sent you guys a pull request with that script in it. I'll see if we can get that merged into our repo.
kylestach1678
23-05-2016, 01:06
0.2.3 changed how runfiles work, and we haven't upgraded and fixed the issues. You probably need to go back to 0.2.2, and we'll work on upgrading soon. What we really should do is provide a //tools/bazel which pulls down the blessed bazel version. I think I sent you guys a pull request with that script in it. I'll see if we can get that merged into our repo.
My bad. You're right - downgrading to 0.2.2b fixed it. I suppose that's the downside to using a tool like bazel where breaking changes occur every few releases.
Now that I've gotten to take a better look at the code, I'm amazed (as usual) at the sheer amount of testing code present. 1335 lines of code in superstructure_lib_test.cc :ahh:! How are you able to get people to actually write automated test code? We've been trying to make unit testing a more important part of our development process, but this past year we weren't able to make that happen as much as we would have liked for two main reasons: a lack of programmer experience with tests and a general attitude that they are a chore to write. How do you go about training new programmers to write tests?
Thanks for open-sourcing all of this so quickly - I probably spend more time looking over 971's code every year than I should, but there's just so much to learn from it.
AustinSchuh
24-05-2016, 02:33
My bad. You're right - downgrading to 0.2.2b fixed it. I suppose that's the downside to using a tool like bazel where breaking changes occur every few releases.
And that's why, for work, we upstreamed https://bazel-review.googlesource.com/#/c/2620/ . On my list of things to do is to use that for 971. That way, the version of bazel that is used by a specific git commit is versioned with the rest of the code. That makes it such that you get to pick when you want to upgrade. Bazel is all about trying to version all your dependencies (compilers, code, libraries, etc) such that everyone will get a bitwise identical result. Google does a very similar thing internally.
For others following along, Bazel builds a namespace sandbox for each build step, only providing the dependencies that were specified for each step. This makes it very hard (though not impossible) to write build rules which aren't deterministic and repeatable. I've built code on one laptop, deployed it with rsync, re-built on another laptop, and rsync told me there was no work to be done.
Now that I've gotten to take a better look at the code, I'm amazed (as usual) at the sheer amount of testing code present. 1335 lines of code in superstructure_lib_test.cc :ahh:! How are you able to get people to actually write automated test code? We've been trying to make unit testing a more important part of our development process, but this past year we weren't able to make that happen as much as we would have liked for two main reasons: a lack of programmer experience with tests and a general attitude that they are a chore to write. How do you go about training new programmers to write tests?
Thanks for open-sourcing all of this so quickly - I probably spend more time looking over 971's code every year than I should, but there's just so much to learn from it.
1335 means we've gotten better at being more concise :P The real measure is number of test cases, of which I think there are 29 in that file. That's more than we've had before. We write what I would call more integration tests than unit tests, but I've never really been happy with any definitions I've come across.
We are really big on testing as a team, and I think that helps. I view the season as a success if I can get the students to believe that testing is and should be a required part of development. When new students come in, the older students and mentors focus on testing. Heck, that's one of the ways we'll get new students to start to contribute. "Please write foo and add a test for it." isn't an unheard of request, or "I think there may be a bug here, can you test it to see?". Everyone writes tests (I probably write more tests than most), so that prevents it from being a new kid thing. We have mentors who have a lot of experience in industry, and help make sure the code is designed to be tested. We also look for tests as part of the code review process. I'm much more willing to merge a commit if it has tests. (There are a small subset of people who have the permission to do the final submission for code that goes on the robot. This helps keep the code quality up and keep the robots functioning.)
A lot of that stems from us building complicated robots requiring complicated software. It is hard, if not impossible, to write complicated software without good testing. A lot of the bugs we've caught in testing would have destroyed our robot when we turned it on. We also start coding week 3, and finish assembling and wiring with only a couple days left in the build season. Good testing lets us start writing code before the hardware is available, and test out all the tricky corner cases that either rarely come up in real life, or are really hard to tell if they are performing correctly. I won't let the robot be enabled until we have tests verifying that all the traditional problem spots in the code work (zeroing, saturation, etc). The end result is pretty close to 100% line coverage, though we don't bother to measure it. This makes testing a required part of our robot code. Our intake this year took 0.35 seconds to move 90 degrees with 1/2 horsepower. If something goes wrong in that code, you can't disable it fast enough to save the robot. That's pretty scary, and really gets you to think twice about code quality.
Honestly, one of my favorite days of the build season is the day when we run the code for the first time. There are normally a couple tuning issues, but nothing really major, and the code tends to "just work". It is a very deliberate process of verifying that sensors all work, then verifying that the motors all spin the right direction, calibrating all the hard stops, soft stops, and zero points, and then slowly bringing the subsystems up with increasing power limits. I like to make sure people come for the robot bringup, since it really shows off the power of good testing.
My experience is that resistance or hesitation to test code comes from it being both hard to do and not strictly required for small projects. If you want to write good tests, you need to design your code so that you can do dependency injection where needed, and expose the required state. This takes explicit design work up front. The problem is that for small projects, testing slows you down; you can run the 3 tests manually faster than you could write the test code. As the project size and complexity grow, testing starts to pay off. FRC robots are on the edge of requiring tests.
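As a rough sketch of what that up-front design work looks like (hypothetical names, and a plant far simpler than a real robot's):

# Sketch: inject the plant so the controller can be exercised against a
# simulation in a test instead of real hardware. All constants are made up.

class SimulatedPlant:
    """First-order toy motor model standing in for the real mechanism."""
    def __init__(self):
        self.position = 0.0
        self.velocity = 0.0

    def update(self, voltage, dt=0.005):
        accel = 30.0 * voltage - 20.0 * self.velocity  # toy dynamics
        self.velocity += accel * dt
        self.position += self.velocity * dt


class ArmController:
    def __init__(self, plant):
        self.plant = plant  # injected: real hardware or a simulation

    def run_to(self, goal, cycles=2000):
        for _ in range(cycles):
            error = goal - self.plant.position
            voltage = max(-12.0, min(12.0, 20.0 * error))  # saturate at 12 V
            self.plant.update(voltage)
        return self.plant.position


def test_arm_reaches_goal():
    controller = ArmController(SimulatedPlant())
    final = controller.run_to(1.0)
    assert abs(final - 1.0) < 0.01, final


if __name__ == "__main__":
    test_arm_reaches_goal()
    print("test passed")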
Integrating good testing into your team is as much a knowledge problem as a cultural problem. There's a classic story where a major web company was unable to make a release of their product for 6 months because they weren't sure that if they released it, it would work. They couldn't risk it. They fixed that by hiring people to turn the culture around, training everyone on how to test and the importance of testing. You need to have your mentors and leaders buy in to automated testing, and drive it. They need to lead by example. Take your code from this year, and figure out how to modify it to be testable, and then go write some tests for it.
Tom Bottiglieri
24-05-2016, 13:00
Most of the CPU on our robot ... into WPILib (50%).
:(
Greg Woelki
30-05-2016, 20:35
Yes. Every hex shaft in the 3 gearboxes that matter is custom sized to be about 4 thou oversized. The gears are a light press fit on the shaft.
We also run the first reduction as a well tensioned chain run. The chain has negligible backlash when well tensioned. You do need to tension it enough such that it doesn't go slack under acceleration and deceleration, or you add nonlinearities back into the control loops again...
Apparently you can do similarly good things with a soda can as a shim, or some other thin metal. We haven't tried that ourselves.
How do you go about manufacturing the oversized shafts? Also, how tight are the interference fits? Hand tight? Arbor press tight?
mr.roboto2826
31-05-2016, 12:39
How did you guys go about the development of your 2 (side by side) wheeled shooter? From the outside looking in it looks fairly simple, but I would assume hours of tweaking went into it. Could you enlighten us as to how the shooter went through its development stages?
AustinSchuh
01-06-2016, 02:06
How did you guys go about the development of your 2 (side by side) wheeled shooter? From the outside looking in it looks fairly simple, but I would assume hours of tweaking went into it. Could you enlighten us as to how the shooter went through its development stages?
We have a couple pictures on our Picasa page, but the story helps tie them together.
We did a bunch of CAD sketches to try to determine packaging. We quickly discovered that to make it all package like we wanted, a side-by-side wheel shooter helped a lot. That info was then fed to the prototyping team, and they set to work.
The first job was to build a CIM + wheels module. By random chance, we picked 1 CIM and 2 wheels. Based on some old math, we picked a reasonably high surface speed, and machined the module on the router.
We then attached the module to a bunch of 8020, and started doing a parameter sweep.
https://lh3.googleusercontent.com/-_hYWVOswP38/VpdWPAZLBTI/AAAAAAAAcuQ/SjhpXnSt8TA-chDJWL_gBXCOb-GdlXTpACCo/s1440/proto%2Bshooter%2BB%2B%25288%2529.jpg
The obvious parameters (for us) to try were compression and surface speed. Once the shooter was making shots into the goal reliably, we started collecting ball landing position data. They used a pulse counter on a Fluke multimeter with a reflectance sensor to measure RPM, so they could dial the RPM in accurately for the tests.
https://lh3.googleusercontent.com/-u35k4N9wXpU/VqXHsjXN3jI/AAAAAAAAdF8/zDCuHEjxInQBpyPiTWaJlTpv1okWbpZ8gCCo/s1152/shooter%2Bproto%2BB%2Bdata%2Bcollection%2B%25284%2529.jpg
As you increase compression, there is a point where the shot spread shrinks to a minimum and then starts to increase again. We picked that minimum.
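As a sketch of what picking that point looks like (made-up numbers, not our actual data):

import statistics

# Hypothetical landing offsets (inches from the aim point) per compression.
sweep = {
    1.5: [4.0, -3.5, 5.1, -4.4, 3.8],
    2.0: [1.2, -0.8, 0.9, -1.1, 0.6],
    2.5: [2.9, -3.1, 2.4, -2.7, 3.3],
}

spreads = {c: statistics.stdev(samples) for c, samples in sweep.items()}
best = min(spreads, key=spreads.get)
print("best compression: %.1f in, spread: %.2f in" % (best, spreads[best]))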
The prototyping team didn't think the standard deviation was quite good enough, so they kept working at it. Someone came up with the idea to offset the wheels vertically to put a little spin on the ball. That reduced the spread far enough that they were happy with it, and we shipped it with those numbers exactly. We haven't had to touch any of the shooter parameters all season, which is wonderful.
mr.roboto2826
01-06-2016, 11:16
The prototyping team didn't think the standard deviation was quite good enough, so they kept working at it. Someone came up with the idea to offset the wheels vertically to put a little spin on the ball. That reduced the spread far enough that they were happy with it, and we shipped it with those numbers exactly. We haven't had to touch any of the shooter parameters all season, which is wonderful.
Would you be able to go in depth a little more as to your prototyping process and iteration? I noticed in the photo you had a rail on the bottom of your shooter, but your final design had none. Also could you explain the pneumatic setup on your feeder into the shooter?
Travis Schuh
01-06-2016, 21:16
How do you go about manufacturing the oversized shafts? Also, how tight are the interference fits? Hand tight? Arbor press tight?
We shoot for what I think is described as a transitional fit (± a couple tenths off of nominal). We used to just sneak up on it and test fit it, but this year we got access to an optical comparator at a sponsor's lab and measured the hex size from the 1/2" hex VP broach to be ~0.5045" and from the 3/8" hex VP broach to be ~0.3765" (I would verify your own numbers if you choose a similar route). We ended up needing a light press to assemble, but it was easy to remove the parts when we did end up having to change a ratio. One trick we found is that it is important to put a radius on the edges of the hex to account for the radius on the broach. The numbers we cut to should be in our CAD.
We machined them using a 5C collet fixture on a HAAS mini mill (PN 3174A42 on McMaster or equivalent). This holds the shaft vertically. We machined them primarily with a 2" LOC, 1/2" diameter carbide endmill, and designed the shafts so that they could be done in two setups this way. We started with 5/8" precision ground 7075. We would start with a blank of known length (so we could use collet stops), machine one side, then flip it over, hold onto a section that was either previously machined or designed to be left un-machined, and machine the back side. Last year we started using slitting saws to put the snap ring grooves in during the same operation, and that has been a nice time saver. Another trick we learned is that it is really worthwhile to have a finishing endmill that you use just for the finish pass, so that you don't end up with the bottom of each profile being a little tight due to the end of the endmill being more worn. We haven't had any issues with runout, although it is a concern.
Overall, I estimate that we spend 1/2 to 3/4 of a day machining all the critical shafts for both robots. The setup and CAM changeover go relatively quickly once the first part is set up. We find the time savings on the design and the reduced fiddle factor to be worth the effort.
Your last 3 robots have been extremely ambitious, unique, and well done. Why are your robots so different? What are you doing that other teams aren't? During brainstorming, if someone proposed something like your 2015 robot, most teams would throw it out immediately. How do you encourage such unique thinking, and when you move into the design process, how do you know you'll be able to pull off your crazy robots?
AustinSchuh
02-06-2016, 01:53
Would you be able to go in depth a little more as to your prototyping process and iteration? I noticed in the photo you had a rail on the bottom of your shooter, but your final design had none.
The rail in our prototype was to help feed the ball in consistently. It was added quickly after hand-feeding proved to be not reliable enough.
We started by controlling every variable possible. Once we had something that performed well enough, we started looking at the CAD and what the final solution was going to look like. We then measured the prototype and tried to make the model match. Whenever we found a place where we wanted the model to diverge from the prototype, we tweaked the prototype to match the proposed design and re-ran our tests. Our final design has a plate holding the ball until it is grabbed by the wheels, which serves the same purpose. We pulled the bar back until it was just about as long as the final design wanted, and verified that it still worked.
Also could you explain the pneumatic setup on your feeder into the shooter?
That piston linkage alone took an entire weekend of work. Open up the model and take a look. The left-right spacing is killer, especially since you want a ~3 pound grabbing force.
This year was different in that you didn't need to shoot multiple balls. We chose to exploit that by designing a shooter to hold one ball very securely and load it very consistently. That pointed us towards designing something which grabbed the ball in a "cage" with pistons and linkages, and some sort of piston loading mechanism. After a couple conceptual iterations in discussions, someone proposed having 2 links where the links were driven relative to each other to grab and release the ball, and the pair of links rotated together to feed the ball into the flywheels. We very carefully worked through all the geometry in SolidWorks sketches, figured out all the components, and then worked on finalizing the design. We tried at least 4 different piston models before we found one that would work.
I'm not sure I explained the pistons the best. Ask for clarification where I wasn't clear enough.
AustinSchuh
03-06-2016, 00:47
Your last 3 robots have been extremely ambitious, unique, and well done. Why are your robots so different? What are you doing that other teams aren't? During brainstorming, if someone proposed something like your 2015 robot, most teams would throw it out immediately. How do you encourage such unique thinking, and when you move into the design process, how do you know you'll be able to pull off your crazy robots?
That's a hard one to answer, kind of like trying to provide a recipe for innovation...
One of the things that makes 971 unique is that we are willing and able to move significant mechanical complexity into software. 2014 and 2015 were good examples of this. 2016 wasn't as crazy from a software point of view, but we also have gotten good at making robots work. We have strong software mentors and students with experience in controls and automation along with strong mechanical mentors and students to build designs good enough to be accurately controlled. The mechanical bits make the software easy (or at least possible).
We tend to start with a list of requirements for the robot, quickly decide what we know is going to work from previous years (and actually start designing it in parallel), and then sketch out concept sketches of the rest. We do a lot of big picture design with blocky mechanisms and sketches to work out geometry and the big questions. We then start sorting out all the big unknowns in that design, and working our way from there to a final solution. I think that a lot of what makes us different is that we impose interesting constraints on our robots, and then work really hard to satisfy them. We come up with most of our most out of the box ideas while trying to figure out how to make the concept sketch have the CG, range of motion, tuck positions, and speeds that we want before we get to the detailed design.
In 2014, we wanted to build something inspired by 1114 in 2008. We put requirements on it to be able to quickly intake the ball and also to be able to drive around with the claw inside our frame perimeter, ready to grab the ball. We tend to get to the requirements pretty quickly, and then start figuring out the best way to achieve them. After a bunch of challenging packaging questions, someone proposed that rather than using a piston to open and close the claw, we could just individually drive the two claws with motors. That breakthrough made our packaging requirements much easier to satisfy, and ended up being a pretty unique design. That got us comfortable building robots with more and more software.
In 2015, we were pretty confident that we didn't need to cut into the frame perimeter. Unfortunately, by the time we had determined that that requirement was hurting us, we had already shipped the drive base. We spent a lot of time working through various alternatives to figure out how to stack on top of the drive base and then place the stack. In the end, the only way we could make it work was what you saw. We knew that we were very close on weight, and again used software to allow us to remove the mechanical coupling between the left and right sides to reduce weight. We were confident from our 2014 experience that we could make that work. I like to tell people that our 2015 robot was very good execution of the wrong strategy... We wouldn't build it again, so maybe you guys are all smarter than us there ;)
2016 fell together much easier than I expected. It had more iteration than we've ever done before (lots of V2 on mechanisms), which helps it look polished. Honestly, most of the hard work in 2016 was in the implementation, not the concept. We wanted a shooter that shot from high up, and the way to do that was to put it on an arm.
We are getting to the point where we have a lot of knowledge built up around what fails in these robots and what to pay attention to. That part has just taken a lot of time and a lot of hard work. We don't spend much time debating where to do low backlash gearboxes, or figuring out how to control or sense various joints. Sometimes, I think we design the robots we design because we over-think problems and then come up with solutions to them. We work through a lot of math for gearbox calculations, power usage, etc, and do some basic simulations on some of the more critical subsystems. We also do a gut check to make sure that we think the subsystems will work when we build them, and we have good enough prototypes to prove out anything we are uncertain about.
AirplaneWins
07-06-2016, 19:17
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?
Schroedes23
09-06-2016, 13:47
Can you reveal the secret of how you dealt with the changing compression of the boulders?
Travis Schuh
10-06-2016, 01:16
Can you reveal the secret of how you dealt with the changing compression of the boulders?
We didn't really have a secret, other than that the double wheeled shooter seemed to be not very sensitive to them (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.
Travis Schuh
10-06-2016, 01:24
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?
I can't quite speak for our vision team on all of the implementation, but I can fill in some high level details.
We do have two cameras. There was an early thought to use them to do stereo (do separate target recognition in both cameras, and get depth from the offset distance), and we had a bench prototype of this that had good preliminary results. We ended up not needing accurate depth info, so the two cameras were just used for finding the center of the goal. We could have done that with one camera mounted centered, but that is easier said than done.
We were using the Jetson for vision processing and were happy with its performance.
AustinSchuh
12-06-2016, 01:32
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?
Sorry for the delayed response. Life got in the way of robots again :rolleyes:
As Travis said, we wanted to do stereo, but didn't get around to verifying that it worked well enough to start using the distance that it reported. One of the side effects of the stereo cameras was that we didn't need to deal with the transforms required to handle the camera not being centered. Our shooter didn't have any space above or below the ball for a camera. The bottom of the shooter rested on the bellypan, and the top just cleared the low bar.
We did the shape detection on the Jetson TK1, and passed a list of the U shapes found back to the roboRIO over UDP in a protobuf, including the coordinates of the 4 corners for each camera. We found that we didn't need to do color thresholding, just intensity thresholding and then shape detection. This ran at 20 Hz, 1280x1024 (I think), all on the CPU. The roboRIO then matched up the targets based on the angle of the bottom of the U.
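For a rough feel for that style of pipeline (a toy OpenCV sketch, not our actual detection code, which matches U shapes rather than quadrilaterals):

import cv2
import numpy as np

def find_target_corners(gray):
    """Toy stand-in for the real detection: intensity threshold, then
    keep large contours that approximate to 4 corners."""
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # OpenCV 4 returns (contours, hierarchy); OpenCV 3 returns 3 values.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for contour in contours:
        if cv2.contourArea(contour) < 100:
            continue  # reject small noise blobs
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) == 4:
            targets.append(approx.reshape(4, 2))
    return targets

if __name__ == "__main__":
    frame = np.zeros((1024, 1280), dtype=np.uint8)
    cv2.rectangle(frame, (600, 500), (700, 560), 255, -1)  # fake bright target
    print(find_target_corners(frame))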
We were very careful to record the timestamps through the system. We recorded the timestamp that v4l2 reported the image was received by the kernel, the timestamp at which it was received by userspace on the Jetson, the timestamp it was sent to the roboRIO, and the timestamp the processed image was received on the roboRIO. That let us back out the projected time that the image was captured on the Jetson, in the roboRIO clock, to within a couple ms. We then saved all the gyro headings over the last second and the times at which they were measured, and used those two pieces of data to interpolate the heading when the image was taken, and therefore the current heading of the target. This, along with our well tuned drivetrain control loops, let us stabilize to the target very quickly.
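Conceptually, the interpolation step looks something like this (a simplified sketch with a hypothetical data layout, not our actual code):

from bisect import bisect_left

def heading_at(samples, t_capture):
    """Linearly interpolate the gyro heading at the image capture time.

    samples: list of (timestamp, heading) tuples sorted by timestamp,
    covering roughly the last second.
    """
    times = [t for t, _ in samples]
    i = bisect_left(times, t_capture)
    if i <= 0:
        return samples[0][1]
    if i >= len(samples):
        return samples[-1][1]
    (t0, h0), (t1, h1) = samples[i - 1], samples[i]
    frac = (t_capture - t0) / (t1 - t0)
    return h0 + frac * (h1 - h0)

# Example: gyro sampled every 5 ms; image captured between samples.
samples = [(0.000, 0.00), (0.005, 0.02), (0.010, 0.05), (0.015, 0.09)]
print(heading_at(samples, 0.0125))  # -> 0.07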
Ask any follow-on questions that you need.
AustinSchuh
12-06-2016, 01:34
We didn't really have a secret, other than that the double wheeled shooter seemed to be not very sensitive to them (consistent with what we noticed in prototyping). We also had a pretty flat shot (helped by the high release point and a fast ball speed), so our shot accuracy was not as sensitive to variations in ball speed.
This was also helped by our prototyping team spending significant time figuring out which compression seemed to have the least shot variation. They spent a lot of time shooting balls and measuring the spread.
ranlevinstein
14-06-2016, 09:39
Model based control is required :) Once you get the hang of it, I find it lets us do cooler stuff than non-model based controls. We plot things and try to figure out which terms have errors in them to help debug it.
The states are:
[shoulder position; shoulder velocity; shooter position (relative to the base); shooter velocity (relative to the base); shoulder voltage error; shooter voltage error]
The shooter is connected to the superstructure, but there is a coordinate transformation to have the states be relative to the ground. This gives us better control over what we actually care about.
The voltage errors are what we use instead of integral control. This lets the Kalman filter learn the difference between what the motor is being asked to do and what is actually achieved, and lets us compensate for it. If you work the math out, volts -> force.
First of all, your robot is truly amazing!
I have a few questions about your control.
1. I have read about your delta-u controller, and I am not sure if I understood it correctly; I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as dx/dt = Ax + Bu, where u is the rate of change of voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from position using an observer. You integrate u and command the motors with the result. To the voltage error state you feed in the difference between the estimated voltage from the observer and the integrated u commands.
2. Will the delta-u controller work the same if I command the motors with u directly, and instead use the integral of the voltage error as a state? Why did you choose this form for the controller and not another?
3. Is the delta-u controller in the end a linear combination of position error, velocity error, and voltage error?
4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?
5. How did you tune the Q and R matrices in the Kalman filter?
6. How do you tune the parameters that transform the motion profile into the feed-forward you can feed to your motors?
7. How did you create 2-dimensional trajectories for your robot during auto?
8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?
Thank you very much! :)
AustinSchuh
15-06-2016, 01:54
First of all, your robot is truly amazing!
I have a few questions about your control.
1. I have read about your delta-u controller, and I am not sure if I understood it correctly; I would like to know if I got it right. You have 3 states in your state space controller, which include position, velocity, and voltage error. You model it as dx/dt = Ax + Bu, where u is the rate of change of voltage. Then you use pole placement to find the K matrix, and in your controller you set u to be -Kx. Then you estimate the state from position using an observer. You integrate u and command the motors with the result. To the voltage error state you feed in the difference between the estimated voltage from the observer and the integrated u commands.
2. Will the delta-u controller work the same if I command the motors with u directly, and instead use the integral of the voltage error as a state? Why did you choose this form for the controller and not another?
You nailed it.
Delta-U won't work if you command the motors with U, since your model doesn't match your plant (off by an integral).
I've recently switched formulations to what we used this and last year, and I think the new formulation is easier to understand.
If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".
d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u
You then design the controller to control the original Ax + Bu (u = K * (R - x)), design an observer to observe the augmented state, and then use a controller which is really [K, 1].
We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
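To make that concrete, here's a rough numpy sketch of the augmentation (a generic one-joint model with made-up constants, not our actual plant):

import numpy as np

# Un-augmented continuous-time plant dx/dt = A x + B u with states
# [position; velocity]. The constants are invented for illustration.
A = np.array([[0.0, 1.0],
              [0.0, -8.0]])
B = np.array([[0.0],
              [40.0]])

# Augment with a voltage error state: it enters the dynamics through B
# exactly like the commanded voltage does, and has no dynamics of its own.
A_aug = np.block([[A, B],
                  [np.zeros((1, 2)), np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])
# Only position is measured; the observer has to infer velocity and the
# voltage error from how the measurements evolve.
C_aug = np.array([[1.0, 0.0, 0.0]])

print(A_aug)
print(B_aug)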
3. Is the delta-u controller in the end a linear combination of position error, velocity error, and voltage error?
Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.
4. Why did you use a Kalman filter instead of a regular observer? How much better was it in comparison to a regular observer?
It's just another way to tune a state space observer. If you check the math, assuming fixed gains, the Kalman gain converges to a fixed value as time evolves. You can solve for that steady-state Kalman gain and use it all the time, which results in the update step you find in a regular observer.
Honestly, I end up tuning it one way and then looking at the poles directly at the end to see how the tuning affected the results.
5. How did you tune the Q and R matrices in the Kalman filter?
The rule of thumb I've been using is to set the diagonal terms to the square of a reasonable error quantity for that term (for Q), and to try to guess how much model uncertainty there is. I also like to look at the resulting Kalman gain to see how crazy it is, and then also plot the input vs the output of the filter and look at how well it performs during robot moves. I've found that if I look at things from enough angles, I get a better picture of what's going on.
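For example (made-up error magnitudes, just to illustrate the rule of thumb):

import numpy as np

# Diagonal Q entries are the squares of plausible errors in each state,
# for an augmented [position; velocity; voltage_error] model.
q_position = 0.02  # rad of believable position error
q_velocity = 1.0   # rad/s of believable velocity error
q_volt_err = 2.0   # volts of believable model mismatch
Q = np.diag([q_position**2, q_velocity**2, q_volt_err**2])

r_encoder = 0.001  # rad of encoder measurement noise
R = np.diag([r_encoder**2])
print(Q, R, sep="\n")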
6. How do you tune the parameters that transform the motion profile into the feed-forward you can feed to your motors?
I didn't. I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasible), and used that to define a Kff.
The equation to minimize is:
(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
This means that you have 3 goals running around. The un-profiled goal, the profiled goal and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.
The end result was that our model defined the feed-forwards constants, so it was free :) We also were able to gain schedule the feed-forwards terms for free as well.
FYI, this was the first year that we did feed-forwards. Before, we just relied on the controllers compensating. You can see it in some of the moves in the 2015 robot where it'll try to do a horizontal move, but end up with a steady state offset while moving due to the lack of feed-forwards.
7. How did you create 2-dimensional trajectories for your robot during auto?
We cheated. We had a rotational trapezoidal motion profile and a linear trapezoidal motion profile. We just started them at different times/positions, added them together, and let them overlay on top of each other. It was a pain to tune, but worked well enough. We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.
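A rough sketch of the overlay trick (toy numbers, not our tuned values):

import math

def trapezoid(t, distance, v_max, a_max):
    """Position along a trapezoidal (or triangular) profile at time t."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc**2
    if 2.0 * d_acc > distance:  # never reaches v_max: triangular profile
        t_acc = math.sqrt(distance / a_max)
        d_acc = 0.5 * a_max * t_acc**2
        v_max = a_max * t_acc
        t_cruise = 0.0
    else:
        t_cruise = (distance - 2.0 * d_acc) / v_max
    t = max(0.0, t)  # profiles that start later just sit at 0
    if t < t_acc:
        return 0.5 * a_max * t**2
    if t < t_acc + t_cruise:
        return d_acc + v_max * (t - t_acc)
    t_end = 2.0 * t_acc + t_cruise
    if t < t_end:
        return distance - 0.5 * a_max * (t_end - t)**2
    return distance

# Drive forward 3 m, and start a 90 degree turn 0.5 s into the move.
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    forward = trapezoid(t, 3.0, 2.0, 3.0)
    heading = trapezoid(t - 0.5, math.pi / 2, 2.0, 6.0)
    print("t=%.1f  forward=%.2f m  heading=%.2f rad" % (t, forward, heading))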
8. How do you sync multiple trajectories in the auto period? For example, how did you make the arm of your robot go up after crossing a defense?
Thank you very much! :)
Our auto code was a lot of "kick off A, wait until condition, kick off B, wait until condition, kick off C, ..." So, we'd start a motion profile in the drive, wait until we had moved X far, and then start the motion profile for the arm. The controllers would calculate the profiles as they went, so all Auto actually did was coordinate when to ask what subsystem to go where. With enough motion profiles, and when you make sure they aren't saturated, you end up with a pretty deterministic result.
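A toy sketch of that sequencing style (hypothetical subsystem helpers, not our actual action framework):

import time

class ProfiledSubsystem:
    """Toy stand-in for a subsystem that executes its own motion profile."""
    def __init__(self, rate):
        self._goal = 0.0
        self._rate = rate
        self._t0 = time.monotonic()

    def start_profile(self, goal):
        self._t0, self._goal = time.monotonic(), goal

    def position(self):
        return min(self._goal, self._rate * (time.monotonic() - self._t0))

    def done(self):
        return self.position() >= self._goal

def run_auto(drive, arm):
    drive.start_profile(4.5)           # kick off A
    while drive.position() < 2.0:      # wait until condition
        time.sleep(0.005)
    arm.start_profile(1.2)             # kick off B: arm comes up mid-drive
    while not (drive.done() and arm.done()):
        time.sleep(0.005)
    print("both profiles finished; kick off C here")

if __name__ == "__main__":
    run_auto(ProfiledSubsystem(rate=3.0), ProfiledSubsystem(rate=2.0))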
Awesome questions, keep them coming! I love this stuff.
ranlevinstein
15-06-2016, 10:21
I've recently switched formulations to what we used this and last year, and I think the new formulation is easier to understand.
If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".
d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u
You then design the controller to control the original Ax + Bu (u = K * (R - x)), design an observer to observe the augmented state, and then use a controller which is really [K, 1].
We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
Thank you for your fast reply!
Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/attachment.php?attachmentid=17671&d=1419983380
Yes. It's just another way to add integral into the mix. I like it because if your model is performing correctly, you won't get any integral windup. The trick is that it lets the applied voltage diverge from the voltage that the robot appears to be moving with by observing it in the observer.
I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error
Isn't that a PI control + integral control of the voltage? This controller as far as I know should have integral windup. What am I missing?
I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasible), and used that to define a Kff.
The equation to minimize is:
(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
This means that you have 3 goals running around. The un-profiled goal, the profiled goal and the R that the feed-forwards is asking you to go to. I'd recommend you read the code to see how we kept track of it all, and I'm happy to answer questions from there.
The end result was that our model defined the feed-forwards constants, so it was free. We also were able to gain schedule the feed-forwards terms for free as well.
WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix, and you are looking for the u vector that minimizes the expression? How are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = AR(n) + Bu where u is the correct feed forward?
Can you explain how you observed the voltage?
We are going to try to implement http://arl.cs.utah.edu/pubs/ACC2014.pdf this summer.
This looks very interesting. Why did you choose this approach instead of all the other available methods?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor, and we are not sure whether to move to model based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge when we don't have a mentor who knows this. Do you have any advice?
Thank you very much!
AustinSchuh
16-06-2016, 02:32
Thank you for your fast reply!
Are the A and B matrices here the same as in this pdf?
https://www.chiefdelphi.com/forums/attachment.php?attachmentid=17671&d=1419983380
For this subsystem, yes. More generally, they may diverge, but that's a very good place to start.
I am a bit confused here. I integrated both sides of the equation and got:
u = constant * integral of position error + constant * integral of velocity error + constant * integral of voltage error
Isn't that a PI control + integral control of the voltage? This controller as far as I know should have integral windup. What am I missing?
It's a little bit more tricky to reason about the controller that way than you think. The voltage error term is really the error between what you are telling the controller it should be commanding, and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped, this should change if you have a profile), it will be the difference between the estimated plant u, and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.
The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.
You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate: I could think about it more easily.
WOW!
This is really smart!
I want to make sure I got it: Q is a weight matrix, and you are looking for the u vector that minimizes the expression? How are you minimizing it? My current idea is to set the derivative of the expression to zero and solve for u. Is that correct?
Did you get to this expression by claiming that R(n+1) = AR(n) + Bu where u is the correct feed forward?
Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs perfectly before trying it on a bot.
That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.
Can you explain how you observed the voltage?
You can mathematically prove that the observer can observe the voltage as long as you tune it correctly. This is called observability, and can be calculated from some matrix products given A and C. For most controls people, that is enough of an explanation ;)
Intuitively, you can think of the observer estimating where the next sensor reading should be, measuring what it got, and then attributing the error to some amount of error in each state. So, if the position is always reading higher than expected, it will slowly squeeze the error into the voltage error term, where it will finally influence the model to not always read high anymore. You'll then have a pretty good estimate of the voltage required to do what you are currently doing.
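Here's a toy numpy sketch of that intuition (hand-picked gains and a made-up model, not our actual filter):

import numpy as np

dt = 0.005
# Toy discrete model with states [position; velocity; voltage_error].
# The voltage error adds to the commanded voltage inside the model.
A = np.array([[1.0, dt, 0.0],
              [0.0, 0.96, 40.0 * dt],
              [0.0, 0.0, 1.0]])
B = np.array([[0.0], [40.0 * dt], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])
L = np.array([[0.5], [5.0], [0.8]])  # hand-picked stable observer gain

true_error = 1.5  # the plant secretly has 1.5 V of unmodeled disturbance
x = np.array([[0.0], [0.0], [true_error]])
x_hat = np.zeros((3, 1))

for _ in range(400):
    u = np.array([[0.0]])  # controller commanding nothing
    y = C @ x              # measure position only
    # Predict, then attribute the measurement surprise across the states.
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x = A @ x + B @ u      # plant evolves with its hidden error

print("estimated voltage error: %.2f V" % x_hat[2, 0])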
This looks very interesting. Why did you choose this approach instead of all the other available methods?
Also, how do your students understand this paper? There are a lot of things that need to be known in order to understand it.
Who teaches your students all this stuff?
My team doesn't have a controls mentor, and we are not sure whether to move to model based control or not. Our main problem with it is that there are a lot of things that need to be taught, and it's very hard to maintain the knowledge when we don't have a mentor who knows this. Do you have any advice?
Thank you very much!
The tricky part of the math is that robots can't move sideways. This type of system is known as a non-holonomic system. Good non-holonomic control is an open research topic, since the problem is nonlinear. This paper was recommended to me by Jared Russell, and the results in its results section are actually really good. It generates provably stable paths that are feasible. A-Star and all the graph based path planning algorithms struggle to generate feasible paths.
We have a small number of students who get really involved in the controls on 971. Some years are better than others, but that's how this type of thing goes. There is a lot of software on a robot that isn't controls. I'm going to see if the paper actually gets good results, and then work to involve students to see if we can fix some of the shortcomings with the model that one of the examples in the paper uses and help make improvements. I think that'll let me simplify the concept somewhat for them and get them playing around with the algorithm. I've yet to try to involve students in something this mathematically challenging, so I'll know more once I've pulled it off... I mostly mention the paper as something fun that you can do with controls in this context.
When I get the proper amount of interest and commitment, I sit down with the student and spend a significant amount of time teaching them how and why state space controllers work. I like to do it by rederiving some of the math to help demystify it, and having them work through examples. I've had students take that knowledge pretty far and do some pretty cool things with it. Teaching someone something this tricky is a lot of work. We tend to have about 1 student every year actually take the time to succeed. Sometimes more, sometimes less.
Doing model based controls without good help can be tricky. Most of the time, I honestly recommend focusing on writing test cases with simulations for simpler controllers (PID, for example) before you start looking at model based controls. This gets you ready for what you need to do for more complicated controllers, and if you were to stop there having learned dependency injection and testing, that would already be an enormous success. :) The issue is that most of this stuff is upper division college level material, and sometimes graduate level material. Take a subsystem on your robot, and try to write a model based controller for it over the off-season.
ranlevinstein
16-06-2016, 04:58
If you have an un-augmented plant dx/dt = Ax + Bu, you can augment it by adding a "voltage error state".
d[x; voltage_error]/dt = [A, B; 0, 0] * [x; voltage_error] + [B; 0] * u
You then design the controller to control the original Ax + Bu (u = K * (R - x)), design an observer to observe the augmented state, and then use a controller which is really [K, 1].
We switched over last year because it was easier to think about the new controller. In the end, both it and Delta U controllers will do the same thing.
I modeled it as you said and I got that:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.
I am a bit confused about why this is true because the voltage error is in volts and u is volts/seconds so you are adding numbers with different units.
It's a little bit more tricky to reason about the controller that way than you think. The voltage error term is really the error between what you are telling the controller it should be commanding, and what it thinks it is commanding. If you feed in a 0 (the steady state value when the system should be stopped, this should change if you have a profile), it will be the difference between the estimated plant u, and 0. This will try to drive the estimated plant u to 0 by commanding voltage. u will also have position and derivative terms. Those terms will decay back to 0 some amount every cycle due to the third term. This lets them act more like traditional PD terms, since they can't integrate forever.
The trick is that the integrator is inside the observer, not the controller. The controller may be commanding 0 volts, but if the observer is observing motion where it shouldn't be, it will estimate that voltage is being applied. This means that the third term will start to integrate the commanded voltage to compensate. If the observer is observing the correct applied voltage, it won't do that.
You can show this in simulation a lot more easily than you can reason about it. That's one of the reasons I switched to the new controller formulation with a direct voltage error estimate: I could think about it more easily.
I am still having some problems understanding it. If the system is behaving just like it should, then the integral of the voltage error will be zero, and there is just a PI controller. In my mind it makes a lot more sense to have:
u = constant * position error + constant * velocity error + constant * integral of voltage error
Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of position error in your controller.
What am I missing?
Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?
Bingo. We didn't get the equation done perfectly, so sometimes Kff isn't perfect. It helps to simulate it to make sure it performs perfectly before trying it on a bot.
That is the correct equation, nice! You then want to drive R to be at the profile as fast as possible.
I am having some problems with taking the derivative of the expression when I am leaving all the matrices as parameters. How did you do it? Did you get a parametric solution?
I was wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.
Thank you so much! Your answers helped my team and me a lot!:)
Mike Schreiber
16-06-2016, 11:26
For the intake, we've gotten really good at timing belt reductions, and the single reduction from there would have been required anyway since we needed to power the gearbox from the middle of the shaft. The VP wouldn't have actually made it much simpler.
.....
We've been running timing belt reductions as the first stage since 2013, and have really liked it. They are much quieter, and we don't see wear.
Is there anything special you're doing that I'm not seeing? It looks like calculated center-to-center distances with no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from the over-sized hex, what else are you doing to remove lash from the system?
Awesome robot - as always.
ranlevinstein
16-06-2016, 15:33
I defined a cost function to minimize the error between the trajectory every cycle and a feed-forward based goal (this made the goal feasabile), and used that to define a Kff.
The equation to minimize is:
(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
I managed to solve for u assuming Q is symmetric and the trajectory is feasible. I got:
u = (B^T *Q*B)^-1 * (r(n+1)^T - r(n)^T * A^T)*Q*B
Is that correct?
Travis Schuh
16-06-2016, 22:15
Is there anything special you're doing that I'm not seeing? It looks like calculated center-to-center distances with no tensioning. Does this reduce lash significantly compared to spur gears in the first stage? Aside from the over-sized hex, what else are you doing to remove lash from the system?
Awesome robot - as always.
We don't do any tensioning on our first stage belts; it is what you have described. I don't think there are huge backlash savings from having a belt vs a gear drive on that stage, because the backlash at that stage is greatly reduced as you go through the rest of the reduction, and the tooth-to-tooth backlash is minimal. There is also the added benefit of belts being quieter than gears at these speeds, but that is more of a nice-to-have.
Most of our backlash reduction comes from eliminating hex backlash. We also do the standard trick of running as large a chain reduction as you can on the last stage, and keeping that chain well tensioned. Going forward, we are going to be using #35 chain whenever we can for these reductions to avoid stiffness issues, which also helps with the controls.
AustinSchuh
16-06-2016, 23:57
I managed to solve for u assuming Q is symmetric and the trajectory is feasible. I got:
u = (B^T *Q*B)^-1 * (r(n+1)^T - r(n)^T * A^T)*Q*B
Is that correct?
import numpy

def TwoStateFeedForwards(B, Q):
  """Computes the feed forwards constant for a 2 state controller.

  This will take the form U = Kff * (R(n + 1) - A * R(n)), where Kff is the
  feed-forwards constant. It is important that Kff is *only* computed off
  the goal and not the feed back terms.

  Args:
    B: numpy.Matrix[num_states, num_inputs] The B matrix.
    Q: numpy.Matrix[num_states, num_states] The Q (cost) matrix.

  Returns:
    numpy.Matrix[num_inputs, num_states]
  """
  # We want to find the optimal U such that we minimize the tracking cost.
  # This means that we want to minimize
  #   (B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
  # TODO(austin): This doesn't take into account the cost of U
  return numpy.linalg.inv(B.T * Q * B) * B.T * Q.T
:)
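For a rough usage example (made-up 2-state matrices, using the function above, with numpy.matrix types to match it):

import numpy

A = numpy.matrix([[1.0, 0.005], [0.0, 0.98]])
B = numpy.matrix([[0.0], [0.1]])
Q = numpy.matrix(numpy.diag([1.0 / 0.01**2, 1.0 / 0.1**2]))

Kff = TwoStateFeedForwards(B, Q)
R_now = numpy.matrix([[0.0], [0.0]])
R_next = numpy.matrix([[0.001], [0.2]])
U_ff = Kff * (R_next - A * R_now)  # feed-forward voltage for this cycle
print(U_ff)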
kylestach1678
17-06-2016, 03:48
The equation to minimize is:
(B * U - (R(n+1) - A R(n)))^T * Q * (B * U - (R(n+1) - A R(n)))
This cost function is just the state error part of LQR, correct?
return numpy.linalg.inv(B.T * Q * B) * B.T * Q.T
I noticed that this solution ends up evaluating to the (pseudo)inverse of B when Q is a constant multiple of the identity matrix, which is the solution to R(n+1)=A*R(n)+B*u when u=Kff*(R(n+1)-A*R(n)). What is the reasoning behind using the LQR weighted solution instead of the simpler version?
thatprogrammer
17-06-2016, 20:17
You set your wheel velocity to 640 in your shooter code. I can't figure out what unit of measure this 640 is calculated in; RPM would be too slow while RPS would be too fast. What unit do you use, and is it related to your model based calculation of everything? (Do you have any tips for starting to learn how to run everything using models?)
kylestach1678
17-06-2016, 20:49
You set your wheel velocity to 640 in your shooter code. I can't figure out what unit of measure this 640 is calculated in; RPM would be too slow while RPS would be too fast. What unit do you use, and is it related to your model based calculation of everything? (Do you have any tips for starting to learn how to run everything using models?)
Radians per second, I would assume. Everything in standard units :D. Using radians makes the derivation of the models simpler, especially when the rotation eventually gets transformed into linear motion, and it is nice to be consistent across the board.
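(If it is rad/s, then 640 rad/s * 60 / (2 * pi) ≈ 6,100 RPM, which sounds about right for a shooter wheel.)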
AustinSchuh
27-06-2016, 22:56
I modeled it as you said and I got that:
acceleration = a * velocity + b * (voltage error) + b * u, where a and b are constants.
I am a bit confused about why this is true because the voltage error is in volts and u is volts/seconds so you are adding numbers with different units.
Nice catch. Maybe I wasn't clear, but U changed units from volts/sec to volts, and the integrator on the output of the plant disappeared.
I am still having some problems understanding it. If the system is behaving just like it should, then the integral of the voltage error will be zero, and there is just a PI controller. In my mind it makes a lot more sense to have:
u = constant * position error + constant * velocity error + constant * integral of voltage error
Maybe there is a problem with the velocity error part here, but I still don't understand how there won't be integral windup when you have the integral of position error in your controller.
What am I missing?
I *think* you are off by an integrator again.
u = Kp * x + Kv * v + voltage_error
So, if voltage_error = 0 (the model is behaving as expected), then you don't add anything. Ask again if I read too fast.
Also, I saw you are using the moment of inertia of whatever is being spun in your model. What units is it in, and how can I find it?
kg * m^2
I was wondering how the delta-u controller works when the u command gets higher than 12 volts, because then you can't control the rate of change of the voltage anymore.
Thank you so much! Your answers helped my team and me a lot!:)
You have the same problem with a normal controller; the linear assumption breaks down there too. We just cap the accumulator to ±12 volts.
To solve that all correctly, you'll want to use a Model Predictive Controller. They are able to actually take saturation into account correctly. Unfortunately, they aren't easy to work with. We haven't deployed one to our robot yet. (Go read up on them a bit. They are super cool :) That was one of my favorite classes in college, if not my favorite.)
It's been another busy week. Sorry for taking so long. I started a reply a week ago and couldn't find time to finish it.
AustinSchuh
27-06-2016, 23:00
This cost function is just the state error part of LQR, correct?
Yes. Since it doesn't have the U part of the LQR controller, it isn't provably stable, and I've seen ones which aren't...
I noticed that this solution ends up evaluating to the (pseudo)inverse of B when Q is a constant multiple of the identity matrix, which is the solution to R(n+1)=A*R(n)+B*u when u=Kff*(R(n+1)-A*R(n)). What is the reasoning behind using the LQR weighted solution instead of the simpler version?
Good catch. It worked, so we stopped? ;) I'd like to revisit that math this summer. It doesn't always return an answer which converges, which suggests to me that something is wrong with it.