#76
Re: FRC971 Spartan Robotics 2016 Release Video
We hook up the index pulse on the encoder so we get a very accurate pulse once per revolution. That pulse triggers DMA to capture the encoder value at that point in time, which lets us know the encoder value at that instant very accurately. Unfortunately, there will be something like 30 of these pulses through the range of motion on the arm. A slow-moving filter estimates which of these 30 pulses we saw, and uses that to zero to within 1 encoder tick.

We started doing this in 2015 and haven't looked back; it takes all the hard work out of zeroing joints with Hall effect sensors on our robot. We wrote the class to support this in 2015 and have been steadily improving it and adding features. All we need to do is have the software exercise the joint until it passes an index pulse, or have a human move it past an index pulse while disabled, and we are then fully calibrated.
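To make that concrete, here's a minimal sketch of the nearest-pulse bookkeeping, with made-up names and constants (not the actual 971 class):

```cpp
#include <cmath>

// Illustrative constants: distance the joint moves per encoder revolution
// (one index pulse per revolution), and the hand-calibrated position of one
// known index pulse.
constexpr double kDistancePerRevolution = 0.1;
constexpr double kMeasuredIndexPosition = 0.0971;

// `estimate_at_index` is a (noisy, filtered) absolute position estimate at
// the moment the pulse fired; `encoder_at_index` is the raw encoder value the
// DMA capture latched at that same moment. Returns the offset to add to raw
// encoder readings from then on.
double ComputeOffset(double estimate_at_index, double encoder_at_index) {
  // Index pulses sit at kMeasuredIndexPosition + n * kDistancePerRevolution;
  // round to find which one we just crossed.
  const double n = std::round((estimate_at_index - kMeasuredIndexPosition) /
                              kDistancePerRevolution);
  return kMeasuredIndexPosition + n * kDistancePerRevolution -
         encoder_at_index;
}
```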
#78
Re: FRC971 Spartan Robotics 2016 Release Video
If that didn't turn out to be correct, we were ready to fix it in software. We would have current-limited each control loop by looking at the velocity of the motor, backing out the back-EMF voltage, and using that to limit the current. I think that was on the original plan, but it wasn't an issue and got forgotten about. Four CIMs in the drivetrain help a lot as well.
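For reference, a minimal sketch of that kind of back-EMF-based voltage clamp, assuming a simple permanent-magnet DC motor model (the constants below are illustrative ballparks, not the team's numbers):

```cpp
#include <algorithm>

// Illustrative DC motor constants (roughly CIM-like, not measured values).
constexpr double kResistance = 0.09;  // Winding resistance, ohms.
constexpr double kKv = 46.0;          // Speed constant, (rad/s) per volt.
constexpr double kMaxCurrent = 60.0;  // Current we are willing to draw, amps.

// V = I * R + omega / Kv, so clamping the applied voltage to within
// kMaxCurrent * R of the back-EMF bounds the motor current.
double LimitVoltage(double requested_voltage, double omega) {
  const double bemf = omega / kKv;
  return std::clamp(requested_voltage, bemf - kMaxCurrent * kResistance,
                    bemf + kMaxCurrent * kResistance);
}
```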
https://docs.google.com/spreadsheets...it?usp=sharing

We look at the time to make a characteristic move (assuming that the motor is working against gravity but at steady state), and the holding voltage/power required to hold the arm at that set point. VP released awesome charts about motor life at various holding voltages this year. We targeted a peak of 4 volts holding voltage on all our joints. I'm thinking next year we should drop that down to closer to 3 volts, since we had to add fans to the shoulder motor and replace it a couple of times. We like to target ~1/2 second max for motions; much longer and you end up waiting on the robot too much. We started out by putting one 775 Pro on each joint and looking at what that was going to mean in terms of power dissipation and speed. The analysis showed that everything was fast enough that way, and we didn't have to look any further.

Over the past couple of years (really starting in 2014), we've been iterating our designs to remove common and known failure modes and weaknesses. We've focused on trying not to burn out motors, on rating belts, chains, and gears for the required load, and on putting weight into places where we see consistent failures. Our goal is to go through a season where we perform to our fullest potential without failure on the field.

One fascinating thing I learned this year is that for some subsystems, you should gain-schedule your controller based on whether you are sourcing power from your motor and using that to drive your load, or pulling power out of your load with your motor. This flips the efficiency. If you assume that the efficiency reduces the torque of your motor by ~5% per reduction and hard-code that into your model, it essentially means that when the motor decelerates the load, the torque is reduced. The physics contradicts this: when you decelerate the load, the load is putting power into your motor, and the gearbox has losses during that transaction. The result is that accelerations reduce effective motor torque, and decelerations actually increase effective motor torque (there's a toy sketch of this below).

I bring this up because we had a lot of trouble tuning the arm controller. The only way we could get smooth behavior when both lifting and lowering the arm was to design an "accelerating" controller and a "decelerating" controller and switch between the two depending on whether we were accelerating or decelerating. It was really cool to finally figure that out. I've seen this for probably close to a decade (I remember struggling in 2005 to tune a controller to go up and down nicely), but I hadn't ever gotten fully to the bottom of the issue or had a good physics explanation for what was wrong. I'm not sure our switching logic is right, though it was better than no switching. This summer, I want to have someone analyze how we switch between the two controllers and make sure the transition is continuous. I've got summer project ideas to keep the students and myself busy all summer.

(Yea, yea, we are working on releasing our code. We just finished the last code reviews, and are working on hosting it correctly. Code quality is important!)
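A toy model of that efficiency asymmetry (illustrative constants, and one reading of the physics described above, not FRC971's actual model):

```cpp
#include <cmath>

constexpr double kEfficiencyPerStage = 0.95;  // ~5% loss per reduction.
constexpr int kStages = 3;                    // Illustrative gearbox.

// Torque delivered to the load for a given motor torque. When the motor is
// sourcing power (torque and load velocity in the same direction), friction
// losses subtract from the delivered torque. When the load back-drives the
// motor (deceleration), the same losses now oppose the load's motion, so the
// effective braking torque at the load is larger, not smaller.
double TorqueAtLoad(double motor_torque, double gear_ratio,
                    double load_velocity) {
  const double eta = std::pow(kEfficiencyPerStage, kStages);
  const bool motor_sourcing = motor_torque * load_velocity >= 0.0;
  return motor_sourcing ? motor_torque * gear_ratio * eta
                        : motor_torque * gear_ratio / eta;
}
```

Baking eta into one plant model and 1/eta into the other, then switching on the sign of mechanical power, is one way to build the "accelerating"/"decelerating" controller pair described above.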
#79
Re: FRC971 Spartan Robotics 2016 Release Video
We use a potentiometer and index pulse to zero each joint. We do not have any limit switches, and we've been known to not put in hard stops. In 2014, one of the hard stops was the cRIO...

Let me try to write out an example with a pretend elevator. The encoder moves 0.1 meters per revolution. Someone went and calibrated the elevator and told you that at 0.0971 meters, they found an index pulse. This means that as you lift and lower the elevator, you will see index pulses at 0.0971 meters, 0.1971 meters, 0.2971 meters, 0.3971 meters, 0.4971 meters (I think you see the pattern). They also calibrate the potentiometer so that it reads out the approximate height. Also, pretend that it has something like 0.02 meters of noise in the reading. So, if you are at 0.1 meters, you might see readings of 0.09, 0.1, 0.11, 0.12, 0.08. Welcome to real life. It sucks at times.

So, we initiate a homing procedure by telling the elevator to move 0.2 meters towards the center of the range of travel. The procedure needs to be designed to not break your robot, but move at least 0.1 meters to find an index pulse. As we are moving, we see a pulse. We then immediately look at the pot, and it reads 0.3100 meters. The closest index pulse is 0.2971 meters, so we now know that whatever the encoder value was at the index pulse, it really should have read 0.2971 meters. So, compute that offset, and you are homed!

DMA is a really cool feature on the FPGA of the roboRIO where you can set up a trigger and cause sensors to be captured. We have configured it to trigger when an index pulse rises, and save the encoders, digital inputs, and analog inputs. The FPGA does this within 25 nanoseconds. This lets us record the encoder and pot value at the index pulse.

The fun part comes when the noise on your potentiometer is ~0.05 meters. We see this on our subsystems. If you get unlucky, you might pick the wrong index pulse, and be off by 0.1 meters (!). We can fix this by filtering. The encoder should read what the pot reads, with an offset depending on where the system booted. You can take (pot - encoder) as the "offset" quantity and average that over a long period (2 seconds is what we use). Add that filtered value back to the current encoder value, and, assuming Gaussian noise and all that jazz, you will have removed enough noise to make everything work again.

More concretely, say we are sitting at 0.05 meters. We get the following encoder, pot readings (I'm a horrible random number generator, FYI):

Encoder, pot
0.0, 0.0
0.0, 0.1
0.0, 0.06
0.0, 0.02
0.0, 0.08

From averaging it all together, it looks like the pot started out at a bit above 0.05, and the encoder is at 0. Then, we get the following measurement in: encoder 0.05, pot 0.16. Before, we would say that this is closer to the 0.1971 value, so we would round there. Now, we would say that since the encoder moved by 0.05, and we think the pot started around 0.05, we are likely at about 0.1. The nearest pulse is 0.0971, so that's the zero we actually saw.
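A minimal sketch of that averaging filter plus nearest-pulse rounding, with assumed names and a 100 Hz loop (the real estimator lives in the released code):

```cpp
#include <cmath>
#include <deque>
#include <numeric>

constexpr double kIndexDifference = 0.1;       // Meters between index pulses.
constexpr double kMeasuredIndexPosition = 0.0971;
constexpr size_t kWindowSize = 200;            // ~2 s of samples at 100 Hz.

class PotAndIndexZeroingEstimator {
 public:
  // Called every cycle with the raw sensor readings.
  void Update(double encoder, double pot) {
    offsets_.push_back(pot - encoder);
    if (offsets_.size() > kWindowSize) offsets_.pop_front();
  }

  // Filtered absolute-position estimate: encoder plus the averaged
  // (pot - encoder) offset, which beats the pot noise down.
  double Position(double encoder) const {
    if (offsets_.empty()) return encoder;  // No pot samples yet.
    const double offset =
        std::accumulate(offsets_.begin(), offsets_.end(), 0.0) /
        static_cast<double>(offsets_.size());
    return encoder + offset;
  }

  // Called with the DMA-latched encoder value when an index pulse fires.
  // Rounds the filtered estimate to the nearest index position and returns
  // the exact encoder offset to use from then on.
  double OffsetAtIndexPulse(double encoder_at_index) const {
    const double estimate = Position(encoder_at_index);
    const double n =
        std::round((estimate - kMeasuredIndexPosition) / kIndexDifference);
    return kMeasuredIndexPosition + n * kIndexDifference - encoder_at_index;
  }

 private:
  std::deque<double> offsets_;  // Recent (pot - encoder) samples.
};
```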
#80
Re: FRC971 Spartan Robotics 2016 Release Video
Do you do the calibration during disabledPeriodic (drive team cycles the arm), upon first motion, or some other way? Is it done just once, at specified intervals, or continuously?
#81
Re: FRC971 Spartan Robotics 2016 Release Video
I must have been a dumb 5-year-old.
#82
Re: FRC971 Spartan Robotics 2016 Release Video
We've got our own pub-sub framework, so it's hard to map to WPILib concepts. The "human zeroing", where the robot is moved through the range of motion by hand, happens while disabled. The automated zeroing (robot has a sequence of actions that find index pulses) happens if the robot enters enabled mode but is not all zeroed. We rarely use the automated zeroing, but if the robot reboots without warning or someone forgets to zero at startup, we can continue the match.
#83
Re: FRC971 Spartan Robotics 2016 Release Video
Your CAD model is amazing!
A couple of questions: How many students and mentors are doing CAD? At what point in the build season do you start fabrication? Are your students learning CAD outside of robotics?
#84
Re: FRC971 Spartan Robotics 2016 Release Video
http://frc971.org/content/2016-software
#85
Re: FRC971 Spartan Robotics 2016 Release Video
We originally switched to the VEX EDR mecanums because we needed the weight for the hanger; they ended up being somewhere around a pound or so lighter. The EDRs were a bit grippier, but I didn't notice much of a difference in intaking performance between SVR and Champs, and we didn't change our intake speeds. We also switched the drive wheels for weight purposes. We had a few issues with the drive wheels rubbing on our bellypan due to dents in the aluminum from the defenses, but we were also having some of those same problems with the first drive wheels.
#86
Re: FRC971 Spartan Robotics 2016 Release Video
We send out our drivebase for sponsor manufacturing around week 2 and usually get a two-week turnaround on those parts. The rest of the in-house stuff we start after the completion of superstructure CAD, which is ideally the beginning of week 3, but often gets pushed back...

Some of our students choose to participate in our summer third-robot project, where a lot of the focus is on developing CAD skills (not sure if that counts as outside robotics, but it isn't during the season). Other students have taken intern positions where they use their CAD skills, some are enrolled in our school's engineering class where they learn basic CAD, and some do projects on their own for fun or to specifically learn SolidWorks.
#87
Re: FRC971 Spartan Robotics 2016 Release Video
I had some questions about your closed-loop driving as it seems very interesting!
Sorry if any of the questions are strange or ignorant; I'm not super familiar with C++ or some of its more advanced syntax and features, so following your code has been somewhat difficult. (That said, it is written very well and is pretty well commented.)
#88
Re: FRC971 Spartan Robotics 2016 Release Video
You stated earlier that you get 1/16" backlash at the end of the long arm. My calculations put that at around 0.1 degrees of backlash at your shaft. How are you able to achieve such stellar precision using only Vex gears and chain?
#89
Re: FRC971 Spartan Robotics 2016 Release Video
I have heard a few whispers that you run custom-sized shaft in order to help achieve such precision. Is this true?
#90
Re: FRC971 Spartan Robotics 2016 Release Video
SSDrivetrain is used for more traditional driving (go 1 meter forwards). It has motion profiles on the inputs, which can be disabled if you want. This lets us send it a goal of +1 meter and go work on other things while it does it. Or, it lets us feed the goal in directly when we are doing vision alignment and want the controller dynamics without any sort of profile.

PolyDrivetrain runs the teleop driving. It is mostly using feed forwards, but has a proportional loop to do small corrections. It understands the robot's physics, and uses that knowledge to do constant-radius turns. The combination of the feed forwards and the feedback makes the driving experience pretty connected.

Both of these are fed by a 7-state Kalman filter which estimates the positions, velocities, and disturbance forces of each side of the drivetrain. Controls can be split into two worlds: estimation and control. You need good sensors or algorithms to figure out what your system is doing, and then you can apply the control algorithms. Once you've split the world this way, you can take a nice estimator and use it to feed multiple controllers. Generally speaking, the controller ends up being a matrix to multiply into the error to get an output, which is stateless.
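A minimal sketch of that "controller is a matrix" idea, shown generically (the real drivetrain controller is 7-state; this is not FRC971's code):

```cpp
#include <array>

// u = K * (r - x_hat): the whole control law is one matrix multiplied into
// the error between the goal r and the state estimate x_hat.
template <int States, int Inputs>
std::array<double, Inputs> Control(
    const std::array<std::array<double, States>, Inputs>& K,
    const std::array<double, States>& r,
    const std::array<double, States>& x_hat) {
  std::array<double, Inputs> u{};
  for (int i = 0; i < Inputs; ++i) {
    for (int j = 0; j < States; ++j) {
      u[i] += K[i][j] * (r[j] - x_hat[j]);
    }
  }
  return u;
}
```

Because the multiply is stateless, feeding one estimator into several controllers, or swapping gain matrices on the fly, is just choosing which K to apply each cycle.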
We have 4 robots (assuming I can count...) all driving with the same code. There is a configuration structure passed into the drivetrain controller class for each robot which contains that robot's physics model and other configuration bits. This means that the year-specific code is 100 lines, all of it boilerplate. Take a look in //y2016/control_loops/python/ for the models of each of our subsystems.

We have our own framework for designing robot code. Our code is broken up into somewhere around 10 processes, each responsible for one part of the robot: one process is autonomous mode, one is the joystick code, one is the hardware interface, one is the drivetrain, one is the shooter, one is the vision UDP listener, one is the superstructure code, etc. Those processes communicate through our own PubSub message-passing code. That means, for example, that the drivetrain will listen for Goal and Position messages, and publish Output and Status messages (look in //frc971/control_loops/drivetrain:drivetrain.q for the actual definitions). This keeps us resilient to crashes, and keeps the interfaces between modules very well defined. For example, with zero changes outside the hardware interface layer, we switched from running all our code on a BBB in 2014 to it all running on the roboRIO. We are also able to generate simulated Position messages and listen for simulated Output messages in our unit tests, so that we can exercise the code without risking the real robot.
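A toy sketch of that Goal/Position in, Output/Status out shape (the struct names mirror the post; the control law and the lack of real queues here are stand-ins, not their framework):

```cpp
#include <cmath>
#include <optional>

// Message shapes modeled on the post; the real definitions live in .q files.
struct Goal { double position = 0.0; };
struct Position { double encoder = 0.0; };
struct Output { double voltage = 0.0; };
struct Status { double estimated_position = 0.0; bool at_goal = false; };

// One control-loop iteration: consume Goal and Position, produce Output and
// Status. Because the interface is messages-only, a unit test can inject
// Position and inspect Output with no hardware present.
void Iterate(const std::optional<Goal>& goal, const Position& position,
             Output* output, Status* status) {
  const double estimate = position.encoder;  // Stand-in for the estimator.
  const double goal_position = goal ? goal->position : estimate;
  output->voltage = 12.0 * (goal_position - estimate);  // Stand-in P loop.
  status->estimated_position = estimate;
  status->at_goal = std::abs(goal_position - estimate) < 0.01;
}
```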
We like the flexibility of re-calculating every time we want to move. To do this, we need to advance the requested position each cycle of the control loop, and the only way to guarantee that is to do it in the control loop (there's a sketch of this pattern below). (Last year, for our drivetrain, we had another process which was calculating the profiles. The added coordination overhead of interacting with that process was enough that we decided to push the profile down into the controller process.) This also means that the joystick code really just sends out goals when buttons are hit, and lets the underlying code do all the rest. //y2016/joystick_reader.cc has all the joystick code, and it's pretty easy to read.

The motion profiles are cheap to calculate. The 7x7 matrix multiplies do add up, though. It turns out a large chunk of CPU on our robot goes into context switches between the various tasks. Our drivetrain code uses something like 6% CPU, and our shooter uses something like 3%. Most of the CPU on our robot goes into either the logger (20%-ish) or WPILib (50%).
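A sketch of what advancing a profile inside the control loop can look like, using a simple trapezoidal profile (assumed structure, not their implementation):

```cpp
#include <algorithm>
#include <cmath>

struct ProfiledGoal { double position; double velocity; };

// Advance the commanded state one control-loop cycle toward `goal`, staying
// within velocity/acceleration limits. Re-planning mid-move is free: just
// change `goal` and keep stepping every cycle.
ProfiledGoal Step(ProfiledGoal s, double goal, double max_velocity,
                  double max_acceleration, double dt) {
  const double to_go = goal - s.position;
  // Fastest speed from which we can still stop exactly at the goal
  // (v^2 = 2 * a * d), capped at the velocity limit.
  const double stopping_velocity = std::copysign(
      std::sqrt(2.0 * max_acceleration * std::abs(to_go)), to_go);
  const double target_velocity =
      std::clamp(stopping_velocity, -max_velocity, max_velocity);
  // Slew toward that velocity within the acceleration limit, then integrate.
  s.velocity += std::clamp(target_velocity - s.velocity,
                           -max_acceleration * dt, max_acceleration * dt);
  s.position += s.velocity * dt;
  return s;
}
```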
If you can get a Linux box set up, do try to run the tests. It's pretty cool to be able to exercise our collision detection code, zeroing code, and other test cases. (bazel test //y2016/control_loops/...)

We are huge believers in code review and test-driven development. I don't think we could do what we do without both of those, and they help us involve students at all skill levels in the process while maintaining the quality and reliability we require of our code.