Key to a successful Complex Auto

After watching our programmer slave away for hours over a two-cube autonomous mode and never really getting to use it in competition, I realized I hadn't appreciated just how many intricacies there are to being successful in this area.
Our team is pretty new to the complex autonomous mode thing.
For the first time ever, our robot had working X/Y gyro stabilization, encoders to track travel distance and arm position, and sensors to tell when a cube was in the intake.
Even with all of these more advanced abilities (for our team, anyway), we still had a dang hard time getting something working properly.

More specifically, even when we were really close to having something working, we'd run the program once and the robot would do about what we wanted; then we'd run it again and the error would be much larger, putting the bot off course.
I've seen some pretty impressive autos this season that look incredibly accurate, which is why I'm writing this post.

For those that have experience and success with creating more advanced routines, what are the sorts of things you take into account when building and coding a robot for an advanced auto?
How do you prevent things like chassis drift, error in sensors, and other minor amounts of error that add up over the 15 seconds?


To succeed you must first understand failure.

What are the limits of your sensors?
What happens when parts break?
What happens if things happen more slowly or more quickly than expected, or with less accuracy?

I think the first step is perfecting a set of auton building blocks:

  1. Go straight for a specified distance. Get this working using PID first, then add a motion profile, then add the gyro to go perfectly straight along a vector (a rough sketch of this step follows the list).

  2. Turn to a specified angle. Get this working using PID first, then add motion profile.

  3. Test and validate all the other subsystem commands like set elevator position, intake on for a specified time, etc. Then add motion profile to arm/elevator movement to make them even more precise and smooth.

  4. Add sensors wherever steps are not reliable: limit switches, IR sensors, etc., to detect when a cube is present. Tweak the commands from step 3 accordingly.

  5. Start stitching together the steps above to build the first iteration of your auton routines. Add commands in parallel when possible to save time, e.g. raise the elevator while driving.

  6. Bump into or follow walls to square up after long sequences of commands.

  7. Don’t reset your gyro for the entire 15 seconds. Update all of your turn and drive-straight-along-a-ray commands to work from an absolute gyro angle initialized at the start of auton. This cuts down on lots of little errors when several movements are required.

  8. Start making things smoother and faster… follow curved/spline paths.
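To make step 1 (and the absolute-heading idea from step 7) concrete, here is a minimal, proportional-only sketch of a drive-straight building block with gyro correction. It assumes WPILib-style Encoder, Gyro, and DifferentialDrive objects constructed elsewhere in your robot code; the gains, caps, and tolerance are placeholders you would tune, not real values, and a full implementation would add I/D terms and a motion profile as described above.

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;
import edu.wpi.first.wpilibj.interfaces.Gyro;

// Minimal sketch: drive a specified distance while holding an absolute field heading.
// All gains and limits are placeholders; sensor and drivetrain objects are assumed
// to be created elsewhere.
public class DriveStraightStep {
    private static final double kDistanceP = 0.05;       // output per inch of distance error
    private static final double kHeadingP = 0.02;        // output per degree of heading error
    private static final double kToleranceInches = 1.0;

    private final double targetInches;
    private final double targetHeadingDeg; // absolute heading; the gyro is never reset mid-auto

    public DriveStraightStep(double targetInches, double targetHeadingDeg) {
        this.targetInches = targetInches;
        this.targetHeadingDeg = targetHeadingDeg;
    }

    /** Call once per loop; returns true when the robot is within tolerance. */
    public boolean run(Encoder driveEncoder, Gyro gyro, DifferentialDrive drive) {
        double distanceError = targetInches - driveEncoder.getDistance();
        double headingError = targetHeadingDeg - gyro.getAngle();

        double forward = clamp(kDistanceP * distanceError, -0.6, 0.6); // cap speed while tuning
        double turn = clamp(kHeadingP * headingError, -0.3, 0.3);      // gentle steering correction

        drive.arcadeDrive(forward, turn);
        return Math.abs(distanceError) < kToleranceInches;
    }

    private static double clamp(double value, double low, double high) {
        return Math.max(low, Math.min(high, value));
    }
}
```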

A few keys:

It starts with rock solid drivetrain sensors. Your distance-traveled and robot angle need to measure very accurately. This isn’t just about expensive sensors - it’s about robust wiring, proper placement, and a tight mechanical system with as few variable losses as possible.

Once you have this, build a well-tuned drivetrain movement control algorithm. Rotate-x-degrees, PID, path planner, whatever you want. Regardless, you need to be able to drive and place the robot with inch-level or few-degree precision. Use as few “fudge factors” as possible: get it so when you punch “5 feet” into a variable, you actually go forward 5 ft, +/- 1". Every time. The field will already have tolerance in the placement of parts, and the robot won’t start in the exact same spot every time… you don’t want software adding to the tolerance stackup.

Then, build a good architecture. Command based, some custom setup, something. Make it so you write your auto in many reusable pieces. Tune each piece separately to work really well, then combine them together like legos. Ideally, you are rarely, if ever, copy-pasting code exactly.
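As one illustration of the “Lego” idea under WPILib’s (2018-era) command-based framework, a routine can be little more than a sequence of individually tuned commands. The sub-commands named below (DriveDistance, TurnToAngle, SetElevatorHeight, OuttakeCube) are hypothetical reusable pieces that would live elsewhere in your codebase, and the distances, angles, and times are placeholders.

```java
import edu.wpi.first.wpilibj.command.CommandGroup;

// Hypothetical auto routine assembled from reusable, individually tuned commands.
// None of the sub-commands are defined here; they stand in for whatever building
// blocks your team has already validated.
public class ExampleSwitchAuto extends CommandGroup {
    public ExampleSwitchAuto() {
        addSequential(new DriveDistance(36.0));   // inches, placeholder
        addSequential(new TurnToAngle(45.0));     // absolute degrees, placeholder
        addParallel(new SetElevatorHeight(20.0)); // raise the elevator while the next drive runs
        addSequential(new DriveDistance(60.0));
        addSequential(new OuttakeCube(0.5));      // seconds, placeholder
    }
}
```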

Another tip - start running the auto routine slow. Make sure you can consistently accomplish it slowly. Like, run the exact same routine at least 5 times, and make sure it hits 100%. Only then, start bumping the speed up slowly. See which things start to go wrong. Does your arm not go up fast enough, and invalidate some timing? Do your drivetrain PID’s start to overshoot? Starting slow allows you to ensure the fundamental action is working first, then tackle speed-related problems one-by-one.

This year was our first with pretty decent autos, and it shows we have learned a lot from our previous two seasons. Most importantly, drivetrain encoders MUST WORK! In 2016 we used MAG encoders that we had horrible luck with (though I know others use them just fine, so I won’t diss them). Last year we used the US Digital E4T encoders that come with AM gearboxes, and only AFTER the season did we realize the dozens of hours we spent on autos were wasted because the encoders weren’t shielded and would get misaligned.

We also moved to using Jaci’s Pathfinder, and boy is it amazing. We tried making our own position-tracking libraries but realized there was too much accumulated error, so we have been making modifications to Jaci’s library as we find we need more features.

Another important part is making sure the entire robot just works. In 2017 we had lots of mechanical issues that took away a lot of auto time. This year the practice bot has been up and running almost 24/7 with autos, with maybe a couple of hours of fixes when it breaks, so we have been able to put in over a hundred hours this year, no problem.

Speaking from experience:
1.) Start small and grow it. If you start with something like “drive in a straight line to a setpoint” you will potentially have half of your driving auto done.
1a.) Test this thoroughly and under different circumstances once it has been made robust enough.
2.) Don’t be afraid to make changes (but be sure to have a backup of the code easily accessible!). We are going into our third competition at worlds and our auto will be going through its third full change in production code. We did this because we could not get enough time with a functional bot (*something, something chain in tube something, something bad idea *) to put 1a into practice, but it has been a great learning experience.
3.) Practice field time is valuable. As you transition from a practice bot to your competition bot things will undoubtedly change that you didn’t account for. If you can, definitely get the bot out there.
4.) PID and motion profiling aren’t everything. Sometimes simple is better. A logarithmic curve has taken much less time to tune than a PID in my experience (a rough sketch of what I mean follows this list).
5.) Modularize. This is more good programming in general than autonomous specific. Modularizing code helps to eliminate errors and cut down on total development time.
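Regarding point 4, here is one way a simple logarithmic speed curve might look. The shape, constants, and names are illustrative guesses, not the actual implementation described above: the output tapers off as the remaining distance shrinks, with a floor so the robot keeps moving near the target.

```java
// Illustrative logarithmic speed curve for driving to a distance setpoint.
// Constants are placeholders to tune on the real robot.
public final class LogDriveCurve {
    private static final double kMaxOutput = 0.8;      // cap when far from the setpoint
    private static final double kMinOutput = 0.15;     // just enough to keep moving near the end
    private static final double kScaleInches = 24.0;   // distance at which output reaches the cap
    private static final double kToleranceInches = 1.0;

    /** Returns a signed drive output for the remaining distance to the setpoint. */
    public static double output(double remainingInches) {
        double magnitude = Math.abs(remainingInches);
        if (magnitude < kToleranceInches) {
            return 0.0; // close enough, stop
        }
        double normalized = Math.log1p(magnitude) / Math.log1p(kScaleInches);
        double speed = kMinOutput + (kMaxOutput - kMinOutput) * Math.min(1.0, normalized);
        return Math.copySign(speed, remainingInches);
    }

    private LogDriveCurve() {}
}
```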

I know this isn’t exactly what you asked for but I hope some of it helps. Some of the code is still messy as we are in the middle of refactoring bits and gearing up for worlds but our GitHub is public if you want to take a look at what we have done.

The key to a successful auto is to design the robot such that auto is as simple as possible.

Drivetrains that don’t slip and turn repeatably, intakes that are “touch-it, own-it”, and scoring mechanisms that use simple sensors and well-encapsulated automation are common features you will notice on most robots that are regularly pulling off complex auto modes this year.

A couple of thoughts on coaching young programmers:

  1. A routine is reliable only when each individual step is reliable. Resist the temptation to skip ahead and write/tune your orchestrated steps too early.

  2. Many of our bugs are caused by our minds’ limitation in remembering the many different units our code is based on (RPM, rotations/sec, velocity, acceleration, ms, radians, frequency, etc.)… very quickly, we’d forget which constants are in which unit of measure. Make them unmistakable with variable names like targetSpeedInMetersPerSec, or at least comment them diligently (see the sketch below).
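For what it’s worth, here is a sketch of what unit-suffixed constants might look like. The names and values are made up for illustration; the point is only that the unit rides along with the name.

```java
// Illustrative constants with the unit baked into the name, so a value can't be
// silently interpreted in the wrong unit. Values are placeholders.
public final class ExampleConstants {
    public static final double kMaxSpeedMetersPerSec   = 3.5;
    public static final double kMaxAccelMetersPerSecSq = 2.0;
    public static final double kWheelDiameterInches    = 6.0;
    public static final double kTurnToleranceDegrees   = 2.0;
    public static final double kIntakeTimeoutSeconds   = 1.5;

    private ExampleConstants() {}
}
```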

Adding on to this, one of the biggest advantages of having your code be modular is its reusability. The reason most advanced software teams can iterate on various autons so quickly is that the individual steps are already done; it’s just a matter of linking them in the right order. Figure out the base actions every auton requires: for example, drive to distance, turn to angle, etc.

From the perspective of a non-programmer, the code needs to be tolerant of changing conditions in the real, physical world. Test your code in an environment as close to the competition field as possible, but be aware that physical things will be different on the real field, and the mechanical condition of your robot will change over time.

Example 1: We don’t have a real practice field, so we have to test our auto code in a variety of different settings, on different types of carpet. Drive train behavior can be very different on different types of flooring, or even on new vs worn carpet. You may have your auto tuned and working perfectly at home. When you place it on brand new field carpet, don’t be surprised if it behaves differently.

Example 2: Mechanical systems wear over time. It is not uncommon for us to notice that as the robot wears, the autonomous routines need tweaking.

Example 3: The fields are not perfect. Some autonomous routines work perfectly on the red side of the field, but don’t work on the blue side. Auto routines that worked perfectly at one competition don’t work at the next one. We often spend all the qualification matches tuning auto routines, hoping the bugs will be worked out so they work right for elims.

To my non-programmer eyes, it seems that using sensors which provide feedback on where the robot is relative to the field elements would be way more effective than using drive train encoders and gyros to follow a preselected path. Our programming team seems to think that this is hard though… :)

This is a little bit of a tangent, but I really wish the fields were more consistent- there’s no reason that the switch should be nearly an inch offset in some arbitrary direction…

That being said, having a robot which can mechanically tolerate inconsistencies helps with autos. Teams with wide intakes capable of pulling in a second cube from any angle, and from a variety of approaches, will likely have more luck writing a consistent autonomous.

The rules always state tolerances, normally +/- 1" for field dimensions. Always understand the bounds of what you have to work with.

But to follow on from that, my advice to our programmers is always that if we have to be that accurate (or should I say, intolerant), then we are going to have trouble. Obviously, if you need to be touching a wall to do some action (like put a cube in a switch), then yes, you need to be touching the wall. But if we need to be 1" or less sideways to be exact with our placement, then we will most likely fail, due to inconsistencies (between robots, between fields etc.).

So really, be accurate, but don’t make 100% accuracy your only chance of success.

Take-away: store your PID constants and encoder calibration constants somewhere, like a Constants class, so you can easily access and change them in one place without digging through your code. When you get to the venue, try your auto routines in practice matches or on the practice field, and tweak accordingly.

You may also be able to tune your autos to some carpet at home that’s “close enough” to the real thing. Setting up auto routines that don’t rely on getting exactly to the right place will help with robustness in many ways, this one being one of them.

Take-away: again, having system constants declared in one place can help with this. Try to define your autonomous commands so that they don’t rely on specific values, or so they rely on sensor feedback to determine that it worked. “Run the intake for 3 seconds” might work for you now, but when the wheels (and Power Cubes) are worn down and dirty, maybe that won’t work so well anymore. Better to have logic that says “run the intake until the sensor detects a cube is acquired”.
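As a sketch of that last idea, assuming the old (2018) command-based framework and a hypothetical Robot.intake subsystem with a set(speed) method, a sensor-terminated intake command with a timeout backstop might look roughly like this; the sensor, speeds, and timeout are placeholders.

```java
import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.command.Command;

// Sketch: run the intake until a sensor sees the cube, with a timeout as a backstop.
// Robot.intake is a hypothetical subsystem; the cube sensor is passed in from wherever it lives.
public class IntakeUntilCube extends Command {
    private final DigitalInput cubeSensor;

    public IntakeUntilCube(DigitalInput cubeSensor) {
        this.cubeSensor = cubeSensor;
        requires(Robot.intake);  // hypothetical intake subsystem
        setTimeout(3.0);         // give up after 3 s if the sensor never triggers
    }

    @Override
    protected void execute() {
        Robot.intake.set(0.8);   // run the rollers inward
    }

    @Override
    protected boolean isFinished() {
        // Done as soon as the sensor reports a cube, or when the timeout expires.
        return cubeSensor.get() || isTimedOut();
    }

    @Override
    protected void end() {
        Robot.intake.set(0.0);   // always stop the rollers
    }
}
```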

Again, having constants can help with this. Your auto code can read the FMS values and determine if you’re on the red or blue side. You could theoretically have it look up separate values to use for each side. Or, use a sensor if you can. “Drive 5.5 feet” should get you to the switch, but you could also say “Drive 6 feet or until the sensor says we’re there” and that should give you a bit of leeway.
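As a small sketch of the per-side lookup idea (the helper name and distances are made up), auto init could pick its constants based on the alliance reported by the Driver Station:

```java
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj.DriverStation.Alliance;

// Sketch: choose a side-specific distance at the start of auto. The values are
// placeholders; in practice you would measure both sides and tune each independently.
public final class SideSpecificDistances {
    public static double feetToSwitchFence() {
        Alliance alliance = DriverStation.getInstance().getAlliance();
        return (alliance == Alliance.Red) ? 5.5 : 5.6;
    }

    private SideSpecificDistances() {}
}
```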

My only caution is that every sensor you rely on is a physical part that can disconnect, degrade, stop responding, … The more complex you make your auto, the more you’ll pull your hair out when it doesn’t work and you’re in the pit trying to figure out why. Last year for Steamworks we tried to get really smart with our sensors, and we had auto code that did curved path following using encoders and gyro, then vision to find the peg target, then reading ultrasonics until the robot got to the wall, then waiting for an IR sensor to indicate the gear had been dropped on the peg. If all of those sensors weren’t working perfectly, the robot didn’t drop off the gear. We had a frustrating time at our first event trying to diagnose why things didn’t work. Eventually we turned off most of those sensors. The ultrasonics had flaky connectors, the IRs were sensitive to ambient spotlights, and even our encoders started to slip off.

So use sensors, yes, but have fallback plans in your code (if I don’t hear from my distance sensor, use the wheel encoders, and if I don’t hear from the wheel encoders, go by time), check that your sensors are wired correctly, and test them often.
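Here is a rough sketch of that fallback chain for a “drive until we reach the wall” step: trust the ultrasonic if its readings look plausible, fall back to wheel encoders if they appear to be counting, and as a last resort stop on elapsed time. The sensor objects, thresholds, and class name are placeholders, not the original team’s code.

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.Ultrasonic;

// Sketch of cascading fallbacks for deciding when a drive-to-wall step is done.
public class DriveToWallMonitor {
    private static final double kWallRangeInches  = 8.0;   // "close enough" to the wall
    private static final double kExpectedTravelIn = 60.0;  // encoder distance to the wall
    private static final double kMaxTimeSec       = 3.0;   // absolute worst-case cutoff

    private final Ultrasonic ultrasonic;
    private final Encoder driveEncoder;
    private final double startTime = Timer.getFPGATimestamp();

    public DriveToWallMonitor(Ultrasonic ultrasonic, Encoder driveEncoder) {
        this.ultrasonic = ultrasonic;
        this.driveEncoder = driveEncoder;
    }

    public boolean atWall() {
        double range = ultrasonic.getRangeInches();
        if (ultrasonic.isRangeValid() && range > 0.5) {
            // 1) Ultrasonic looks healthy: trust it.
            return range < kWallRangeInches;
        }
        if (Math.abs(driveEncoder.getRate()) > 0.1 || Math.abs(driveEncoder.getDistance()) > 1.0) {
            // 2) Ultrasonic is flaky but the encoder is clearly counting: trust distance.
            return Math.abs(driveEncoder.getDistance()) >= kExpectedTravelIn;
        }
        // 3) Neither sensor looks alive: fall back to elapsed time.
        return Timer.getFPGATimestamp() - startTime > kMaxTimeSec;
    }
}
```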

Also, anticipate failure cases and what you can do about them. Your IR sensor might work perfectly on your plywood mockup field element at home, but the real field element is made of Lexan, which it sees right through. Your ultrasonics worked fine at home but get damaged by rough contact with robots and field elements. Your camera vision system can get fooled by the giant spotlights shining down on the field. Your encoders can get hooked on field elements without some kind of skid plate to protect them. These are all issues we’ve had to react to; some we were able to anticipate in advance and plan around, others, not so much…

From my experience, while it is important to be able to overcome irregularities through your auto code, focusing on increasing the mechanical and electrical consistency of your robot will greatly reduce the difficulty of tuning your auto mode. Don’t get me wrong, it’s important to plan for the worst case, but assuming it’s always a ‘code problem’ can be far less effective than addressing the underlying issue. Your code should be able to handle a variety of situations, but that should not in any way lower the importance of electrical and mechanical robustness. There are simply so many variables in a complex auto mode that, to achieve success, a robust robot is needed in conjunction with robust code design.

A good auto (a good robot, for that matter) is an interplay between the hardware and the software. Make sure your team is planning for this and communicating from day 1. In our first year, our Stronghold robot was designed to fit under the low bar, so we had to package everything down tight. The autonomous team wanted to use a camera to track the tower. The two groups didn’t coordinate early on, which led to a surprise later (“What do you mean, you need space for a camera??”). We made it work, but it wasn’t as ideal as it might have been.

Clever software can often overcome hardware challenges. When our Stronghold robot took so much of a beating that its frame was bent and it was no longer driving straight, our auto team wrote a new “Drive Straight” command that used the gyro heading to apply drivetrain corrections.

Clever hardware can make the software easier. A “touch it own it” intake that will grab the game piece if all you need to do is drive vaguely toward it, will work much more successfully than a finicky intake that must be aimed just right to work.

The ZED and TX2 setup has opened up huge doors for us this year after some offseason development. We went to two early regionals, so we did not get to show much off, but we are excited to see how far we can push it for Houston Champs. I can’t in good faith recommend it, though; our programmers tell me it’s not easy.

As we learned from our quarterfinals match a few weeks ago, make sure you check your auto before you run it so it doesn’t run into the scale…

On a serious note, the off-season is definitely the time to work on autonomous. Going into the season with motion profiling, vision, and any other back-end pieces you want for auto already done, so the season is just a matter of putting the puzzle pieces together, has worked well for us in all but one match.

I heard, can’t remember where, that the Poofs were running a 2+4 with two traction wheels in the middle and 4 omnis. If so, how did that work out? I imagine path following was easier, but was normal matchplay (read: pushing matches and such) any harder or different?

Unrelated question: how do you guys find the right spot to place the robot before each match? For a complex auto to work, I’d imagine it needs a consistent starting position.

Sensors, PID loops, programming, sensors, gyros, etc. aside… learn from our mistake.

Build a friggen drive train that drives absolutely straight forward every $@#$@#$@#$@# time with absolute consistency. Then test it again. Then check to make certain all bolts are properly tightened - everywhere.

Start there. I know that is where we will start in 2019.

The key to 1678’s auto mode success has been to develop a skull that is tougher and more resilient than the wall upon which we bang our heads.

Hundreds of auto tests; some seasons get close to a thousand auto tests. Our 2016 auto teaser video was maybe attempt 50 at best, and likely closer to 75+, and it was done without vision to start. You can see the clock in the video; it’s not PM.
