Autonomy: How Did You Guys Do It?

From the looks of it, a lot of teams did the whole “drive forward for x seconds/feet and score” method. Others used line trackers, some might have used cameras.

How did you guys do it?

I’ve got a range finder on the front of the robot. I drive until I’m about 3ft away from the wall. Then switch over to the camera, and adjust to center on the peg.

Arm is only on timers… Hoping to add encoders soon.

Used the E4P quadrature encoders provided in the KoP. We measured how far the robot needed to drive for the different modes and plugged those distances into the code as encoder counts, and noted a few other counts so we could do the calculations without trial and error. We also noted the arm height (at our starting-position angle) needed to go through the peg. The program ends up driving the robot until the encoder count gets close to the target. When the robot stops moving and the code that moves the mechanisms into position returns a boolean saying it is complete, another stage is triggered that lowers the arm and shoots the tube out with the rollers.

Had to do some trial and error with drive speeds at competition so the lift gets out before the robot goes up to the peg.

Now, I was going to use the rangefinder to find whether we are on the fork or the stop. I was worried that, due to the metal backing, it would give a wrong reading.

Actually, they are awesome: good to within about an inch. For me the drive was geared too high so we coasted a lot, but I came up with a fix for that (drop the strafe wheel and do an endo :slight_smile: )

You need to do a bit of filtering, but it’s not a big deal.

Our autonomous uses the encoders on the SuperShifter transmissions. The software uses the encoder values to actively make the robot drive dead straight. We also know reliably how far the robot has driven from the encoders. Encoders on the arm allow reliable positioning of the arm. The claw motors let us adjust the attitude of the tube when we reach the peg and then eject it onto the target peg. We have six different programs, one for each peg height. At GSR, we only ever ran the two for the top pegs, but have the others just in case. We can also select a delay before we start, which proved very useful in eliminations at GSR when we were on an alliance with 175 and 176 which scored 3 ubertubes in many of our elimination matches. See for a match (semi-final 2) where all 3 of our alliance and 2 of our opponents scored in autonomous. Our robot is the middle blue robot which does not move until after the 4s delay we selected.


Encoders on our drive train, encoders on our elevator, camera tracking, gyro angle correction, and sonar sensor. Consistently scores the ubertop on the top peg every time.

We decided that dead reckoning would be enough for this year given that the autonomous period is not very dynamic. Although our encoder code is done, we have had hardware problems and they are not on yet. Thankfully, our robot is pretty balanced and drives very straight. We didn’t get an auto mode in until yesterday in WI and it only scored once (in a somewhat humorous fashion), but we will have one that should score pretty consistently week 4 at Midwest.

Once the hardware issues are fixed, the timed approach will be changed to a distance-based one, and a gyro will be added for turning around and getting ready. I don’t think our robot’s functions act quickly enough to attempt a double tube in 15 seconds.

We used the camera 100%. I have a feeling previous years’ vision targets have given the camera a bad name. The retro-reflective tape used this year is simply awesome to track, and it is the only target in my history that is not affected by the stage lighting used at the competitions. I hope that stuff is around for a long time to come.

Yup. Our autonomous works in normal lighting, pitch black, or intense lights. We love that stuff.

We used line sensors, a sonar, a gyro, and a backup way to make the auto move. Of course, the line sensors gave us so much hassle that we had to disable them and trust the sonar. It was great!!!

Our auton code drives straight for x seconds using gyro angle correction; rotates our arm through x distance using an encoder value; rotates the tube pitch; rolls the tube out onto the peg; and backs up.

we’re working on putting up two ubertubes.

We’re using nothing but line sensors, and it’s working excellently for us (backing up when it hits the horizontal piece of tape)!

We’re using a line tracker and ultrasonic range finder combo, with a gyro to keep us straight. In hindsight, the line tracker isn’t really that useful.

Our team was going to have encoders on our drive wheels, but we decided they wouldn’t be that helpful for the most part. As a result, our autonomous is: drive forward x seconds, move the arm down for y seconds, and retreat. We are using a makeshift encoder for the lift, so we will be able to make it go to the right height. Whether or not this code works will be determined on Thursday.

Playback of recorded data from one of ten driver-selected .xls files stored on the cRIO. The file is selected during Disabled mode using buttons on one of the driver joysticks. The data is recorded during early practice matches to get a “gross” autonomous play accomplished, and then edited via MS Excel for any fine-tuning needed over the next few days. The recorded data includes joystick positions, button status, and pSoc switch status. It is written to a lossless queue (FIFO) at 20ms intervals (the same rate at which it arrives from the Driver Station) and eventually written to a file in one fell swoop once the driver decides to stop recording data.

For matches, the desired autonomous play is selected by the driver. Playback data from the corresponding .xls file is then loaded into several arrays (all while in Disabled mode). At the start of Autonomous, the data is “played” out from those arrays into the appropriate subsystem VIs (drive, arm, etc.) at the same rate at which it was collected. Of course, there is some variability in the response of the mechanical system from match to match, but it is minimal. I guess you could call this a reasonably sophisticated open loop system.

Each link in our two-jointed tube-handling arm is controlled via independent PID loops. Pre-identified angular positions of each joint are contained in a lookup table of positions (home, feeding station, floor pickup, top center, etc.) stored in a .ini file on the cRIO similarly to the autonomous play files. A state machine monitors the status of buttons on the Driver Station and loads the angular positions into the PID setpoints when a state change occurs. The same state machine runs in both Autonomous as well as Teleop since the arm control VI doesn’t care if the inputs are coming from a human being or from recorded data.

We had a pretty good success rate hanging ubertubes on the top peg at Pittsburgh using this system. The main problem we had was in getting our claw clear of the tube before backing away from the rack; we sometimes de-scored ourselves. There are some drawbacks to using the approach described, but as they say in math classes - the proof is left to the reader.

Our autonomous uses one sonar on the left and one sonar on the front. Paired with our omnidirectional drive, the lane divider and driver station wall allow us to know exactly where we are. We also have a gyro to keep us oriented correctly; we were hoping to use two range finders up front to orient ourselves, but we never got that working fully. We are hoping to get a two-tube auto working Thursday at Lone Star.

Our autonomous followed the line until it sensed the tee, then gunned it for the last iteration of the while loop and stopped, which put us right at the wall. The “gunning” worked well, though it should have been a little less energetic. Since we had holonomic drive, we just corrected via strafing to keep on the line, thus remaining normal to the wall. We started raising the arm to its limits at the start so that it was in position when we hit the wall. We scored 8/10 on the top middle peg in qualification matches and 100% in eliminations, except for one case where the arm was intentionally disabled. We could do the Y, but almost all the alliances allowed us to go straight. Too bad autonomous did not offer more points.

This rookie programmer wrote autonomous mode on the last day at 9 PM :slight_smile:

Our arm’s highest extension is at roughly the height of the highest peg, and when we release the tube we don’t have to back up in order to score, which made things easier.

We navigate to the scoring grid using line sensors. I didn’t program for the Y because I figured 1) we should have our pick of starting positions, we can just choose the straight path and 2) it would distract us from getting the easier stuff working.
Currently we are just running the elevator motor up the whole time during autonomous, due to a weird problem with our limit switches in autonomous [speaking of which, would anybody like to Find the Bug that’s making our limit switches work in teleop but not autonomous? :smiley: I used C++.]. However, we recently switched our elevator motor to a more powerful one, which actually caused a belt to shear when it was run too far… Alarmed, I am now planning some careful testing and playing with the motor speed before we go out to a match and shear another belt. The idea is that if the elevator reaches the top just as we reach the scoring grid, we won’t damage anything.

We use the tape T to trigger scoring, which involves stopping the elevator motor and opening the gripper. We can drop the tube at just about any height and still score.

All very rudimentary and I am particularly irked that the limit switches are still proving noncompliant. We didn’t get very much [read: any] testing in of the whole system: at roughly 9 PM on ship night we got the line following stuff working, and then I successfully wrote and tested a switch statement to ensure that we don’t just drive right over the T before it has time to stop us. So theoretically it should all work: I am anxiously looking forward to our Week 4 regional to try it all out :smiley:

I’m currently trying to distract myself from the fact that the next regional for us is in like 2 and a half weeks, so if you’d like me to look over your code, I’d be happy to help debug it, granted I understand the language you send me ::rtm::
And that sounds pretty awesome. I’m still working on getting time to work on our team’s autonomous :frowning: