Ideas on making auton consistent?

Hey, so, from what I’ve seen at the past two tournaments, whenever I’m on the practice field, most of the teams there are trying to calibrate their autonomous mode. The fact of the matter is, the field is often inconsistent, PID values aren’t always perfect, and your drive is going to coast whether you like it or not. Even more frustrating, the field is often asymmetrical, and the wall is not always completely straight relative to the peg. So I want to start a thread where we can discuss ideas for making autonomous modes run more consistently (especially without vision, since many teams aren’t able to use it yet).

I’ll start off with a couple of things our team did.

Silicon Valley Regional, I’m fairly certain, had a slightly asymmetric field. Our side auton would work, but only on the red-alliance boiler side, and our center auton was always off, even though it had worked perfectly at SFR. I conferred with a couple of other teams during the tournament (254 and 5026), and they reported similar findings.

Near the end of the tournament, when our robot had to do the center peg, I developed an alignment tool for the dashboard. The idea was that you would move the robot to the center peg, turn it on (waiting for the gyro to calibrate), and then move it back to the wall. Once you were at the wall, a horizontal row of lights on the dashboard indicated how far off your angle was relative to when you turned the robot on at the peg. For instance, if the center light is on, you’re dead on and all set to go. If the inner-right light is on, you’re 2 degrees off; if the middle-right light is on, you’re 4 degrees off; and so on.
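In case it helps anyone replicate this, here’s a minimal sketch of that kind of indicator in Java, assuming a WPILib gyro and SmartDashboard boolean widgets as the “lights”. The key names, the 2-degree band width, and the gyro class are my assumptions, not necessarily what we actually used.

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class AlignmentIndicator {
    private static final double BAND = 2.0;  // degrees per light (assumed)

    private final ADXRS450_Gyro gyro;        // zeroed while the robot sits at the peg

    public AlignmentIndicator(ADXRS450_Gyro gyro) {
        this.gyro = gyro;
    }

    /** Call periodically while the robot sits against the wall. */
    public void update() {
        // How far the robot has rotated since it was powered on at the peg.
        double error = gyro.getAngle();
        SmartDashboard.putBoolean("Align Center",     Math.abs(error) < BAND);
        SmartDashboard.putBoolean("Align Right 2deg", error >= BAND && error < 2 * BAND);
        SmartDashboard.putBoolean("Align Right 4deg", error >= 2 * BAND);
        SmartDashboard.putBoolean("Align Left 2deg",  error <= -BAND && error > -2 * BAND);
        SmartDashboard.putBoolean("Align Left 4deg",  error <= -2 * BAND);
    }
}
```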

Anyone got anything else?

Pray

Loved seeing you guys at SVR, definitely the right choice for first pick! I was sad that 115 couldn’t face you in the finals.
One thing that I saw Team 8 do (or appear to do) was that they would ram into the peg, wait 2 seconds, and if the gear had not yet been moved, back up and try again. That may not work for champs if you want a 2-rotor auton, but it is a good way to ensure consistency.
1072 also had a basic vision-based auton working where we would just detect the two shiny rectangles and move left or right based on where their center was. It worked well at home, but we had too many cameras to use them at SAC. Something similar could be done easily using GRIP, I would think.
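For anyone who wants to try the “steer off the midpoint of the two targets” approach, here’s a rough Java sketch. It assumes some vision pipeline (GRIP, a Pixy, whatever) has already handed you the x-coordinates of the two target centers; the image width, deadband, and turn power are made-up numbers.

```java
/**
 * Bang-bang steering off the two retro-reflective targets: find the midpoint
 * of the two detected rectangles and turn until it sits near the image center.
 */
public class SimpleVisionSteer {
    private static final double IMAGE_WIDTH = 320.0; // pixels (assumed camera resolution)
    private static final double DEADBAND    = 15.0;  // pixels of acceptable error
    private static final double TURN_POWER  = 0.3;   // open-loop turn command

    /** Returns a turn command in [-1, 1] given the two target center x-values in pixels. */
    public double turnCommand(double leftTargetX, double rightTargetX) {
        double targetCenter = (leftTargetX + rightTargetX) / 2.0;
        double error = targetCenter - IMAGE_WIDTH / 2.0;
        if (Math.abs(error) < DEADBAND) {
            return 0.0;  // close enough, keep driving straight
        }
        return error > 0 ? TURN_POWER : -TURN_POWER;
    }
}
```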

My team spent a lot of time in the offseason working with the navX libraries and GRIP. We can score a gear left, right, or center pretty consistently (in our 8 matches yesterday we got a gear on a peg in auto in 7 of them). We do this by driving in a straight line, using the navX to keep us from drifting. Then we turn to a PID angle setpoint to face the goal. Then vision kicks in. This took some tuning, but the speed of the robot varies inversely with the area of the two retro-reflective targets, so as we get closer the robot slows down. The robot stops when it "detects a collision", which is a method in the navX libraries that we have found to work very well.

Using vision and the collision detection allows for a lot of error in robot setup position and field setup, and it has proved quite reliable.
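The inverse-area speed scaling is easy to sketch. This is my own rough take, not this team’s code; the gain, clamps, and units are all assumptions that would need tuning.

```java
/**
 * Scale the forward command inversely with the apparent size of the vision
 * targets: far away (small area) means drive fast, close up (large area)
 * means crawl in. Clamp the result so it never stalls or runs away.
 */
public class ApproachSpeedScaler {
    private static final double SPEED_GAIN = 2000.0; // output ~= gain / area (assumed)
    private static final double MAX_SPEED  = 0.7;
    private static final double MIN_SPEED  = 0.2;

    /** Forward command given the combined pixel area of the two targets. */
    public double forwardCommand(double targetAreaPixels) {
        double speed = SPEED_GAIN / Math.max(targetAreaPixels, 1.0);
        return Math.max(MIN_SPEED, Math.min(MAX_SPEED, speed));
    }
}
```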

Could you tell us a bit more about the "detects a collision" feature? It sounds really neat.

However, I couldn’t find any information about it in the navX documentation. Specifically, I just looked through the AHRS library Javadoc at Overview (navx_frc API) and couldn’t find any methods that I understood to be trying to "detect a collision."

Then again, maybe I’m just looking in the wrong place, as a "collision detection" feature is mentioned at Product not found!

Can you provide a little further information to point us in the right direction?

Thanks!

We did do that, indeed (the gear slider made it really easy). The first attempt used vision tracking (the slider defaulted to center if something went wrong with the vision), and the second attempt went to a predetermined position that was likely to work based on previous runs of that autonomous mode on the field.

Ventura had an airship that was 5 inches too close to the wall… So yeah, inconsistent fields are very much an issue.

One thing for us is that our encoder-only control loops didn’t always keep the robot going straight; sometimes it swerved to the right a bit because of wheel wear, chain tension, or something similar (we aren’t exactly sure). I suspect a gyro with another PID loop to keep the robot straight would have helped, and it might help teams looking for a more consistent auton without vision.

We used a focusing flashlight to project a square spot of light on the carriage. I think it was around $8 for the flashlight plus around $3 for the DC/DC converter to power it. We used it on the center peg in 12 matches and it failed to put the gear on the peg once.

We used a trapezoidal motion profile with speed feedforward, gyro correction, and a position P loop to make our auto moves consistent. The velocity and position setpoints scaled over time according to a specified max velocity, acceleration, and distance. For our side-peg auto, we ran a forward move like this, followed by a gyro turn to ±60 degrees depending on which side we were on. From there, it drove forward to the peg, correcting its angle using vision as it drove. After we got it tuned, it went 5/5 at OCR, apart from one match where one of our three swerve wheels wasn’t moving.
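For teams that haven’t done this before, here’s a stripped-down sketch of what a trapezoidal profile with velocity feedforward, a position P term, and a gyro P term can look like. This is not this team’s code; the gains, units, and the arcade-drive-style output are assumptions.

```java
/**
 * Minimal trapezoidal motion profile: each loop, the velocity setpoint ramps
 * up at maxAccel, cruises at maxVel, and ramps back down so it reaches zero
 * right at the target distance. The output is a velocity feedforward plus a
 * P term on position error plus a P term on heading error.
 */
public class TrapezoidalDrive {
    private static final double KV         = 0.01;  // throttle per unit/s of velocity (assumed)
    private static final double KP_POS     = 0.05;  // throttle per unit of position error (assumed)
    private static final double KP_HEADING = 0.02;  // twist per degree of heading error (assumed)

    private final double maxVel, maxAccel, distance;
    private double velSetpoint = 0.0;  // profile velocity
    private double posSetpoint = 0.0;  // profile position (integral of velocity)

    public TrapezoidalDrive(double maxVel, double maxAccel, double distance) {
        this.maxVel = maxVel;
        this.maxAccel = maxAccel;
        this.distance = distance;
    }

    /**
     * Advance the profile by dt seconds and compute outputs.
     * Returns {throttle, twist} suitable for an arcade-style drive call.
     */
    public double[] update(double dt, double measuredPos, double headingErrorDegrees) {
        // Decelerate once the remaining distance equals the distance needed to stop.
        double remaining = distance - posSetpoint;
        double stoppingDist = (velSetpoint * velSetpoint) / (2.0 * maxAccel);
        if (remaining <= stoppingDist) {
            velSetpoint = Math.max(0.0, velSetpoint - maxAccel * dt);
        } else {
            velSetpoint = Math.min(maxVel, velSetpoint + maxAccel * dt);
        }
        posSetpoint = Math.min(distance, posSetpoint + velSetpoint * dt);

        double throttle = KV * velSetpoint + KP_POS * (posSetpoint - measuredPos);
        double twist = KP_HEADING * headingErrorDegrees;
        return new double[] { throttle, twist };
    }

    public boolean isFinished(double measuredPos) {
        return velSetpoint == 0.0 && Math.abs(distance - measuredPos) < 1.0; // 1-unit tolerance (assumed)
    }
}
```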

I believe there are also other methods of collision detection. For instance, I’m implementing a feature in which the program compares the previous encoder values with the current ones and uses the difference between the two to get velocity. You could do the same thing with velocity to get acceleration, and use that to determine whether or not your robot has experienced a collision. This way you don’t have to rely on the navX (which I believe is still very accurate; this method is just better suited to those without reliable accelerometers).
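Here’s a sketch of that encoder-only check, with made-up thresholds and a WPILib Encoder assumed. Whether a deceleration spike is really a collision (and not just the ramp-down of your profile) is something you’d have to tune around.

```java
import edu.wpi.first.wpilibj.Encoder;

/**
 * Collision check from drive encoders alone: differentiate distance to get
 * velocity, differentiate velocity to get acceleration, and flag a collision
 * when the robot decelerates much harder than it ever should on its own.
 */
public class EncoderCollisionDetector {
    private static final double DECEL_THRESHOLD = 50.0; // units/s^2 (assumed, tune on your robot)

    private final Encoder encoder;
    private double lastDistance = 0.0;
    private double lastVelocity = 0.0;

    public EncoderCollisionDetector(Encoder encoder) {
        this.encoder = encoder;
    }

    /** Call once per loop with the loop period in seconds; returns true on a suspected collision. */
    public boolean update(double dt) {
        double distance = encoder.getDistance();
        double velocity = (distance - lastDistance) / dt;
        double accel = (velocity - lastVelocity) / dt;
        lastDistance = distance;
        lastVelocity = velocity;
        return accel < -DECEL_THRESHOLD;
    }
}
```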

Could the roboRIO’s built-in accelerometer be used for the same purpose? I have personally never tried reading from it, but I know we considered using it for collision detection last year.
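It should at least be readable through WPILib’s BuiltInAccelerometer class. Here’s a rough sketch of a jerk-style check against it; the threshold is a guess, and depending on where the roboRIO is mounted the signal may be noisier than a dedicated IMU, so treat this as an experiment rather than a proven recipe.

```java
import edu.wpi.first.wpilibj.BuiltInAccelerometer;

/**
 * Collision check from the roboRIO's onboard accelerometer: look for a sudden
 * change in measured acceleration (jerk) between consecutive loops.
 */
public class OnboardCollisionDetector {
    private static final double JERK_THRESHOLD = 0.5; // change in g per loop (assumed)

    private final BuiltInAccelerometer accel = new BuiltInAccelerometer();
    private double lastX = 0.0;
    private double lastY = 0.0;

    /** Call once per loop; returns true when acceleration jumps sharply. */
    public boolean update() {
        double x = accel.getX();
        double y = accel.getY();
        boolean collision = Math.abs(x - lastX) > JERK_THRESHOLD
                         || Math.abs(y - lastY) > JERK_THRESHOLD;
        lastX = x;
        lastY = y;
        return collision;
    }
}
```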

For our team, we try to build our robot so that it drives as consistently as possible. As for lining up for gears and making sure they place correctly, we eyeball (yeah, I know, I know) the peg to see if it will go on straight. We use Victor SPs for our drive controllers and set them to brake mode. As for programming, we try to keep the program as simple and slow as possible, while staying fast in the spots where inconsistencies could affect the outcome of gear placement.

Our team was looking into using vision assist on our robot this year, but the LabVIEW documentation for it was confusing and it seemed very time-consuming with our limited resources.

Can’t say anything about incorrectly assembled fields, but the field is supposed to be asymmetrical, no? If you’re dead reckoning (with or without PID / motion profiling), you’ll need four autos for the side pegs to differentiate between red/blue and retrieval zone/boiler side, assuming you’re using the corner of the field to align (where the alliance station wall meets the loading station/boiler).
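One way to keep the four routines straight is to pull the alliance color from the FMS and let the drive team pick the starting side on the dashboard. A rough Java sketch, with made-up mode names (the 2017-era SendableChooser methods are assumed):

```java
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj.DriverStation.Alliance;
import edu.wpi.first.wpilibj.smartdashboard.SendableChooser;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/** Chooses one of four dead-reckoned side-peg autos from alliance color and starting side. */
public class SidePegAutoSelector {
    public enum StartSide { BOILER, LOADING_STATION }

    private final SendableChooser<StartSide> sideChooser = new SendableChooser<>();

    public SidePegAutoSelector() {
        sideChooser.addDefault("Boiler side", StartSide.BOILER);
        sideChooser.addObject("Loading station side", StartSide.LOADING_STATION);
        SmartDashboard.putData("Start side", sideChooser);
    }

    /** Returns a label for the routine to run; swap in your own auto objects. */
    public String selectAuto() {
        Alliance alliance = DriverStation.getInstance().getAlliance();
        StartSide side = sideChooser.getSelected();
        if (alliance == Alliance.Red) {
            return side == StartSide.BOILER ? "RedBoilerSideAuto" : "RedLoadingSideAuto";
        }
        return side == StartSide.BOILER ? "BlueBoilerSideAuto" : "BlueLoadingSideAuto";
    }
}
```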

What seems to work:

Use a gyro plus encoders (the encoders are just there to tell you when you’ve reached the destination). A rough sketch is below.

A. Save the gyro heading at the start.
B. Use arcade drive. Start with a fixed forward velocity and 0 twist. No PID.
C. Check the gyro heading error while moving and adjust the twist based on the error, "P" only.
D. Once the encoders reach the target counts or distance, stop with 0 twist (and about 10% reverse power briefly to kill the momentum).

Also have three phases:
Start zone: ramp the arcade drive power from 0 up to the drive power.
Middle zone: keep the drive power constant.
End zone: as you get close to the end, ramp back down to 0 power.

Use the encoder to define the zones (e.g., the first 5 inches are the start ramp zone, the last 10 inches are the end zone).
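Here’s roughly what that recipe can look like in Java, using a WPILib gyro, encoder, and arcade drive. The gains, zone distances, and power floor are assumptions; this is a sketch of the idea above, not anyone’s actual competition code.

```java
import edu.wpi.first.wpilibj.ADXRS450_Gyro;
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.RobotDrive;

/**
 * Drive straight for a fixed distance: P-only twist correction toward the
 * saved gyro heading, with ramp-up / cruise / ramp-down zones keyed off the
 * encoder, and a brief burst of reverse power at the end to kill momentum.
 */
public class GyroStraightDrive {
    private static final double KP_HEADING     = 0.03; // twist per degree of error (assumed)
    private static final double CRUISE_POWER   = 0.6;
    private static final double MIN_POWER      = 0.15; // floor so the robot actually starts moving
    private static final double RAMP_UP_DIST   = 5.0;  // inches
    private static final double RAMP_DOWN_DIST = 10.0; // inches

    private final RobotDrive drive;
    private final ADXRS450_Gyro gyro;
    private final Encoder encoder;
    private double targetHeading;

    public GyroStraightDrive(RobotDrive drive, ADXRS450_Gyro gyro, Encoder encoder) {
        this.drive = drive;
        this.gyro = gyro;
        this.encoder = encoder;
    }

    public void start() {
        targetHeading = gyro.getAngle(); // A. save the heading to hold
        encoder.reset();
    }

    /** Call each loop; returns true once the move is finished. */
    public boolean driveTo(double distanceInches) {
        double traveled = Math.abs(encoder.getDistance());
        if (traveled >= distanceInches) {
            drive.arcadeDrive(-0.1, 0.0); // D. brief reverse power, then stop
            return true;
        }

        // Three zones: ramp up, cruise, ramp down.
        double power;
        if (traveled < RAMP_UP_DIST) {
            power = CRUISE_POWER * (traveled / RAMP_UP_DIST);
        } else if (distanceInches - traveled < RAMP_DOWN_DIST) {
            power = CRUISE_POWER * ((distanceInches - traveled) / RAMP_DOWN_DIST);
        } else {
            power = CRUISE_POWER;
        }
        power = Math.max(power, MIN_POWER);

        // C. P-only twist correction toward the saved heading.
        double twist = KP_HEADING * (targetHeading - gyro.getAngle());
        drive.arcadeDrive(power, twist); // B. fixed forward power plus twist
        return false;
    }
}
```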

Here is what Team 3735 has been testing.

Motion profile and build in tolerance whenever possible (for example, if you’re going for a peg, drive slightly through it, maybe 10 in). If possible, use vision, particularly if you have time to wait and turn/strafe.

I would be very interested to know how to do this sort of thing!

Our team has a vision system running on a Raspberry Pi - we call it TrackerBox2. We see that FIRST has been making great strides in trying to make vision easier for teams using tools like GRIP, so next year we’re going to compare our home brew solution with “how FIRST expects you to do it” and see which we like better. We’re comfortable with vision, but there’s always room to make it even better and faster.

One thing we could be doing much better is taking advantage of Dashboard tools and indicators. Right now the software team uses the Java dashboard to display variable contents for debugging, and the drive team uses the Default dashboard to show a USB camera view. Any tips on how we can learn to do more custom stuff like lights and indicators?

We finally got vision working a couple of days ago on our practice robot. I’m hoping that it will transfer pretty easily to the competition bot; we will see this week. We are using a Pixy camera and an Arduino for vision, which wasn’t terribly difficult to implement. For now we just have the robot go straight if the center of the two vision targets is within a threshold of the center of the image, and turn left or right if it’s outside the threshold; we don’t have anything with PID at this time. This helps correct for any errors. It’s the first time that 3082 has had vision.

Our "side gear and shoot" and "center gear and shoot" autonomous modes have been 100% accurate as far as positioning goes, using the gyro to maintain/control angle, wheel encoders to measure distance travelled, and the PixyCam to target the retro-reflective targets at the pegs. (Note that we have, however, experienced a few failed gear deliveries due to peg issues.) The Pixy targeting definitely helps ensure accurate positioning to an airship that may change/drift due to minor differences in field setup and stretch/drift of the field elements over the course of an event.

So much this.

Motion profiling and feed forward make a huge difference in consistency.

Although you can probably get a system like this to work, a properly tuned PID + motion-profiled system will be more consistent and faster.

In my experience, "what FIRST expects you to do" is usually pretty terrible. Not just with vision, but with pretty much everything related to programming. Looking at ScreenStepsLive, the documentation looks better than it has in the past, but it still does a pretty bad job of covering what’s actually important in making a vision system work well. It also recommends some pretty terrible setups, like doing vision processing on the driver station from an Axis camera stream.

Why is this a terrible setup?