Part of it was that we were not able to characterize the drivetrain reliably because of the NEOs, so we prioritized the rest of our code (primarily superstructure code) over rewriting our auto stack. Another reason, which we realized early on, is that if we wanted to use vision for actions such as the
AutoSteerAndIntakeAction class, we would not necessarily be able to predict the end velocity of our robot after that action, so we could not pre-compute trajectories on startup. One thing we did try in the first couple of weeks of build season was computing trajectories on-the-fly, but that was too time consuming. So we wanted a follower that we didn't have to spend much time on and that we knew would work under these constraints, which was our adaptive pure pursuit controller.
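For anyone unfamiliar with why pure pursuit tolerates an unknown start state: at every update it just steers along the arc from the robot's current pose to a lookahead point on the path, so nothing needs to be pre-computed from a known initial velocity. A minimal sketch of that core arc calculation (hypothetical standalone class, not our actual controller code; the class and method names here are made up for illustration):

```java
// Sketch of the core pure pursuit geometry: the curvature of the arc
// that passes through the robot's position and a lookahead point.
// Robot frame convention assumed here: +x forward, +y to the left.
public class PurePursuitSketch {
    /**
     * Curvature (1/radius) of the arc from the origin to the lookahead
     * point (x, y), given in the robot's frame: kappa = 2*y / L^2,
     * where L is the distance to the lookahead point.
     */
    public static double curvature(double x, double y) {
        double l2 = x * x + y * y; // squared distance to lookahead point
        if (l2 < 1e-9) {
            return 0.0; // lookahead point is on top of us; drive straight
        }
        return 2.0 * y / l2;
    }

    public static void main(String[] args) {
        // Lookahead point dead ahead -> zero curvature (straight line).
        System.out.println(curvature(1.0, 0.0)); // prints 0.0
    }
}
```

Because curvature is recomputed from the live pose every loop, the follower adapts to whatever state a vision action like auto-steer-and-intake leaves the robot in, which is exactly why it fit our constraints better than pre-planned trajectories.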
I’m not sure exactly what you mean. Are you asking about teleop driving or about how our autonomous works?
We did not use our drive characterization because we did not end up using the nonlinear feedback follower from 2018, which was the only thing that required it.
The subsystem tests were not used as much this year. I believe that the ones that are currently in the code were used around Bag & Tag time to test the robot before bagging. In the pit before matches, our systems check consists of checking all sensors (encoders, gyro, Limelights, etc.) for direction/values and then testing all driver controls including driving, superstructure movements, and climbing to make sure that everything behaves as expected.
The other thing we did was that every time we replaced a motor, we would use Phoenix Tuner to verify that it spun in the correct direction and was working correctly.