We have a real mystery on our hands. We tuned our flywheel PID for shooting, and the values work great: give it a target RPM and it ramps up and holds that RPM. Shooting is consistent in teleop.
However, when running the same command, same subsystem, and same PID values in autonomous, the flywheel is slow to reach RPM and often never reaches the target. We've graphed the RPM in both teleop and auton. By changing the PID values we can get the correct RPM in auton, but then when we switch back to teleop the values are all too high and we overshoot the RPM.
We are using two Falcons, one as a follower motor. The code is similar to the flywheel code we used for the at-home challenges, where we did not have this problem.
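For reference, here is a minimal sketch of this kind of setup (velocity PID running on the TalonFX itself, second Falcon following), assuming the Phoenix 5 API; the CAN IDs, gains, and class name are placeholders, not our actual code:

```java
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.TalonFXInvertType;
import com.ctre.phoenix.motorcontrol.can.WPI_TalonFX;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

/** Minimal flywheel sketch: velocity PID runs on the TalonFX, second Falcon follows. */
public class FlywheelSubsystem extends SubsystemBase {
  // Falcon integrated encoder: 2048 ticks per revolution, velocity reported per 100 ms.
  private static final double TICKS_PER_REV = 2048.0;

  private final WPI_TalonFX leader = new WPI_TalonFX(10);   // placeholder CAN IDs
  private final WPI_TalonFX follower = new WPI_TalonFX(11);

  public FlywheelSubsystem() {
    leader.configFactoryDefault();
    follower.configFactoryDefault();

    // Placeholder gains -- tune on the real mechanism.
    leader.config_kP(0, 0.05);
    leader.config_kF(0, 0.047);

    follower.follow(leader);
    follower.setInverted(TalonFXInvertType.OpposeMaster);
  }

  /** Command a target RPM; conversion to encoder ticks per 100 ms happens here. */
  public void setTargetRpm(double rpm) {
    double ticksPer100ms = rpm * TICKS_PER_REV / 600.0;
    leader.set(ControlMode.Velocity, ticksPer100ms);
  }

  public double getRpm() {
    return leader.getSelectedSensorVelocity() * 600.0 / TICKS_PER_REV;
  }
}
```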
I don’t think it’s a problem with initializing, because we do that once in robotInit().
I don’t think it’s CAN, because how would that only change during auton? And our CAN is (currently) stable.
I don’t think it’s power, because we’re running the same systems in both auton and teleop and there’s no noticeable difference in voltage between the two.
Configuration doesn’t change at teleop start.
My next guess is scheduling? Maybe something is getting scheduled during auton that is interrupting the shooting command? Could this be a motor safety issue?
What battery voltage does the Driver Station show in auton versus teleop? It sounds like your battery is being hogged by other components. Try to isolate the shooter from other systems (swerve drive can be a big culprit).
Same battery, same subsystems running (as far as I can tell), no compressor running in either case, and no noticeable difference in voltage from the DS view. We can put a fresh battery in, run auton, auton fails to reach RPM, then immediately re-enable in teleop without redeploying or restarting code and the RPM is good. Then re-run auton without restarting or redeploying code, and the RPM is bad.
Looking at the DS logs, we’re pushing the CPU usage to 80% or more. Seeing a number of “CommandScheduler loop overrun” errors during autonomous. I’m guessing this is SuperBad ™.
Next step is to start disabling subsystems until we find the culprit.
Any suggestions for profiling the command scheduler in Java to find where all the CPU cycles are going?
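(The loop overrun warning itself should already print a per-epoch timing breakdown in the console, which is a good first stop. For finer granularity, one crude approach is to time individual chunks of periodic code yourself; a minimal sketch, assuming WPILib's Timer, with arbitrary threshold and labels:)

```java
import edu.wpi.first.wpilibj.Timer;

// Crude timing wrapper: run a labeled chunk of periodic code and report if it
// takes longer than a few milliseconds. Threshold and labels are placeholders.
public final class LoopTimer {
  private LoopTimer() {}

  public static void time(String label, Runnable section) {
    double start = Timer.getFPGATimestamp();
    section.run();
    double elapsedMs = (Timer.getFPGATimestamp() - start) * 1000.0;
    if (elapsedMs > 5.0) {
      System.out.printf("%s took %.1f ms%n", label, elapsedMs);
    }
  }
}

// Example usage inside Robot.robotPeriodic():
//   LoopTimer.time("scheduler", () -> CommandScheduler.getInstance().run());
```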
Our team has an in-house robot manager, so I can’t say anything about the command scheduler.
What are you doing in autonomous that is different from teleop? PIDs eat away at cycles if they’re not running on the motor controller, but you would be seeing that in teleop as well.
I would try disabling all the features and adding stuff one by one until you start getting the loop overrun.
Could paths be causing this issue? Are you loading trajectories from disk continuously rather than caching them? I don’t know your robot, but are there any fancy features that take a lot of processing?
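If trajectories are being read from the deploy directory at command construction (or worse, every loop), loading them once at startup and caching avoids the disk hit entirely. A minimal sketch, assuming PathWeaver JSON and WPILib's TrajectoryUtil; the file name and class are placeholders:

```java
import java.io.IOException;
import java.nio.file.Path;

import edu.wpi.first.math.trajectory.Trajectory;
import edu.wpi.first.math.trajectory.TrajectoryUtil;
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj.Filesystem;

public class Trajectories {
  // Loaded once at startup and reused; never read from disk inside a command or periodic method.
  public static Trajectory exampleAuto = new Trajectory();

  /** Call once from robotInit(). */
  public static void loadAll() {
    exampleAuto = load("paths/ExampleAuto.wpilib.json"); // placeholder file name
  }

  private static Trajectory load(String deployRelativePath) {
    Path path = Filesystem.getDeployDirectory().toPath().resolve(deployRelativePath);
    try {
      return TrajectoryUtil.fromPathweaverJson(path);
    } catch (IOException e) {
      DriverStation.reportError("Unable to open trajectory: " + deployRelativePath, e.getStackTrace());
      return new Trajectory();
    }
  }
}
```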
Looks like Shuffleboard.updateValue() is the source of all of our autonomousPeriodic overruns. It’s taking 20-30 ms just for that. How do we disable Shuffleboard or lower its CPU usage?
We were able to hunt down and remove a lot of SmartDashboard logging. The SmartDashboard layout of the motors that SDS swerve-lib creates was a huge CPU hog. We moved the instantiation of some of our subsystems until after auton, and that eliminated all but a pair of loop overruns that happen at the very start of auton. However, we still see the issue with our shooter PID behaving differently in auton vs. teleop.
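For anyone with the same dashboard problem, two cheap mitigations are gating debug output behind a flag and publishing less often than every 20 ms loop. A rough sketch of that idea (the flag, keys, and rate are illustrative, not our exact code):

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class ShooterTelemetry {
  private static final boolean DEBUG = false; // flip on at the practice field only
  private int counter = 0;

  /** Call from the subsystem's periodic(); publishes at ~5 Hz instead of every 20 ms loop. */
  public void update(double currentRpm, double targetRpm) {
    if (!DEBUG) {
      return;
    }
    if (++counter % 10 == 0) {
      SmartDashboard.putNumber("Shooter RPM", currentRpm);
      SmartDashboard.putNumber("Shooter Target RPM", targetRpm);
    }
  }
}
```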
Going to try deploying with WPILib 2022.4.1 in case we’re getting hit with a weird deadlock issue.
2022.4.1 wasn’t the silver bullet I was hoping for.
I think I know what the cause might be. We’re seeing some weird command scheduling problems. We can schedule the shoot command in teleop with a button, but then it refuses to be scheduled again. We can configure multiple buttons with the same command and the command works exactly once per button until restarting robot code.
Looks like this was a problem with the scheduler: using whileHeld() was interacting badly with our shoot command and must have been interrupting the motor input often enough to keep the flywheel from reaching the target RPM and firing. We could “fix” it by bumping kP way up so that when the command was restarted the flywheel would already be at the correct RPM.
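For anyone who hits this later: in the 2020–2022 command framework, whileHeld() re-schedules the command whenever it finishes while the button is still held (re-running initialize() each time), while whenHeld() schedules it once on press and only cancels it on release. A sketch of the difference, with placeholder controller port, button, RPM, and timings:

```java
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.InstantCommand;
import edu.wpi.first.wpilibj2.command.WaitCommand;
import edu.wpi.first.wpilibj2.command.WaitUntilCommand;
import edu.wpi.first.wpilibj2.command.button.JoystickButton;

/** Binding sketch only; everything here is illustrative, not the thread author's code. */
public class BindingsExample {
  private final XboxController driver = new XboxController(0);
  private final FlywheelSubsystem flywheel = new FlywheelSubsystem(); // from the earlier sketch

  public void configureButtonBindings() {
    // A shoot "cycle" that finishes on its own: set the setpoint, wait until the
    // flywheel is near it, then hold briefly while a ball is fed (feeder omitted).
    Command shootCycle =
        new InstantCommand(() -> flywheel.setTargetRpm(3500), flywheel)
            .andThen(new WaitUntilCommand(() -> flywheel.getRpm() > 3400))
            .andThen(new WaitCommand(0.5));

    JoystickButton shootButton =
        new JoystickButton(driver, XboxController.Button.kA.value);

    // whileHeld(): if the command above finishes while the button is still down,
    // it is scheduled again, re-running initialize() every cycle -- the kind of
    // repeated restart described in this thread.
    // shootButton.whileHeld(shootCycle);

    // whenHeld(): schedules the command once on press and cancels it on release;
    // it is never restarted mid-hold, so the flywheel setpoint is left alone.
    shootButton.whenHeld(shootCycle);
  }
}
```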