In simple terms, this line refreshes the SparkMax more frequently, giving a quicker and more accurate response from the modules. Do this for your turning and drive motors, and hopefully they'll move more smoothly.
P.S. If you have your absolute encoder connected directly to your SparkMax, do not use that line on that motor; it'll give you a "no robot code" error.
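For reference, the line in question is REVLib's `setPeriodicFramePeriod`; a minimal sketch (the frames chosen and the 10 ms period are just examples, not recommended values):

```java
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMaxLowLevel;

// Example only: ask for faster status frames so the rio sees fresher
// position/velocity data. The 10 ms period is a placeholder.
CANSparkMax driveMotor = new CANSparkMax(1, CANSparkMaxLowLevel.MotorType.kBrushless);
driveMotor.setPeriodicFramePeriod(CANSparkMaxLowLevel.PeriodicFrame.kStatus1, 10); // velocity frame
driveMotor.setPeriodicFramePeriod(CANSparkMaxLowLevel.PeriodicFrame.kStatus2, 10); // position frame
// per the P.S. above: skip this on a motor whose absolute encoder is
// wired directly to the SparkMax
```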
Well, not right now, actually; that is a good point. When processing the PID itself on the roboRIO as a PIDController, the periodic frame rate on the SparkMax doesn't actually do much. It's when using a SparkMaxPIDController that it affects the movement. We haven't gone too deep into this topic, so this is fairly raw speculation. Through the off-season we made a lot of changes; one was running the PID on the rio, and right now it works pretty well for us.
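For anyone following along, the two setups being compared look roughly like this (placeholder gains and IDs, 2023-era REVLib names):

```java
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMaxLowLevel;
import com.revrobotics.RelativeEncoder;
import com.revrobotics.SparkMaxPIDController;
import edu.wpi.first.math.controller.PIDController;

CANSparkMax turnMotor = new CANSparkMax(2, CANSparkMaxLowLevel.MotorType.kBrushless);
RelativeEncoder turnEncoder = turnMotor.getEncoder();
double setpoint = 1.0; // placeholder

// Approach 1: close the loop on the roboRIO with WPILib's PIDController
PIDController rioPid = new PIDController(0.5, 0.0, 0.0); // placeholder gains
turnMotor.set(rioPid.calculate(turnEncoder.getPosition(), setpoint));

// Approach 2: close the loop onboard the SparkMax itself
SparkMaxPIDController onboardPid = turnMotor.getPIDController();
onboardPid.setP(0.5); // placeholder gain
onboardPid.setReference(setpoint, CANSparkMax.ControlType.kPosition);
```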
But if you have a few tricks for smoothing the movement, we'd be glad to hear them.
100 has several approaches to “smoothing.” one was adapted from 254’s SwerveSetpointGenerator, but simplified and extended. It models motor acceleration and deceleration separately (since the feasible decel is more than the feasible accel), and it also models the limited steering slew rate, preventing the drive motors from “getting ahead” of the steering angle. oh and it handles centripetal acceleration limits. this approach is “bottom up” from module states.
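a stripped-down sketch of just the asymmetric accel/decel part (the real generator also handles steering slew and centripetal limits; the limits here are placeholders):

```java
import edu.wpi.first.math.MathUtil;

// sketch: like WPILib's SlewRateLimiter, but with separate accel/decel limits
public class AsymmetricLimiter {
  private final double maxAccel; // m/s^2 when |speed| is increasing
  private final double maxDecel; // m/s^2 when |speed| is decreasing
  private double prev = 0.0;

  public AsymmetricLimiter(double maxAccel, double maxDecel) {
    this.maxAccel = maxAccel;
    this.maxDecel = maxDecel;
  }

  public double calculate(double target, double dtSeconds) {
    // decel limit applies when the commanded magnitude is shrinking
    boolean speedingUp = Math.abs(target) > Math.abs(prev);
    double limit = (speedingUp ? maxAccel : maxDecel) * dtSeconds;
    prev += MathUtil.clamp(target - prev, -limit, limit);
    return prev;
  }
}
```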
since theta is often not directly driver-controlled, there’s also a bit of code that does roughly what the “desaturate” step usually does, but at the level of field-relative inputs. the idea is to keep the cartesian and rotational inputs from fighting each other over small controller errors.
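the input-level version of that looks something like this (a sketch, not the exact code):

```java
// sketch: if translation + rotation together demand more than 100% of
// wheel speed, scale all three field-relative inputs by the same factor
// so neither axis silently wins
double vx = 0.8, vy = 0.3, omega = 0.5; // placeholder driver inputs in [-1, 1]
double demand = Math.hypot(vx, vy) + Math.abs(omega);
if (demand > 1.0) {
  vx /= demand;
  vy /= demand;
  omega /= demand;
}
```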
oh we also use expo joystick mappings, which means finer control at slow speeds without giving up full-speed authority.
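for reference, an expo mapping is just a linear/cubic blend, e.g.:

```java
// expo: k = 0 is pure linear, k = 1 is pure cubic
static double expo(double x, double k) {
  return (1.0 - k) * x + k * x * x * x;
}

// usage: fine authority near center, still reaches 1.0 at full deflection
double command = expo(0.3, 0.5); // the 0.3 stick input and k = 0.5 are placeholders
```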
smoothness isn’t that hard to achieve if you’re willing to give up speed and quickness. imho getting all three requires both lots of practice and controls that limit infeasible inputs.
Driver practice is like over 90% of this. Having seen many, many students drive, I’ve yet to see one who actually has fine motor skills and can move the sticks in a fine-grained, smooth way.
Honestly I can drive better than all of them.
There are a few things we’ve got that help, like rotation PID, throttle maps, and the like, but in the end, it is the person with the controller that makes the difference.
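For context, the rotation PID piece is just a heading-hold loop; a minimal sketch with placeholder gains:

```java
import edu.wpi.first.math.controller.PIDController;

// hold a target heading so the driver only has to manage translation
PIDController headingPid = new PIDController(3.0, 0.0, 0.1); // placeholder gains
headingPid.enableContinuousInput(-Math.PI, Math.PI); // wrap at +/- pi

double currentHeadingRad = 0.0;          // placeholder; read from your gyro
double targetHeadingRad = Math.PI / 2.0; // placeholder target
double omega = headingPid.calculate(currentHeadingRad, targetHeadingRad);
```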
i agree with @viggy96 that motor skill is a big part of it. one thing i’ve noticed is that some of the students prefer xbox-style controllers because they’re familiar, but their movements are tiny and the sensors inside are not very accurate, so it’s a big challenge in the “fine motor skills” department. one way you can help is to make the motor skills required less fine: use a big joystick with a soft spring, or use RC-style joysticks, which have maybe 3x the travel of xbox-style joysticks, with higher-quality sensors.
another driver-aid is to drive most of the time at “medium” speed, which provides more wheel-speed headroom for combined cartesian and rotation input: it feels “smoother” if those axes feel independent. for situations where the driver wants maximum translation speed, and knows that there won’t be any leftover wheel speed for rotation, give them a “turbo” button.
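concretely, something like this (the 60% cruise cap is a placeholder):

```java
// cap translation at "medium" speed by default so there's always wheel-speed
// headroom left for rotation; the turbo button removes the cap
double driverX = 0.5, driverY = 0.2; // placeholder stick inputs
boolean turbo = false;               // placeholder; e.g. a bumper button
double cap = turbo ? 1.0 : 0.6;
double vx = driverX * cap;
double vy = driverY * cap;
```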
This might be caused by WPILib’s default 20 ms loop rate. It doesn’t matter how fast your SparkMax updates its internal state; WPILib keeps processing it every 20 ms regardless. You might find some improvement using the addPeriodic() function in your Robot class. Here is an example that might help you.
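A minimal sketch (the 5 ms period and 1 ms offset are placeholders):

```java
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  @Override
  public void robotInit() {
    // run this callback every 5 ms, offset 1 ms from the main 20 ms loop
    // so the two don't collide (both values are placeholders)
    addPeriodic(() -> {
      // fast work here, e.g. reading a sensor or running a tight control loop
    }, 0.005, 0.001);
  }
}
```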
Yeah, pretty much. It will update the code faster. We used it in 2023 for our vision subsystem and found it helpful, though I’m pretty sure there were some things we did wrong while using it. As far as I know, it’s far more thread-safe than 254’s looper implementations, so it might be a good solution.
Nice! I’ll check it right away and test it tomorrow, since we are currently running tests with a Limelight on PhotonVision to align the robot to the note. I’ll update my team and let you know how it worked out.
if by “xbox-style joysticks” you mean logitech f-310s, then just upgrading to a decent quality third party (or first party) xbox controller will give a major benefit. f-310s are terrible for swerve control.
I feel like you are discrediting thousands of students who have put countless hours into bettering their driving and trying to make their team more competitive. As someone who has been driving for just one year, I can personally attest to the sheer number of hours it takes to become comfortable driving, let alone in a high-pressure environment, and making all that work seem basically useless doesn’t do justice to all the hard work done by the students.
Well, we are using a command with three ProfiledPIDControllers, one each for drive, strafe, and rotation, using the PhotonVision target area, x distance, and yaw respectively as the measurements for those controllers. It works pretty well, but we ran into one problem. When aligning the robot to the AprilTag, for example, we set some offsets for the camera, but around the AprilTag there are a lot of points on the field where the setpoints are the same. I hope I explained myself correctly, but as an example: we press the button to align to the AprilTag, and it aligns well, but maybe a few inches to the left or right, always facing the AprilTag.
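Roughly, the setup looks like this (gains, constraints, and setpoints are placeholders; the PhotonVision calls assume its standard Java API, and the source of "x distance" is a guess):

```java
import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonTrackedTarget;

// one profiled controller per axis; all gains/constraints are placeholders
ProfiledPIDController drivePid = new ProfiledPIDController(1.0, 0, 0,
    new TrapezoidProfile.Constraints(3.0, 2.0));         // measured by target area
ProfiledPIDController strafePid = new ProfiledPIDController(1.0, 0, 0,
    new TrapezoidProfile.Constraints(3.0, 2.0));         // measured by x distance
ProfiledPIDController rotPid = new ProfiledPIDController(4.0, 0, 0,
    new TrapezoidProfile.Constraints(Math.PI, Math.PI)); // measured by yaw

PhotonCamera camera = new PhotonCamera("camera"); // placeholder name

var result = camera.getLatestResult();
if (result.hasTargets()) {
  PhotonTrackedTarget target = result.getBestTarget();
  // placeholder setpoints standing in for the camera offsets mentioned above;
  // "x distance" assumed to come from the camera-to-target transform
  double forward = drivePid.calculate(target.getArea(), 2.0);
  double strafe = strafePid.calculate(target.getBestCameraToTarget().getX(), 1.0);
  double omega = rotPid.calculate(Math.toRadians(target.getYaw()), 0.0);
  // feed forward/strafe/omega into the drivetrain here
}
```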
At first we saw this as a problem, but a few moments later we saw it as a solution for better control, because we want our robot to shoot from any angle in our wing, and this alignment can line the robot up with the AprilTag from any angle.
Speaking of the note, we are using the very same command but with different controllers and offsets; later today we’ll try it for the first time and hope it works.
Here is part of the code, not updated to our latest changes, but it should give you an idea of how we’re doing it.
How about you, how are you guys aligning the robot to the note?
Your implementation is very interesting; I would recommend testing the PIDs very thoroughly to get it working almost perfectly.
We’re using a Limelight to align to the speaker, note, stage, etc., and during auto we’ve been testing a neural network to detect the position of the note. We’ve had some really good results, but we still have to tune it more. Once I have the final code, I’ll update the build thread.