Team 2910 is proud to release the code for our 2023 robot!
https://github.com/FRCTeam2910/2023CompetitionRobot-Public
Let us know if you have any questions.
ChassisSpeeds targetVelocity = ChassisSpeeds.fromFieldRelativeSpeeds(
        xVelocity,
        yVelocity,
        angularVelocity,
        drivetrain
                .getPose()
                .getRotation()
                .plus(new Rotation2d(drivetrain.getAngularVelocity() * ANGULAR_VELOCITY_COEFFICIENT)));
What purpose does adding angular velocity * some coefficient to the robot pose serve here? How did you determine the coefficient?
If we push the translation stick forward and then initiate a rotation, the desired behavior is to keep translating in the original direction while rotating.
In reality, the robot would rotate while translating, but the translation would drift off at an angle, and how far off it drifted was proportional to the rotation speed.
This correction fixes that issue. We tuned ANGULAR_VELOCITY_COEFFICIENT until the robot translated in the desired direction while rotating.
There are more complex solutions to this, but we found this simple correction to work very well.
Good ole first order approximation
In my brief read-through I didn't notice you using feedforward characterization for your swerve drivetrain in auto or teleop. Were you fine just using feedback values, or did you implement feedforward in a different manner?
Also can you explain more about the necessity for this class? : https://github.com/FRCTeam2910/2023CompetitionRobot-Public/tree/main/src/main/java/com/pathplanner/lib
To characterize our path following feedforwards we do the following:
We have been very happy with the effectiveness of this experimental approach. It has worked well for us over the past few years.
We went through 3 different drivetrain gearings this past year (standard L3, L3_16T pinion, and L3_18T pinion). We recharacterized the path following feedforwards using these steps every time. It's pretty quick once you have done it a couple of times. We used AdvantageScope to view the target vs. actual position when tuning.
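For anyone curious, applying such a feedforward in the drive loop might look roughly like this minimal sketch using WPILib's SimpleMotorFeedforward; the gains, class, and method names are placeholders for illustration and are not taken from our robot code:

import edu.wpi.first.math.controller.SimpleMotorFeedforward;

public class DriveFeedforwardExample {
    // Placeholder gains; these are the numbers that get re-measured after each gearing change.
    private static final double KS = 0.2; // volts to overcome static friction
    private static final double KV = 2.5; // volts per meter per second

    private final SimpleMotorFeedforward driveFeedforward = new SimpleMotorFeedforward(KS, KV);

    // Combines a velocity feedforward from the trajectory setpoint with a feedback correction.
    public double calculateDriveVoltage(double targetVelocityMetersPerSec, double feedbackVolts) {
        return driveFeedforward.calculate(targetVelocityMetersPerSec) + feedbackVolts;
    }
}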
Earlier in the season, we had a system where we combined various PathPlanner files to form one path which the robot could follow. Each started or ended at a common control point. The reason we did this was to create a more modular system, so we could edit paths without having to change multiple files for different autos. Normally PathPlanner's loadPath method directly converts the path drawn in the UI into a PathPlannerTrajectory object, but since we were combining these UI paths in an unconventional way, we had to write some custom code that interfaced with PathPlanner's library to make sure the combined path files behaved as desired. That is what we used the PathPlannerWorkaround class for. Later in the season, we decided it was more intuitive to eliminate this whole process of a common control point and combined paths and just went with independent trajectories, so we stopped using that class.
Hi. How do you guys set your arm position/extension to be accurate without using feedforward gains? I see you only use the proportional term of the PID loop.
We had always followed the same procedure of having separate paths that are joined together for a modular sort of auto routine. I was wondering, did you end up using PathPlanner and its marker system? If not, what led to that choice? I am new to command-based and just transitioned our robot from iterative to command-based, and I was wondering whether there is something wrong with structuring an auto builder like this (snippet below).
The idea is that in PathPlanner you provide the marker points to move arms/pivots, intake pieces, prep them for scoring, and so on. You create each piece as its own path and use some sort of builder method like the one below to make a huge sequential group to manage it. I have an event marker hashmap that I define in my RobotContainer to manage the subsystems moving.
public Command fullAuto(Supplier<PathPlannerTrajectory[]> trajectories) {
    Command[] commands = new Command[trajectories.get().length];
    for (int i = 0; i < trajectories.get().length; i++) {
        commands[i] = swerveAutoBuilder.fullAuto(trajectories.get()[i]);
    }
    return Commands.sequence(commands);
}
You mentioned that there were other ways to solve this issue, such as discretizing your speeds or using second-order kinematics, but that the basic solution of a velocity coefficient worked very well for you. I was wondering if you had any measurements you recorded for this interaction. My team fiddled with a couple of solutions and ended up using discretize with an added multiplier that corrected for the lack of second-order terms, or just for noise in the real world. If we are overthinking it, let me know if velocity coefficients are the way to go.
Edit: Here are our measurements (source)
With no multiplier we got within 36 in of the target over 15 ft of travel while rotating, while a multiplier of 4 got us within 6 in.
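For reference, here is a rough sketch of the discretization route we are describing, assuming WPILib 2024's ChassisSpeeds.discretize; the SKEW_MULTIPLIER applied to the timestep is just one plausible way to apply the added multiplier, not exactly how our code does it:

import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class DiscretizeExample {
    // Hypothetical fudge factor on the discretization timestep.
    private static final double SKEW_MULTIPLIER = 4.0;
    private static final double LOOP_PERIOD_SECONDS = 0.02;

    // Converts field-relative joystick velocities into discretized robot-relative speeds.
    public static ChassisSpeeds toRobotRelative(
            double xVelocity, double yVelocity, double angularVelocity, Rotation2d robotHeading) {
        ChassisSpeeds speeds =
                ChassisSpeeds.fromFieldRelativeSpeeds(xVelocity, yVelocity, angularVelocity, robotHeading);
        // Discretize so the chassis ends the timestep where the continuous command intended,
        // instead of drifting off at an angle while rotating.
        return ChassisSpeeds.discretize(speeds, LOOP_PERIOD_SECONDS * SKEW_MULTIPLIER);
    }
}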
The full auto method will reset the robot pose at the beginning of each path so this is definitely not what you want. You’ll want to string together some of the other methods (stop events, follow path with events, etc) so that doesn’t happen. If this is a learning thing and not something you’re working on for any offseason competitions, I’d recommend checking out the 2024 beta over implementing this with the 2023 version. It has a modular auto system which makes stuff like this a whole lot easier.
Also, I'm interested in hearing whether you have any feedback on this system if you've gotten a chance to check out the beta, since what you did is what inspired it in the first place.
We didn't need any feedforward to be accurate; we were able to achieve high accuracy using Motion Magic alone.
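For anyone unfamiliar with Motion Magic, the basic setup on a Falcon 500 looks roughly like the sketch below; it is written against the Phoenix 5 API with placeholder gains, CAN ID, and limits rather than the values from our arm:

import com.ctre.phoenix.motorcontrol.TalonFXControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonFX;

public class ArmMotionMagicExample {
    private final TalonFX armMotor = new TalonFX(10); // placeholder CAN ID

    public ArmMotionMagicExample() {
        // Proportional-only feedback, as described above; placeholder gain.
        armMotor.config_kP(0, 0.2);
        // Motion Magic generates a trapezoidal profile on the motor controller itself,
        // so the arm follows a velocity- and acceleration-limited ramp to the target.
        armMotor.configMotionCruiseVelocity(15000); // sensor ticks per 100 ms, placeholder
        armMotor.configMotionAcceleration(30000);   // sensor ticks per 100 ms per second, placeholder
    }

    public void setTargetPosition(double targetSensorTicks) {
        armMotor.set(TalonFXControlMode.MotionMagic, targetSensorTicks);
    }
}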
We used PathPlanner to generate our trajectories, however we chose not to use the marker system. The main reason for this decision was that we wanted to customize how our commands were scheduled, as opposed to running them at certain predefined points in the path. We wrote a lot of custom methods that allowed us to run certain commands in parallel, in a specific sequence, etc. while following the path.
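As a rough illustration of that kind of custom scheduling, here is a sketch using WPILib's Commands.sequence and Commands.deadline; the followPickupPath, runGroundIntake, and scoreHigh commands are hypothetical placeholders, not our actual methods:

// Hypothetical sketch: run the intake in parallel with the pickup path, then score,
// rather than triggering commands at fixed marker points along the path.
public Command pickupAndScore(Command followPickupPath, Command runGroundIntake, Command scoreHigh) {
    return Commands.sequence(
            Commands.deadline(followPickupPath, runGroundIntake), // intake ends when the path ends
            scoreHigh);
}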
Awesome to hear that! The 2024 beta looks dope - being able to put together multiple paths into one auto is a very cool feature and makes the whole process of creating an auto really simple.
I was conveniently messing around with PP24 last night and had a ton of fun creating theoretical autonomous routines and paths, so I am very excited for the future of autonomous in that sense.
I had noticed this when looking through your code. Would you say that the FollowPathCommand was a sort of barrier you wanted to get around which wasn't possible using sequential groups/markers? I was wondering if you could get a similar effect by just running the commands you want in parallel by defining the command in the stop event for the starting position of the path. The only downside to this is that you are unable to have a modular command for two autos that use the same path (place high, then pick up and place mid OR hybrid, for example). Correct me if I'm wrong, but PP24 should make this code look a lot cleaner, since routines such as followWhileIntakingWithArmHoming can be abstracted to the UI rather than having to live in a storage file.
– small side note here but I just want to quickly mention that our robot’s code was iterative this year so I’m not trying to dock you for having a long autoPathStorage file, ours was just as long (and way uglier) :')
Not sure exactly what you mean here, can you please give me a bit more context? The FollowPathCommand was not a barrier for us; it was more that the marker system in 2023 was not as customizable as we wanted it to be.
Yes definitely, PathPlanner 2024 supports running other commands sequentially/in parallel while doing an auto much better. As you hinted at though, you would have to define a bunch of NamedCommands to use in the UI, as you cannot change the parameters of a given command in the UI from auto to auto.
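In PathPlanner 2024 that registration might look roughly like this (the command names and factories are placeholders; NamedCommands.registerCommand is the call the UI markers resolve against):

// Register commands once at startup so the UI can reference them by name in event markers.
NamedCommands.registerCommand("GroundIntake", superstructure.groundIntake()); // placeholder factory
NamedCommands.registerCommand("PlaceMid", superstructure.placeMid());         // placeholder factory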
I was basically just hinting at what I was talking about earlier: having a hard time with auto paths not being modular enough for the different placements you'd want to do on the same path. While coding these paths in iterative this year, I had to manually create what I called "Waypoints", which were path trajectories combined with a motion for the arm and a movement for the claw to perform through the path. Command-based cleared this up a great deal, as we used marker events for our ground intake (or yoshi for you) and then set our end state to be some NamedCommand which was modified in our getAutonomousCommand method.
chosenAutoPath = DriverUI.autoChooser.getSelected();
if (chosenAutoPath.getName().contains("LOW")) {
    AutoConstants.EVENT_MAP.put("PlaceCube", arm.getAutoStowCommand());
} else { // Assume we want to place mid
    AutoConstants.EVENT_MAP.put("PlaceCube", arm.getPlaceMidCommand());
}
What are the coefficients in your armIOFalcon500 class used for in relation to setting your arm poses with motion magic?
It’s a conversion factor to make sure we are passing in the correct units to the motor.
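For example, converting an arm angle in radians into Falcon 500 integrated-sensor ticks uses a factor along these lines; the gear ratio below is a placeholder, and only the 2048 ticks per rotation is fixed by the Falcon's encoder:

// Falcon 500 integrated encoder resolution.
private static final double FALCON_TICKS_PER_ROTATION = 2048.0;
// Placeholder reduction between the motor and the arm joint.
private static final double ARM_GEAR_RATIO = 100.0;
// Conversion factor: arm radians -> motor sensor ticks.
private static final double ARM_RADIANS_TO_TICKS =
        FALCON_TICKS_PER_ROTATION * ARM_GEAR_RATIO / (2.0 * Math.PI);

double targetTicks = targetArmAngleRadians * ARM_RADIANS_TO_TICKS;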