How do you typically swerve drive to a Pose mid-game?

Apologies if this question is common, but I was curious about teams' approaches.

Knowing each AprilTag's position on the field, seeing one should allow us to work out where our robot is. Then, if I had another Pose2d plotted on the field, I could calculate the Translation2d and rotation needed to get the robot to that pose.
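
For the pure geometry step, WPILib's Pose2d/Transform2d classes already give you this; a minimal sketch (the class and method names here are just illustrative):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;

public class PoseMath {
  /** Returns the translation and rotation, in the robot's frame, needed to reach goalPose. */
  public static Transform2d errorToGoal(Pose2d currentPose, Pose2d goalPose) {
    // Pose2d.minus() gives the Transform2d that maps currentPose onto goalPose.
    return goalPose.minus(currentPose);
  }
}
```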

In a perfect world this seems fine; however, there are things I need to take into account:

  • Areas I should not drive through to get there. This gets more complicated, because I need to figure out how to automatically reroute if the direct translation would take me through a wall.
  • Some sort of feedback loop for when my pose estimator gets new information and tells me I am not where I thought I was.

Does anyone have any good examples of how to do this? It is almost like generating trajectories on the fly, but without the latency and the time sampling.

I can't imagine I'm the first to do this. Swerve makes it an easier task, since you often drive by translations anyway.

Grateful for any advice!


2910 did something like this in 2021.
Video: https://www.youtube.com/watch?v=FXpbvo9dteI
Code: 2021CompetitionRobot/src/main/java/org/frcteam2910/c2020/commands/DriveToLoadingStationCommand.java at master · FRCTeam2910/2021CompetitionRobot · GitHub

As for avoiding certain areas (like the charging station), I am keeping an eye on this project: GitHub - SleipnirGroup/TrajoptLib: Library for generating time-optimal trajectories for FRC robots. Used by the HelixNavigator path planning app.

I don't believe there is a way in WPILib's default trajectory generation to avoid certain areas of the field (please correct me if I'm wrong). We've discussed adding some kind of check: if we are far enough away from the AprilTag in one dimension, we add interior waypoints (alongside the charging station) to force the trajectory to avoid it.
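
A minimal sketch of that waypoint idea with WPILib's TrajectoryGenerator (the waypoint and goal coordinates are made up, not measured field positions):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.trajectory.Trajectory;
import edu.wpi.first.math.trajectory.TrajectoryConfig;
import edu.wpi.first.math.trajectory.TrajectoryGenerator;
import java.util.List;

public class AvoidanceTrajectory {
  /** Generates a trajectory from start to goal that passes through hand-picked "safe" waypoints. */
  public static Trajectory generate(Pose2d start, Pose2d goal) {
    TrajectoryConfig config = new TrajectoryConfig(3.0, 2.0); // max vel (m/s), max accel (m/s^2)
    // Placeholder points chosen to skirt the charging station rather than cross it.
    List<Translation2d> safeWaypoints = List.of(
        new Translation2d(5.5, 4.8),
        new Translation2d(3.0, 4.8));
    return TrajectoryGenerator.generateTrajectory(start, safeWaypoints, goal, config);
  }
}
```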

I would take a look at Pose Estimators — FIRST Robotics Competition documentation. This class has an “addVisionMeasurement()” method that takes in a pose and a timestamp to correct the robot position while still accounting for measurement noise.
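
A hedged sketch of that wiring, assuming the 2023-style SwerveDrivePoseEstimator API (the kinematics, gyro angle, module positions, and vision pose are placeholders for your own hardware and camera code):

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;

public class PoseEstimation {
  private final SwerveDrivePoseEstimator poseEstimator;

  public PoseEstimation(SwerveDriveKinematics kinematics,
                        Rotation2d gyroAngle,
                        SwerveModulePosition[] modulePositions) {
    poseEstimator = new SwerveDrivePoseEstimator(
        kinematics, gyroAngle, modulePositions, new Pose2d());
  }

  /** Call every robot loop with fresh odometry inputs. */
  public void updateOdometry(Rotation2d gyroAngle, SwerveModulePosition[] modulePositions) {
    poseEstimator.update(gyroAngle, modulePositions);
  }

  /** Call whenever vision produces a field-relative pose, with the capture timestamp in seconds. */
  public void addVision(Pose2d visionPose, double timestampSeconds) {
    // The estimator blends this with odometry according to the configured std devs,
    // so a noisy tag reading nudges the pose rather than snapping it.
    poseEstimator.addVisionMeasurement(visionPose, timestampSeconds);
  }

  public Pose2d getEstimatedPose() {
    return poseEstimator.getEstimatedPosition();
  }
}
```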


To avoid certain parts of the field, I think you would have to use some sort of pathfinding algorithm. The most common one is A*, along with a few others whose names I can't remember; you can look them up online. Normally, though, I would think the chaos of a match would limit your success when trying to autonomously navigate more than a few meters.
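
For illustration only, here is a coarse A* sketch over a boolean occupancy grid of the field; the grid resolution and which cells you mark as blocked (walls, charging station) are up to you, and this is not tuned for real-time use on a robot:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class GridAStar {
  public record Cell(int x, int y) {}

  /** blocked[x][y] marks grid cells (walls, charging station) the path must avoid. */
  public static List<Cell> findPath(boolean[][] blocked, Cell start, Cell goal) {
    int width = blocked.length;
    int height = blocked[0].length;
    Map<Cell, Cell> cameFrom = new HashMap<>();
    Map<Cell, Integer> gScore = new HashMap<>();
    gScore.put(start, 0);
    // Frontier ordered by f = g + Manhattan-distance heuristic to the goal.
    PriorityQueue<Cell> open = new PriorityQueue<>(Comparator.comparingInt(
        (Cell c) -> gScore.get(c) + Math.abs(c.x() - goal.x()) + Math.abs(c.y() - goal.y())));
    open.add(start);
    int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    while (!open.isEmpty()) {
      Cell current = open.poll();
      if (current.equals(goal)) {
        // Walk back through cameFrom to reconstruct the path.
        List<Cell> path = new ArrayList<>(List.of(current));
        while (cameFrom.containsKey(current)) {
          current = cameFrom.get(current);
          path.add(0, current);
        }
        return path;
      }
      for (int[] move : moves) {
        Cell next = new Cell(current.x() + move[0], current.y() + move[1]);
        boolean outOfBounds =
            next.x() < 0 || next.y() < 0 || next.x() >= width || next.y() >= height;
        if (outOfBounds || blocked[next.x()][next.y()]) {
          continue;
        }
        int tentative = gScore.get(current) + 1;
        if (tentative < gScore.getOrDefault(next, Integer.MAX_VALUE)) {
          cameFrom.put(next, current);
          gScore.put(next, tentative);
          open.remove(next); // remove and re-add so the queue re-orders with the new g score
          open.add(next);
        }
      }
    }
    return List.of(); // no path found
  }
}
```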


Interesting! Yeah, I don't think you'd try more than a few meters. Thanks for those links!!

You can do something like this with an X, Y, and rotation controller to go to a desired position. Here that pose is called “goalPos”, which they feed from an AprilTag. You could populate it with anything.
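
A minimal sketch of that pattern, assuming WPILib's PIDController and ChassisSpeeds (the pose supplier, drive consumer, and gains are placeholders):

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class DriveToPoseSketch {
  private final PIDController xController = new PIDController(3.0, 0.0, 0.0);
  private final PIDController yController = new PIDController(3.0, 0.0, 0.0);
  private final PIDController thetaController = new PIDController(2.0, 0.0, 0.0);
  private final Supplier<Pose2d> poseSupplier;  // e.g. poseEstimator::getEstimatedPosition
  private final Consumer<ChassisSpeeds> drive;  // e.g. drivetrain::drive (robot-relative speeds)
  private final Pose2d goalPose;                // e.g. an AprilTag pose plus an offset

  public DriveToPoseSketch(Supplier<Pose2d> poseSupplier, Consumer<ChassisSpeeds> drive, Pose2d goalPose) {
    this.poseSupplier = poseSupplier;
    this.drive = drive;
    this.goalPose = goalPose;
    thetaController.enableContinuousInput(-Math.PI, Math.PI); // heading wraps at +/- pi
  }

  /** Call periodically, e.g. from a Command's execute(). */
  public void execute() {
    Pose2d robotPose = poseSupplier.get();
    double xSpeed = xController.calculate(robotPose.getX(), goalPose.getX());
    double ySpeed = yController.calculate(robotPose.getY(), goalPose.getY());
    double omega = thetaController.calculate(
        robotPose.getRotation().getRadians(), goalPose.getRotation().getRadians());
    // Field-relative speeds converted to robot-relative chassis speeds for the swerve drive.
    drive.accept(ChassisSpeeds.fromFieldRelativeSpeeds(xSpeed, ySpeed, omega, robotPose.getRotation()));
  }
}
```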


If you are using PathPlanner, you can generate an on-the-fly trajectory (see the “On-the-fly Generation” section of the PathPlanner docs) and follow it using the PPSwerveControllerCommand.
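
A rough sketch of what that can look like, assuming the 2023 PathPlannerLib API (double-check the signatures against the PathPlanner docs; the suppliers/consumers stand in for your own drivetrain subsystem, and you would build this command at button-press time so the starting pose is fresh):

```java
import com.pathplanner.lib.PathConstraints;
import com.pathplanner.lib.PathPlanner;
import com.pathplanner.lib.PathPlannerTrajectory;
import com.pathplanner.lib.PathPoint;
import com.pathplanner.lib.commands.PPSwerveControllerCommand;
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModuleState;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Subsystem;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class OnTheFlyPaths {
  /** Builds a command that generates a path from the current pose to goalPose and follows it. */
  public static Command driveToPose(
      Supplier<Pose2d> poseSupplier,
      SwerveDriveKinematics kinematics,
      Consumer<SwerveModuleState[]> moduleStateConsumer,
      Subsystem driveSubsystem,
      Pose2d goalPose) {
    Pose2d currentPose = poseSupplier.get();
    // A PathPoint's heading is the direction of travel, so aim it at the goal.
    Rotation2d travelHeading =
        goalPose.getTranslation().minus(currentPose.getTranslation()).getAngle();

    PathPlannerTrajectory trajectory = PathPlanner.generatePath(
        new PathConstraints(3.0, 2.0), // max velocity (m/s), max acceleration (m/s^2)
        new PathPoint(currentPose.getTranslation(), travelHeading, currentPose.getRotation()),
        new PathPoint(goalPose.getTranslation(), travelHeading, goalPose.getRotation()));

    return new PPSwerveControllerCommand(
        trajectory,
        poseSupplier,
        kinematics,
        new PIDController(3.0, 0, 0), // x
        new PIDController(3.0, 0, 0), // y
        new PIDController(2.0, 0, 0), // rotation
        moduleStateConsumer,
        driveSubsystem);
  }
}
```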


While it might not have an amazing gain-to-cost ratio, it would be really interesting to layer in Zebra tag data to create a heat map that further informs the pathfinding algorithm. Heck, you might even be able to scrape data from the simulator game and use it to train a drive system robust enough to play a match without driver control. It would certainly be an interesting challenge.

I was going to share our ChaseTagCommand, but I see you already did :slight_smile: This will drive straight (within the motion profile’s ability) to the pose. Driving straight to the pose is pretty easy with swerve since you can move in any direction and rotate independently.

Our current thinking is that the driver will have a button to press that drives straight to a pose, but releasing the button will return control to the driver. The driver can do a better job avoiding obstacles, especially on a field with other moving robots and game pieces.


We already implemented this exact feature using your command as inspiration.

Here is the more detailed implementation; see the sketch after the list. My team's policy prohibits general code release, but my DMs are open.

  • driver presses DPAD up, right, or left.
  • take our current pose and find the closest AprilTag pose to ourselves.
  • we check the ID of that pose to see if it's a grid or an HP station.
  • we use that to decide which set of transforms to use.
  • we use which DPAD button was pressed to index which transform to use.
    • example: grid center is for cubes, while grid right/left is in line with the cone poles.
  • apply the correct transform to the AprilTag pose to get a goal pose.
  • pipe goal pose to the PID controllers.
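
Not their actual code, but an illustrative sketch of the flow in that list (the tag IDs, transforms, and drive-to-pose step are all placeholders):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class GoalPoseSelector {
  // Field-relative poses of the relevant AprilTags, keyed by tag ID (placeholder data).
  private final Map<Integer, Pose2d> tagPoses;
  // Tag-relative offsets to each scoring/pickup spot, keyed by D-pad direction ("up", "left", "right").
  private final Map<String, Transform2d> gridTransforms;
  private final Map<String, Transform2d> stationTransforms;
  private final List<Integer> gridTagIds; // tag IDs that belong to grids rather than HP stations

  public GoalPoseSelector(Map<Integer, Pose2d> tagPoses,
                          Map<String, Transform2d> gridTransforms,
                          Map<String, Transform2d> stationTransforms,
                          List<Integer> gridTagIds) {
    this.tagPoses = tagPoses;
    this.gridTransforms = gridTransforms;
    this.stationTransforms = stationTransforms;
    this.gridTagIds = gridTagIds;
  }

  /** dpad is "up", "left", or "right"; the returned pose gets piped to the drive-to-pose controllers. */
  public Pose2d selectGoal(Pose2d robotPose, String dpad) {
    // 1. Find the AprilTag closest to our current pose.
    int closestId = tagPoses.entrySet().stream()
        .min(Comparator.comparingDouble((Map.Entry<Integer, Pose2d> e) ->
            e.getValue().getTranslation().getDistance(robotPose.getTranslation())))
        .orElseThrow()
        .getKey();

    // 2. Grid or HP station? That decides which set of transforms applies.
    Map<String, Transform2d> transforms =
        gridTagIds.contains(closestId) ? gridTransforms : stationTransforms;

    // 3. Index the set with the D-pad direction and apply it to the tag pose.
    return tagPoses.get(closestId).transformBy(transforms.get(dpad));
  }
}
```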
