Hi, so I watched 4414's Behind the Bumpers (from Charged Up), and something I found really interesting was their button board. When I first heard about it I was completely stunned, since I didn't know you could generate trajectories like that. If I wanted to get started with trajectories, where would I start? I'm not able to find a ton of resources on this topic.
My team didn't have much experience using trajectories like this due to COVID, but we figured it would be very useful this year, especially with certain positions being visited so frequently. We already used pre-programmed trajectories made in PathWeaver for our autos, so we already had the code to follow trajectories. We discovered this year that WPILib actually has great built-in classes for generating trajectories on the robot. This link is a good starting point to see how to generate and follow trajectories. Depending on your drivetrain, you may want to use PathPlanner instead if you have swerve. (My team doesn't have swerve, so I don't really know how trajectories with swerve work, but it's basically the same.)
Then, to generate trajectories from your current location to another, you also need some sort of position localization. Since we had AprilTags this year, teams used cameras mounted at known points on the robot. You can use a Limelight or PhotonVision to do this. My team used PhotonVision with a Raspberry Pi 4 and a USB camera. You just need to calibrate it once and then you're good to go! Then you need to add a pose estimator to combine these vision measurements with odometry to create an estimate of the robot's current position.
WPILib uses Pose2d to store positions and headings on the field, so the trajectory generator and pose estimator use these data types.
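Here's roughly what the pose estimator side can look like for a differential drive. This is a stripped-down sketch rather than our exact code; the track width, starting values, and the `addVision` helper are placeholders:

```java
// Fuse vision poses with drivetrain odometry using WPILib's DifferentialDrivePoseEstimator.
import edu.wpi.first.math.estimator.DifferentialDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.DifferentialDriveKinematics;
import edu.wpi.first.wpilibj.Timer;

public class Localization {
  // Track width is an example value; measure your own drivetrain.
  private final DifferentialDriveKinematics kinematics = new DifferentialDriveKinematics(0.6);

  private final DifferentialDrivePoseEstimator poseEstimator =
      new DifferentialDrivePoseEstimator(
          kinematics,
          new Rotation2d(), // initial gyro angle
          0.0, 0.0,         // initial left/right encoder distances (meters)
          new Pose2d());    // initial field pose

  /** Call every loop with the latest drivetrain sensor readings. */
  public void updateOdometry(Rotation2d gyroAngle, double leftMeters, double rightMeters) {
    poseEstimator.update(gyroAngle, leftMeters, rightMeters);
  }

  /** Call whenever the camera produces a field-relative pose (e.g. from PhotonVision). */
  public void addVision(Pose2d visionPose, double latencySeconds) {
    poseEstimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySeconds);
  }

  /** The fused estimate used as the starting point for on-the-fly trajectories. */
  public Pose2d getPose() {
    return poseEstimator.getEstimatedPosition();
  }
}
```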
Once you know where you are, you need to know where you want to go. For this, my team stored the location of every scoring node on the field and used the buttons on our joystick to select which position we wanted. There might be a better way to store these positions, but this was the easiest for us at the time.
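For example, something like this (a rough sketch; the coordinates and button numbers are made up, not real field values):

```java
// Store one Pose2d per scoring node and let the joystick buttons pick which one is active.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj2.command.Commands;
import edu.wpi.first.wpilibj2.command.button.JoystickButton;

public class NodeSelector {
  // One pose per scoring node (x, y in meters, heading facing the grid).
  private final Pose2d[] nodes = {
      new Pose2d(1.85, 0.51, Rotation2d.fromDegrees(180)),
      new Pose2d(1.85, 1.07, Rotation2d.fromDegrees(180)),
      new Pose2d(1.85, 1.63, Rotation2d.fromDegrees(180)),
      // ... one entry per node
  };

  private int selected = 0;

  public NodeSelector(Joystick buttonBoard) {
    for (int i = 0; i < nodes.length; i++) {
      final int index = i;
      // Button 1 selects node 0, button 2 selects node 1, and so on.
      new JoystickButton(buttonBoard, i + 1)
          .onTrue(Commands.runOnce(() -> selected = index));
    }
  }

  public Pose2d getSelectedNode() {
    return nodes[selected];
  }
}
```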
Then you can pass these positions (along with any other waypoints you want it to pass through on the way) into the trajectory generator to create a path to follow. The last thing to do is run the trajectory-following command, and then you're all done!
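The generate-and-follow step can look roughly like this. Again a sketch: `drive`, `nodeSelector`, and `followTrajectoryCommand` stand in for whatever your drivetrain subsystem and auto follower already provide; they aren't WPILib classes:

```java
import java.util.List;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.trajectory.Trajectory;
import edu.wpi.first.math.trajectory.TrajectoryConfig;
import edu.wpi.first.math.trajectory.TrajectoryGenerator;

public void driveToSelectedNode() {
  // Max velocity (m/s) and acceleration (m/s^2) for the generated path.
  TrajectoryConfig config = new TrajectoryConfig(2.0, 1.5);

  Pose2d start = drive.getPose();              // from the pose estimator
  Pose2d end = nodeSelector.getSelectedNode(); // from the button board / joystick

  Trajectory trajectory = TrajectoryGenerator.generateTrajectory(
      start,
      List.of(), // interior waypoints (Translation2d) if you need to route around something
      end,
      config);

  // Schedule the same follower (e.g. a RamseteCommand) you already use in autonomous.
  drive.followTrajectoryCommand(trajectory).schedule();
}
```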
I know this is a lot, but this is everything my team did to get this system working. If you need any more help, feel free to reach out!
Great reply by @AkeBoss. I'd just like to add the following tool as something I personally found very helpful for logging and debugging my code: AdvantageScope, by team 6328. It's a great tool for logging data from every subsystem on your robot, and I highly recommend watching the presentation video by Jonah in the link above. Getting trajectory generation and vision working can be quite challenging, but having this debugging tool will definitely help you keep track of what data is being collected and see if there are any discrepancies.
Yeah, I definitely agree. Simulation is another really good way to see if your trajectory generation code is actually going to work. We are going to use AdvantageKit next year to make it easier to run simulated versions of our code too. I love 6328 lol. My favorite team.
Here's a GIF of a simulation I used to test our code before we actually ran it. We tested our code the day before competition, and thankfully it all worked fine because we had simulated it beforehand. Logging and data analysis are also really useful for understanding how it should have worked and how it actually did.
Two big things to make this happen:
- Get the location and velocity of the robot.
We use LL3s and the pinhole camera model shown here to calculate the distance to each tag; we then combine that distance with the rotation of the robot and the known location of each tag to get an estimated robot position. That position is fused with odometry using WPILib's SwerveDrivePoseEstimator.
Getting the (field-relative) robot velocity can be done by differentiating your odometry position, or by using some of the built-in chassis speed helpers.
- Generate and follow trajectories
Plug the robot's current position and current velocity into WPILib's trajectory generator, then follow the path however you would in auto (rough sketch of both steps below). If you're not already generating and following trajectories in auto, that's a much better place to start, and there are some great resources linked above.
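This isn't our exact code; the helper names, angle-sign conventions, and constants are all placeholders, and it ignores the camera's offset from robot center:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.trajectory.TrajectoryConfig;

// 1) Pinhole distance to a tag from known camera/tag heights, the camera pitch,
//    and the vertical angle to the target reported by the Limelight (ty).
double distance = (TAG_HEIGHT - CAMERA_HEIGHT) / Math.tan(CAMERA_PITCH + tyRadians);

// Field bearing from robot to tag = gyro heading plus the horizontal angle to the
// target (tx, sign flipped because tx is positive to the right).
Rotation2d bearing = gyroHeading.plus(Rotation2d.fromRadians(-txRadians));

// Walk backwards from the known tag location to get an estimated robot position,
// then fuse it with odometry.
Translation2d robotTranslation = tagTranslation.minus(new Translation2d(distance, bearing));
poseEstimator.addVisionMeasurement(
    new Pose2d(robotTranslation, gyroHeading), captureTimestampSeconds);

// 2) Field-relative velocity from the swerve kinematics, used to seed the trajectory
//    so the path doesn't assume the robot is starting from a standstill.
ChassisSpeeds robotRelative = kinematics.toChassisSpeeds(moduleStates);
Translation2d fieldVelocity =
    new Translation2d(robotRelative.vxMetersPerSecond, robotRelative.vyMetersPerSecond)
        .rotateBy(gyroHeading);

TrajectoryConfig config =
    new TrajectoryConfig(MAX_VEL, MAX_ACCEL).setStartVelocity(fieldVelocity.getNorm());
```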
There was a lot of time spent on the initial setup and then on improving our localization methods; the trajectory generation worked relatively flawlessly the whole season.
P.S. We ran an incredibly simplified version of our auto-drive at both offseason events with a ton of success; it was about 10% of the effort and performed equally well, if not better at times. It used the same pinhole method described above but drove in a straight line to the closest target using PID. The whole button board + on-the-fly trajectory generation was a crazy 1% optimization rabbit hole that we went down. We tend to go down a lot of these (12-motor swerve being another example) and never know which ones will pay off. A big challenge of FRC is figuring out where to apply your team's time and resources, and this was definitely one of the lowest-ROI optimizations we made.
Thank you guys for the comments. I will definitely look into what WPILib has on the matter, because I didn't know about the libraries until now. I just want to be clear on how it works: basically we can use something like PathFinder that auto-generates trajectories for us given an end point, start point, and velocity. For the start point, you have to figure out your position, which I've read can be done with triangulation, so it really depends on where we place our coprocessors (right now the optimal setup I have planned is one LL3 and two Raspberry Pis) and placing them so that we have two targets in view at all times, then triangulating the position. With that we just plug in the start point, and if we know the end point we just plug that into the trajectory generator.
Was there a specific reason you used this method instead of the Limelight's 3D AprilTag tracking?
After trying both extensively, the pinhole model just gave far better results.
I assume this will always be the case unless you have an extremely dynamic environment (unknown target locations, moving targets, LOTS of targets at many different heights).
One thing we found super useful this year was combining auto-alignment with the driver's controller, which let us control the robot while it was moving autonomously. This meant we needed a slightly different routine for following paths in teleop, but it worked very similarly to how it would in auto. We used a holonomic drive controller to get speeds for the bot to follow, using the same PID gains as we would in autonomous, except we combined the output of those speeds with the driver's desired speed. That allowed the alignment to control how the robot moved on the Y axis of the field (towards the scoring table) while letting the driver control the robot's position on the X axis of the field (towards the driver station).
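Here's a simplified sketch of the idea using plain PID controllers in place of the full holonomic drive controller. The gains, `drive.runVelocity`, and `nodeSelector` are placeholders, not our actual code:

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

PIDController yController = new PIDController(3.0, 0.0, 0.0);     // same gains as auto
PIDController thetaController = new PIDController(4.0, 0.0, 0.0);
thetaController.enableContinuousInput(-Math.PI, Math.PI);

Pose2d pose = drive.getPose();
Pose2d target = nodeSelector.getSelectedNode();

double xSpeed = driverXMetersPerSecond;                            // driver owns field X
double ySpeed = yController.calculate(pose.getY(), target.getY()); // alignment owns field Y
double omega = thetaController.calculate(
    pose.getRotation().getRadians(), target.getRotation().getRadians());

// Convert the mixed field-relative speeds into robot-relative speeds for the drivetrain.
drive.runVelocity(
    ChassisSpeeds.fromFieldRelativeSpeeds(xSpeed, ySpeed, omega, pose.getRotation()));
```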
Here's what it looks like in action. Notice that the bot doesn't move on the field's X axis until it is lined up, because the driver didn't want the bumpers to hit the grid.
Is there a reason you calculated the robot's position relative to the tags by hand instead of using the built-in tag localization that comes with the Limelight?
Hello @jjsessa @GalexY
Why did you decide to use WPILib instead of PathPlanner? I'm thinking about which one to use, so could you share some of your experience and suggestions?
The WPILib trajectory generation algorithm is very much designed around differential drive constraints. For a swerve, it’s almost always going to be better to just go directly to your target instead of approximating the type of path a non-holonomic robot is forced to take.
It’s still probably worth using a profile along the direct path so there aren’t sharp “kicks” in your driving signals, though.
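For example, something like this, using the 2023-style TrapezoidProfile API; `drive.getPose()`, `targetPose`, and `currentSpeedTowardGoal` are assumptions standing in for your own code:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.trajectory.TrapezoidProfile;

Pose2d pose = drive.getPose();
Translation2d toGoal = targetPose.getTranslation().minus(pose.getTranslation());

// Profile the straight-line distance to the goal so the speed ramps smoothly
// instead of stepping the setpoint straight to the target.
TrapezoidProfile profile = new TrapezoidProfile(
    new TrapezoidProfile.Constraints(3.0, 2.0),               // max vel (m/s), max accel (m/s^2)
    new TrapezoidProfile.State(toGoal.getNorm(), 0.0),        // goal: arrive at rest
    new TrapezoidProfile.State(0.0, currentSpeedTowardGoal)); // current progress along the line

// Sample slightly into the future and command that speed along the direct line
// (heading control would be layered on top with its own controller).
double speed = profile.calculate(0.02).velocity;
Translation2d fieldVelocity = new Translation2d(speed, toGoal.getAngle());
```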
Both 2019 and 2023 greatly benefited from NOT doing this… There are tons of clever ways to fake this; a few others we tried:
- Bias the PID gains to reach 0 y-axis error faster.
- Drive to a point a few feet in front of the target before driving to the final goal.
Neither handled edge cases as well, such as the driver starting fully at the wrong scoring location when beginning auto-drive.
I'd love to see a better algorithm out of the box for this, but I think the current trajectory implementation gets you much closer in a lot of cases than not, especially with some very simple manual constraints (don't start auto-driving if the driver is driving fast away from the goal).
People have been working on this for a while, but it’s a very difficult problem and it depends on a ton of hyper-parameters (you have to tell the optimizer what it is you actually care about) that make it hard to support “out of the box” sensibly.
The WPILib implementation uses clamped cubic splines, primarily because they’re really easy to compute and they maintain some invariants that are helpful for unicycle drives. For a holonomic drive, the “search space” is waaaaaaaaaay bigger and a more-flexible/less-analytic approach (e.g. line-segment paths and pure pursuit) is probably a good idea.
Our auto-alignment gave the driver the power to use the D-Pad to traverse the grid, in case they wanted to change where they were scoring. Clicking the D-Pad left or right would shift our desired position one node in that direction. It would also skip over parts of the grid we couldn't score on, depending on whether we had a cone or a cube. Here's what that looks like:
It’s just an alternative to having a full grid of buttons for the operator to use (which was super creative by the way, I really liked that).
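The selection logic itself can be pretty small. Here's a hypothetical sketch (the node count and `isValidNode` check are placeholders, not our actual code):

```java
// Shift the selected node index with the D-Pad, skipping nodes the current
// game piece can't score on and stopping at the edges of the grid.
int moveSelection(int selected, int direction, boolean hasCone) {
  int next = selected;
  do {
    next += direction; // -1 = D-Pad left, +1 = D-Pad right
    if (next < 0 || next >= NUM_NODES) {
      return selected; // hit the edge of the grid, keep the current node
    }
  } while (!isValidNode(next, hasCone)); // e.g. skip cube-only nodes while holding a cone
  return next;
}
```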
We used WPILib for on-the-fly trajectories and PathPlanner for our actual autonomous. The reason we went with that split (after testing both) was that combining driver controller speeds with alignment speeds was best done by working with a raw holonomic drive controller, which is ultimately what PathPlanner would use if you were making a trajectory on the fly. We are just simplifying the process by jumping right to that calculation.
We've been working on our own trajectory creation, with custom coprocessor path planning (leveraging PathPlannerLib). A "final approach" distance can be specified, and the program automatically calculates the start point of the final approach based on the desired destination pose.
Works well when activated while the driver is manually driving too.
The yellow ghost is the start point of the "final approach" that the robot is pathfinding to, and the green ghost is the desired final pose.
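One way to compute that final-approach start pose (a sketch with an assumed approach distance, not necessarily what our coprocessor does internally):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

double approachDistance = 0.75; // meters, tunable

// Back the goal pose up along its own heading to get the start of the final approach
// (the yellow ghost); the robot pathfinds to this pose and then drives straight in
// to the goal (the green ghost).
Pose2d goal = desiredFinalPose;
Pose2d approachStart = goal.transformBy(
    new Transform2d(new Translation2d(-approachDistance, 0.0), new Rotation2d()));
```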
It has been a while, and I've spent a ton of time researching trajectory generation. We've been working on the auton trajectories, where we have two clearly defined points, so I was wondering how we would move on to teleop trajectory generation. Our swerve code contains a getPose method which gives our current position (we are running the MAXSwerve Java template), so would we just set the points we want to go to and make trajectories like that?
Is it possible for you to share this fully?
I've been thinking about how this could be done because I found 2D AprilTag tracking to be more robust on the LL than 3D. I've done the calculation for distance to a 2D target using that same method before, but I'm wondering how you approached calculating the final pose from the known tag positions.