I am trying to incorporate cargo tracking (with a Limelight) into our autonomous mode this year, with the ultimate goal of scoring 5 or 6 cargo. I know how to use PathPlanner/PathWeaver to make trajectories, but obviously doing it that way wouldn't incorporate cargo tracking. Cargo tracking would allow us to still grab the ball if it were in a slightly different spot or if our odometry were a little off.
So I was thinking of using trajectories to get close to where the balls are supposed to be and then running the command to seek the ball right in front of us. The problem is that the robot would end up in a different spot each time after the seek command, depending on where the ball was. So I need to generate a trajectory from where the seek command ends to where the next ball is.
I know I could just use the
TrajectoryGenerator.generateTrajectory(List<Pose2d> waypoints, TrajectoryConfig config)
method and just put in two waypoints with one as the current pose and one as the pose of the next cargo, but I am wondering if there is a better way. I feel like that wouldn’t be as smooth as a more defined trajectory.
We are using mecanum drive, if that matters.
This is the way to do it. Making this work smoothly is the job of your odometry implementation.
So basically if the odometry works well for the Pathplanner trajectories, it will work well for this? Will I need to tune anything differently or something like that?
Edit: also, will it take a non-negligible amount of processing time to compute these trajectories mid-match?
Yes. The determining factor will be the accuracy of the poses generated from your ball-tracking algorithm.
Depends on what sort of trajectory you’re computing. For simple spline trajectories with minimal curvature, no. For optimized swerve trajectories, potentially yes.
I plan on getting the distance from the Limelight to the ball (like I would for the hub), treating that distance as a radius, and using the Limelight tx value to find which point on the circle the ball is at. Then I add the resulting delta x and y values to the robot's pose, accounting for the offset from the Limelight to the robot center. Thoughts?
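The geometry described above can be sketched in plain Java. This is a minimal illustration, not the poster's actual code: the Limelight mounting offset, the CCW-positive heading convention, and the assumption that positive tx means the target is to the right of the crosshair are all assumptions that would need to be checked against your robot and the Limelight documentation.

```java
public class BallPoseEstimator {
    // Hypothetical mounting offset of the Limelight from robot center,
    // in meters, in the robot frame (+x forward, +y left).
    static final double LIMELIGHT_OFFSET_X = 0.3;
    static final double LIMELIGHT_OFFSET_Y = 0.0;

    /**
     * Returns {ballX, ballY} in field coordinates.
     * robotHeading is in radians, CCW-positive; txDegrees is the Limelight's
     * horizontal offset, assumed positive when the target is to the right.
     */
    static double[] estimateBallPose(double robotX, double robotY, double robotHeading,
                                     double txDegrees, double distanceMeters) {
        // Limelight position in the field frame: rotate the mounting offset
        // by the robot heading, then translate by the robot pose.
        double llX = robotX + LIMELIGHT_OFFSET_X * Math.cos(robotHeading)
                            - LIMELIGHT_OFFSET_Y * Math.sin(robotHeading);
        double llY = robotY + LIMELIGHT_OFFSET_X * Math.sin(robotHeading)
                            + LIMELIGHT_OFFSET_Y * Math.cos(robotHeading);
        // Bearing to the ball: positive tx (target to the right) is a
        // clockwise, i.e. negative, rotation in a CCW-positive frame.
        double bearing = robotHeading - Math.toRadians(txDegrees);
        return new double[] { llX + distanceMeters * Math.cos(bearing),
                              llY + distanceMeters * Math.sin(bearing) };
    }
}
```

For example, with the robot at the origin facing +x, tx = 0, and a measured distance of 2 m, the ball estimate lands 2 m in front of the Limelight, i.e. at (2.3, 0) in the field frame given the 0.3 m mounting offset above.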
Once I get the ball pose, should I do the same thing as above where I generate another simple trajectory? What if the ball is moving? Or do you think I should just do an x, y, and heading PID onto the pose of the ball?
I’d probably switch to a pursuit of the ball once I’m sufficiently close, and hope that the transients don’t wreck my odometry. All of this will have to be shaken out in practice, though - you might need a very refined solution, or you might get away with a naive first take that dead-reckons to the first estimated ball position. I have no idea, really.
Could you explain what you mean by this? Is the pursuit the PID method I mentioned? How would it mess up the odometry?
With the PID method, I would just continually recalculate the pose of the ball with each new limelight image so that the PID would still follow it if it’s moving
Edit: Also we plan to use this same command in teleop to grab a ball we are at least somewhat close to
Yes, pursuit would be a feedback-based method. The motion is less-controlled during this phase than during preplanned routines, so it’s more likely that your odometry will drift.
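A feedback pursuit of the kind discussed here - recomputing the ball pose each frame and closing the loop on it - might look something like the following proportional controller for a holonomic (mecanum) drive. This is a hedged sketch, not either poster's implementation; the gains, speed cap, and the choice of pointing the intake at the ball are all illustrative assumptions to be tuned on a real robot.

```java
public class BallPursuit {
    // Hypothetical gains and limits - tune these on the robot.
    static final double KP_TRANSLATION = 2.0; // (m/s) per meter of error
    static final double KP_ROTATION = 3.0;    // (rad/s) per radian of error
    static final double MAX_SPEED = 3.0;      // translation cap, m/s

    /**
     * Returns field-relative chassis speeds {vx, vy, omega} driving toward
     * the latest ball estimate. Call this every loop with the newest
     * vision-derived ball pose, so a moving ball is still followed.
     */
    static double[] pursue(double robotX, double robotY, double robotHeading,
                           double ballX, double ballY) {
        double ex = ballX - robotX;
        double ey = ballY - robotY;
        // Turn to face the ball while translating toward it.
        double headingError = Math.atan2(ey, ex) - robotHeading;
        // Wrap the heading error into [-pi, pi] so we turn the short way.
        headingError = Math.atan2(Math.sin(headingError), Math.cos(headingError));
        double vx = clamp(KP_TRANSLATION * ex, MAX_SPEED);
        double vy = clamp(KP_TRANSLATION * ey, MAX_SPEED);
        return new double[] { vx, vy, KP_ROTATION * headingError };
    }

    static double clamp(double v, double limit) {
        return Math.max(-limit, Math.min(limit, v));
    }
}
```

Because each call uses the freshest ball estimate, this is the "continually recalculate the pose" approach from the post above; the downside, as noted, is that nothing constrains the motion the way a preplanned trajectory does, so odometry drift during this phase is more likely.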
Oh ok. I am not too worried about the odometry drifting because I am using a limelight to help with that too. Thanks!