Autonomously Driving to Goal Pose in Teleop

This year, our team wants to take the next step in improving our software. After reading the Limelight and PhotonVision documentation, as well as watching videos on how it works, we are thinking about implementing a swerve drive pose estimator to fuse vision and odometry. We ultimately plan to do this so that we can use our robot pose and AprilTags to autonomously drive to a scoring position on the reef during teleop. But… we're not sure what the best way is to implement this, and we want to get an idea of how to do it. Does PathPlanner have something that can create a path to a scoring position that we can implement? Or would it be better to implement something with a profiled PID controller that uses the robot's current pose and desired goal pose?

We've seen documentation from Limelight and PhotonVision on aligning based on a visible AprilTag. The issue we have with that approach is that it requires the robot to be continuously looking at a specific AprilTag for its alignment. We know that as we get closer to a scoring position on the reef, our camera won't see the tag it's using for alignment anymore, which makes it hard to align to a position relying solely on it. We could be wrong in this thought process, so we're open to any suggestions and explanations on what to do. Code examples would be appreciated as well. Thanks. :slight_smile:

3 Likes

Assuming you have decent odometry, using a regular PID controller should be good enough for aligning yourself with a scoring position. Additionally, if you want to be able to go to a scoring position while avoiding obstacles, you can use repulsion pathfinding.
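On an FRC robot you'd typically reach for WPILib's `PIDController`/`ProfiledPIDController` in Java, but the core idea is simple enough to sketch in a few lines of Python. All gains and limits below are made-up placeholders, not values from anyone's tuned robot:

```python
import math

def drive_to_pose_output(current, goal, kp_xy=3.0, kp_theta=4.0,
                         max_speed=3.0, max_omega=math.pi):
    """Proportional control toward a goal pose (x, y, theta).

    Returns field-relative (vx, vy, omega). Gains and limits here are
    illustrative; a real robot would tune these (and likely add motion
    profiling / feedforward).
    """
    ex = goal[0] - current[0]
    ey = goal[1] - current[1]
    # Wrap heading error into [-pi, pi] so the robot turns the short way.
    etheta = (goal[2] - current[2] + math.pi) % (2 * math.pi) - math.pi

    vx = max(-max_speed, min(max_speed, kp_xy * ex))
    vy = max(-max_speed, min(max_speed, kp_xy * ey))
    omega = max(-max_omega, min(max_omega, kp_theta * etheta))
    return vx, vy, omega
```

Run this every loop iteration with the pose estimator's latest pose as `current`, and stop when the error drops under a tolerance.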

We have both approaches implemented in our repo, so feel free to check it out. If you have any questions about the stuff on there, lmk.

8 Likes

Ok, I understand how the code is set up. We probably want to use auto alignment when we're roughly within 3 meters of the reef scoring position the driver intends to be at (basically already at one of the six sides of the reef, but not aligned). This gets me to my question: do you think that auto aligning without repulsion pathfinding will still properly/accurately get us to our goal pose? I understand that repulsion pathfinding keeps you from hitting the field obstacles, but is it intended to be used when a robot isn't already at one of the six sides of the reef, or does it also have a purpose when you're already at one of the six sides?

Oh… and also, like I said, we only plan to use auto alignment when we're more or less already there and don't have any obstacles between us and our goal position. Because of that, do you think using the AprilTag ID our vision sees at the branch is a good alternative to your implementation of having buttons to pick which reef branch and position we'd want to be at? Hopefully that made sense… feels like I'm rambling :sob:. And thank you for your input and help, too!

Kinda like what is already in the YAGSL-Example with PV? :slight_smile:

Auto aligning without repulsion pathfinding will properly get you to the goal pose. Repulsion pathfinding is there to avoid collisions when the robot cannot drive to the goal pose in a straight line.

Repulsion pathfinding isn’t meant to be used when the robot is at one of the 6 sides. Regular PID auto align would work best in that situation.

In our code, we project a direct line from the robot's current pose to the goal pose and check whether it would collide with the reef. If there is no collision, we use regular PID auto align. If there is a collision, we use repulsion pathfinding until a collision is no longer detected, then switch over to regular PID auto align.
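That switching logic is easy to sketch. This is not the linked repo's actual code, just a Python illustration that models the reef as a circle; `REEF_CENTER` and `REEF_RADIUS` are placeholder numbers, not official field coordinates:

```python
import math

REEF_CENTER = (4.5, 4.0)   # placeholder field coordinates
REEF_RADIUS = 1.0          # placeholder keep-out radius around the reef

def segment_hits_reef(start, goal, center=REEF_CENTER, radius=REEF_RADIUS):
    """True if the straight segment from start to goal passes through
    the reef circle. Uses the closest point on the segment to the center."""
    ax, ay = start
    bx, by = goal
    cx, cy = center
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        t = 0.0
    else:
        # Parameter of the closest point on the segment, clamped to [0, 1].
        t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / length_sq))
    px, py = ax + t * dx, ay + t * dy
    return math.hypot(cx - px, cy - py) < radius

def pick_mode(robot_pose, goal_pose):
    """Repulsion only while the direct line would clip the reef;
    plain PID auto align otherwise."""
    return "repulsion" if segment_hits_reef(robot_pose, goal_pose) else "pid"
```

A real implementation would probably use a polygon or an inflated hexagon for the reef rather than a circle, but the check is the same shape.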

IMO, using AprilTags for this is not a good solution because your cameras may not be able to pick up the tags if your robot is oriented weirdly. Instead, I would recommend just having a button that aligns to the closest branch.
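"Closest branch" is just a nearest-pose lookup against a fixed list, so no tag needs to be in view when the driver presses the button. A minimal Python sketch; the poses in `BRANCH_POSES` are hypothetical stand-ins, not real field coordinates:

```python
import math

# Hypothetical scoring poses (x, y, heading) for the six reef faces.
# Real numbers would come from the field drawings / an AprilTag field layout.
BRANCH_POSES = [
    (3.20, 4.00, 0.000),
    (3.85, 5.10, -1.047),
    (5.15, 5.10, -2.094),
    (5.80, 4.00, 3.142),
    (5.15, 2.90, 2.094),
    (3.85, 2.90, 1.047),
]

def closest_branch(robot_pose, branches=BRANCH_POSES):
    """Return the branch pose nearest the robot's current (x, y).

    Bind this to a single button: the estimated pose picks the target,
    then the drive-to-pose controller takes over.
    """
    rx, ry = robot_pose[0], robot_pose[1]
    return min(branches, key=lambda p: math.hypot(p[0] - rx, p[1] - ry))
```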

2 Likes

Ya! That getAprilTagPos() method seems like something we could implement for creating our goal poses. Thanks for letting me know about it!

Ohhhh, ok, that makes sense. I really appreciate your help with this. We'll def explore the idea of just having a button that aligns to the closest branch. That seems to make more sense.

Repulsion field planning is undergoing active R&D in the FRC Discord and, AFAIK, has never been tested on a real bot. Use at your own risk, and make sure to check up on the latest developments every so often.

1 Like

Actually, here’s the algorithm running (python code, not the exact linked codebase above):
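A minimal sketch of one step of that kind of potential-field algorithm (again, not the linked codebase; the attractive/repulsive gains and influence distance below are illustrative, not tuned values):

```python
import math

def repulsion_step(pos, goal, obstacles, step=0.05,
                   attract_gain=1.0, repel_gain=0.5, influence=1.5):
    """One gradient step on a simple potential field.

    The goal attracts with constant magnitude; each obstacle, given as
    ((x, y), radius), pushes the point away once it comes within
    `influence` of the obstacle's edge. Iterating this traces a path
    that bends around obstacles on the way to the goal.
    """
    x, y = pos
    gx, gy = goal
    # Attractive component: unit vector toward the goal.
    dgx, dgy = gx - x, gy - y
    dist_goal = math.hypot(dgx, dgy) or 1e-9
    fx = attract_gain * dgx / dist_goal
    fy = attract_gain * dgy / dist_goal
    # Repulsive components, one per nearby obstacle.
    for (ox, oy), radius in obstacles:
        dox, doy = x - ox, y - oy
        d = math.hypot(dox, doy) or 1e-9
        if d < radius + influence:
            push = repel_gain * (radius + influence - d) / d
            fx += push * dox
            fy += push * doy
    # Move a fixed step along the normalized net force.
    norm = math.hypot(fx, fy) or 1e-9
    return (x + step * fx / norm, y + step * fy / norm)
```

This naive form has the usual potential-field caveats (local minima, oscillation near obstacle edges), which is part of why it's still R&D territory.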

It’s not perfect but it’s on our “nice to have” list currently.

Critically, it hasn't been tested on a real field.