Hi all, I’ve been working on my team’s common library for a bit now and have begun to implement pure-pursuit autonomous. The only problem is that I need a robust way of not only accurately locating our robot on the field, but also tracking the locations and speeds of other robots and game pieces. The solution I arrived at (rather begrudgingly) is either stereoscopic- or LIDAR-based SLAM. However, as far as I can tell, there aren’t many great resources out there that offer a sort of “general guide”. So, does anyone have any advice or suggestions for good resources?
FYI, there are better drivetrain controllers for differential drives, like Ramsete (WPILib 2020+) or LTVUnicycleController (expected in WPILib 2023). Swerve can just use three PD controllers (x, y, and heading).
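For swerve, the three-controller idea is simple enough to sketch. This is a toy version with made-up class and gain names, not WPILib’s actual HolonomicDriveController (which packages the same structure with profiled heading control); D terms are omitted for brevity:

```java
// Sketch of holonomic trajectory tracking with three independent
// proportional controllers (x, y, heading). Names are illustrative,
// not WPILib API.
public class ThreePdSketch {
    private final double kPxy;    // translation gain
    private final double kPtheta; // heading gain

    public ThreePdSketch(double kPxy, double kPtheta) {
        this.kPxy = kPxy;
        this.kPtheta = kPtheta;
    }

    /**
     * @param x, y, theta current field-relative pose
     * @param xRef, yRef, thetaRef desired pose from the trajectory sample
     * @param vxFf, vyFf feedforward velocities from the trajectory
     * @return field-relative {vx, vy, omega} commands
     */
    public double[] calculate(double x, double y, double theta,
                              double xRef, double yRef, double thetaRef,
                              double vxFf, double vyFf) {
        double vx = vxFf + kPxy * (xRef - x);
        double vy = vyFf + kPxy * (yRef - y);
        // Wrap heading error into (-pi, pi] before applying the gain.
        double err = Math.IEEEremainder(thetaRef - theta, 2.0 * Math.PI);
        double omega = kPtheta * err;
        return new double[] {vx, vy, omega};
    }
}
```

Because each axis is controlled independently, heading is commanded directly — which is exactly what pure pursuit can’t do.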
Pure pursuit doesn’t offer direct control over heading, so it rounds off corners, it drifts off the trajectory around corners even when it starts on it, and it has bad behavior at the ends of trajectories that has to be worked around by stopping the controller early or artificially extending the trajectory along the tangent to the last waypoint. The controllers mentioned above have none of those problems.
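The corner-rounding falls straight out of the pure-pursuit control law: the robot always steers along an arc toward a point a lookahead distance ahead on the path, so it starts cutting inside a corner before reaching it. A toy version of the steering step (illustrative, not any particular library’s API):

```java
// Minimal pure-pursuit steering step. Given the lookahead (goal) point
// expressed in the robot frame, the commanded curvature is 2*y / L^2 --
// the unique arc from the robot through the goal point. Heading is
// never commanded directly; it just follows the arc.
public class PurePursuitStep {
    /**
     * @param goalX forward offset of the lookahead point (robot frame, meters)
     * @param goalY lateral offset of the lookahead point (robot frame, meters)
     * @return signed curvature (1/m) of the arc to the goal
     */
    public static double curvature(double goalX, double goalY) {
        double lookaheadSq = goalX * goalX + goalY * goalY; // L^2
        return 2.0 * goalY / lookaheadSq;
    }
}
```

Near the end of a trajectory the lookahead circle runs out of path to intersect, which is why the stop-early or extend-the-path workarounds exist.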
Take a look at the pose estimator classes (WPILib 2020+) for robot localization.
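For intuition, the core of a pose estimator is a predict-then-correct loop: odometry predicts the pose each cycle, and a vision measurement nudges the estimate back toward reality. Here’s a toy one-dimensional version with a fixed gain (illustrative only; the real pose estimators run a Kalman-style filter over the full 2D pose):

```java
// Toy 1-D predict/correct loop -- the intuition behind pose estimation.
// Class name and the fixed blend gain are illustrative, not WPILib API.
public class PoseFusion1d {
    private double estimate;         // current position estimate
    private final double visionGain; // 0 = trust odometry only, 1 = trust vision only

    public PoseFusion1d(double initial, double visionGain) {
        this.estimate = initial;
        this.visionGain = visionGain;
    }

    /** Predict: integrate the odometry delta since the last cycle. */
    public void predict(double odometryDelta) {
        estimate += odometryDelta;
    }

    /** Correct: pull the estimate toward a vision measurement. */
    public void addVisionMeasurement(double measured) {
        estimate += visionGain * (measured - estimate);
    }

    public double getEstimate() {
        return estimate;
    }
}
```

The real classes pick the blend automatically from the noise characteristics you give them, instead of a hand-tuned constant.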
You could detect game pieces or robots using either PhotonVision’s colored shape detection or machine learning with Axon. The former is more efficient if you can define a working classical vision pipeline for the thing you’re trying to track. Machine learning is the fallback when the relationships for object detection aren’t clear.
You may have trouble sourcing hardware to run Axon though.
You could use ComputerVisionUtil (expected WPILib 2023) to turn the object detections into robot-relative poses, which you can then make field-relative using the current pose estimate from the pose estimator mentioned earlier.
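Under the hood, that composition is just a 2D rigid transform: rotate the robot-relative offset by the robot’s heading, then add the robot’s position. A self-contained sketch in plain doubles (WPILib’s geometry classes do the same thing via Pose2d and Transform2d):

```java
// Sketch of turning a robot-relative detection into a field-relative
// position by composing it with the current pose estimate. Names are
// illustrative; this is the math, not the WPILib API.
public class FieldRelative {
    /**
     * @param rx, ry, rTheta robot's field-relative pose (meters, radians)
     * @param dx, dy detected object's offset in the robot frame
     * @return the object's field-frame {x, y}
     */
    public static double[] toField(double rx, double ry, double rTheta,
                                   double dx, double dy) {
        double cos = Math.cos(rTheta), sin = Math.sin(rTheta);
        return new double[] {
            rx + cos * dx - sin * dy, // rotate into field frame, then translate
            ry + sin * dx + cos * dy
        };
    }
}
```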
I’ve heard of using factor graphs and GTSAM for SLAM, but it’s certainly not batteries-included for FRC. Controls-wise, we’re currently focusing on easy-to-use, fast mathematical program solvers for 2023 since that has applications for better drivetrain trajectory generation, but utilities to make SLAM easier are on our roadmap.
This post of mine is also relevant if you want to know the current state of FRC robot automation in general. Some of the stuff mentioned there got implemented recently.
Thanks for the resources! Tbh, the reason I brought up pure pursuit and SLAM was actually because we have already implemented nearly all of what you mentioned (innumerable thanks to you and the rest of the WPILib team). It’s just that I’m attempting to create a sort of “adaptive” autonomous for obstacle avoidance and certain teleop situations. We currently use a custom implementation of WPILib’s trajectory class that rapidly regenerates trajectories based on sensor input and then treats them as one continuous trajectory, a solution that I am generally unhappy with; hence the trend toward a more flexible solution for navigation, especially since SLAM can detect other robots.
Worth noting that SLAM is on the roadmap for future WPILib functionality; if you want, you can try to organize your work with a mind towards helping contribute to that over the next few years.
Seconding this because we’d appreciate it. The work on mathematical program solvers started as a Kotlin project by 2363 (they’ve been innovating on prior trajectory optimization solutions and trying to make them reach real-time performance, or at least better than state-of-the-art). Now, they’ve been helping with R&D and implementation for a WPILib version written in C++ with Java bindings, and the initial results are very promising.
SLAM lets you map obstructions in a systematic way. Also, having an a priori field map theoretically makes it easier to perform loop closures because you have more features to potentially match up (thousands instead of just a handful done manually).
That’s my point. If you have the map, why do you need to do localization and mapping “simultaneously”? Why can’t the map be built ahead of time so you just localize against it without full SLAM?
I think this is the source of my confusion. SLAM is specifically the problem of online map building and then localizing yourself within that map.
If you have an a priori map (whether 2D or 3D or any other representation) and you want to know where you are in that map, that problem is simply localization. If you want to detect dynamic obstacles and plan paths around them, that’s just a perception/world state estimation/planning problem.
Depends on whether you want the obstacles to be part of the map or not. If they are, that’s localization and mapping using the same data structure, hence SLAM.
Conventional SLAM algorithms don’t include dynamic obstacles as part of the map, since you don’t want to be localizing with respect to something that can move around. Dynamic obstacles are generally tracked separately, and planning can be done with the combined map and obstacle tracks.