Specific to machine learning, a few resources:
WPILib is supporting Axon this coming year, but I have not yet tried it.
Team 900 has been playing in this space for years and has some cool results. However, a lot of what they do is focused on pushing the boundary of technology, with longer-term goals. That’s not necessarily the same as “most robot improvement for least input effort”. They’re far from the only ones, just folks I thought of offhand.
Still, the results do end up looking pretty sweet.
I have little personal experience with the machine learning approach unfortunately.
Taking one step back, aligned with what Veg was getting at:
When the object of interest is of known size, relatively uniform color, and contrasts well with the background, an approach requiring the algorithm to learn from iteration can be inefficient - the parameters of detection are already known and do not need to be learned. From this perspective - if you haven’t yet, try looking into some of the more “traditional” techniques: filter for yellow objects on the field, then use solvePnP to get real-world coordinates of the object. From there, path planning can be used to properly align the robot to the object.
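The core of the traditional route is small: threshold on the known color, then use the object’s known size (or solvePnP on its detected corner points) to recover distance. Here’s a minimal sketch in plain NumPy - the HSV bounds and camera numbers are made up, and a real pipeline would use cv2.inRange and cv2.solvePnP instead of these hand-rolled stand-ins:

```python
import numpy as np

# Illustrative HSV bounds for a "yellow" game piece - tune these on your field.
HSV_LO = np.array([20, 100, 100])
HSV_HI = np.array([35, 255, 255])

def yellow_mask(hsv_img):
    """Binary mask of pixels inside the yellow HSV range (what cv2.inRange does)."""
    return np.all((hsv_img >= HSV_LO) & (hsv_img <= HSV_HI), axis=-1)

def estimate_distance(pixel_width, real_width_m, focal_px):
    """Pinhole-camera distance estimate from the object's known real-world width."""
    return real_width_m * focal_px / pixel_width

# Tiny synthetic 2x2 "HSV image": one yellow pixel, three background pixels.
img = np.array([[[25, 200, 200], [0, 0, 0]],
                [[90, 50, 50], [110, 255, 255]]])
mask = yellow_mask(img)
print(int(mask.sum()))  # -> 1 (one yellow pixel detected)

# Assumed numbers: an 0.18 m wide object spanning 40 px with a 600 px focal length.
print(round(estimate_distance(40, 0.18, 600), 2))  # -> 2.7 (meters)
```

The single-measurement distance trick works when the object size is known; solvePnP generalizes it by fitting a full 3D pose from several image points, which is what you’d feed into path planning.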
As Jason mentioned, PhotonVision can do the pose identification part. I’m sure Limelight won’t be far behind.
Taking one more step back, to where Jason was poking:
If your true goal is “most points for least effort”, solving the “intake inconsistent” issue in hardware is definitely preferable. This would free up your time to improve other aspects of the robot with software.
Not saying this is easy. Convincing the humans with the relevant skills to spend time improving it is rarely trivial. A software fix may be the best answer in your particular situation.
But be careful - “fix hardware defect with software” can be a slippery slope for many reasons.