When I speak with the coders, I might as well be speaking to Martians; I barely get the basics. I am trying to understand the process of vision-assisted driving, i.e., driving to a certain position using the vision targets.
I understand how vision processing can determine robot position, but what happens after that? Does the driver push a button and the computer takes over and guides the robot into position? Does the driver control the robot while the computer sort of nudges it when needed? Is there a best practice for this?
FWIW, they use Java and hope to have a Limelight soon, but that seems irrelevant to the question.
Thank you
It all depends on how your programmers program your robot, but for my team, the answer would be yes. We last used vision in 2017 for gear placement. I would line up with the peg (loosely) and press ‘A’. The robot would take complete control from me and drive itself onto the peg. Once aligned, I would regain control, drop the gear, and go on with the match.
This all depends on the implementation, which is dependent on the task at hand. For example, with shooters, it’s common for the driver to get the robot to the right spot, then the vision code kicks in to automatically turn in place and adjust the shooter angle/power. If the shooter is on a turret, some teams like to keep the vision centering it on the target at nearly all times.
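For what it’s worth, here’s a rough sketch in Java of what “vision kicks in to turn in place” can look like, reading the Limelight’s standard tv (target visible) and tx (horizontal offset) entries from NetworkTables. The class name, kP gain, and deadband are made-up placeholders you’d have to tune; this is just an illustration, not a drop-in implementation.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class TurnToTarget {
    // kP and the deadband are placeholder values; they have to be tuned on the real robot.
    private static final double kP = 0.03;
    private static final double DEADBAND_DEG = 1.0;

    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");
    private final DifferentialDrive drive;

    public TurnToTarget(DifferentialDrive drive) {
        this.drive = drive;
    }

    /** Call this every loop while the "align" button is held. */
    public void execute() {
        double tv = limelight.getEntry("tv").getDouble(0.0); // 1 = target visible
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset, degrees

        if (tv < 1.0 || Math.abs(tx) < DEADBAND_DEG) {
            drive.arcadeDrive(0.0, 0.0); // no target, or close enough: stop turning
            return;
        }
        // Simple proportional turn: the bigger the offset, the faster the turn.
        drive.arcadeDrive(0.0, kP * tx);
    }
}
```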
For games like this year’s, where precision placement is involved, vision usually means something like the above, combined with driving forward. Placement itself may be automatic, or the drivers may press a button to finish, as a sort of final verification.
The driver never losing control isn’t something I’ve really heard of; they generally do hand control over temporarily, though you should have some sort of escape in case it’s needed.
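One common way to get that escape for free is to only run the vision code while a button is held, so the driver gets control back the instant they let go. A minimal sketch of that pattern is below; the button number, motor controller types/ports, and the visionAlign() method are placeholders for whatever your robot already has.

```java
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.PWMVictorSPX;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class Robot extends TimedRobot {
    private final Joystick driverStick = new Joystick(0);
    // Motor controller types and ports are placeholders for your actual drivetrain.
    private final DifferentialDrive drive =
        new DifferentialDrive(new PWMVictorSPX(0), new PWMVictorSPX(1));

    @Override
    public void teleopPeriodic() {
        if (driverStick.getRawButton(1)) {
            // Hold button 1: the vision code steers the robot.
            visionAlign();
        } else {
            // Release it: the driver instantly gets normal control back.
            drive.arcadeDrive(-driverStick.getY(), driverStick.getX());
        }
    }

    private void visionAlign() {
        // Proportional steering off the Limelight tx value would go here,
        // e.g. something like the turn-to-target sketch earlier in the thread.
    }
}
```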
Also - encourage communication! What the programmers want isn’t necessarily what will be best on the field, especially if they’re new to vision. As a driver/programmer, I realized my first year that what I imagined myself wanting in terms of vision assistance wasn’t what I really needed. If you have a practice field, make sure your driver is giving feedback on what’s helpful and what’s not, and what could be done to make it better. As with any software assistance feature, if the drivers can’t comfortably use it, it’s… well, not going to be used, lol.
One way to think about it is to relate the vision target information to other types of sensors.
The X-axis of the vision target acts like a gyroscope, and the Y-axis (or target area) can act as a distance sensor, much like an ultrasonic would. If you program some basic PID loops around these assumptions, you can get the robot to keep itself turned toward a particular angle and drive to a particular distance from your target. You can have the driver stay in control of the speed while the ‘gyro’ nudges the steering, as you suggest, and you can use the range finding to prevent crashing.
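In code terms, a sketch of the “driver controls speed, vision nudges steering” idea might look like the following, assuming a Limelight publishing the standard tv/tx/ta entries on NetworkTables. The class name, gains, and area threshold are made-up numbers that would need tuning on a real robot.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

/**
 * Sketch: tx (horizontal offset) is treated like a gyro error for steering,
 * and ta (target area, %) is treated like a rough range reading for crash prevention.
 */
public class VisionAssist {
    private static final double kSteerP = 0.03;   // proportional gain on tx (tune this)
    private static final double kMaxArea = 12.0;  // area at which we consider ourselves "there" (tune this)

    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Steering correction to add on top of the driver's turn input. */
    public double steerAssist() {
        if (limelight.getEntry("tv").getDouble(0.0) < 1.0) {
            return 0.0; // no target in view, don't nudge
        }
        return kSteerP * limelight.getEntry("tx").getDouble(0.0);
    }

    /** Scale the driver's throttle toward zero as the target area grows, to avoid ramming the wall. */
    public double limitThrottle(double driverThrottle) {
        double area = limelight.getEntry("ta").getDouble(0.0);
        double remaining = Math.max(0.0, 1.0 - area / kMaxArea);
        return driverThrottle * remaining;
    }
}
```

Then in teleop you’d do something like `drive.arcadeDrive(assist.limitThrottle(throttle), driverTurn + assist.steerAssist());`, so the driver still feels like they’re driving while the vision data does the fine correction.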
The other alternative is to use motion profiling and path planning to calculate a path to your target, rather than using the vision output directly in a PID loop. The reason you might want to do that is that the output you’re getting from the Limelight won’t be 100% consistently valid: it will jump around, flicker on and off, etc. Using it directly is still possible, but you might want to think about smoothing the data somehow if you aren’t able to use motion profiling.
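As a very rough example of what “smoothing” could mean here: ignore frames where tv reports no target, and run a simple exponential moving average over tx. The class name and the kAlpha value below are just guesses to tune; this is a sketch, not something I’ve proven out.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

/** Hand-rolled smoothing for a flickery Limelight tx reading. */
public class SmoothedTx {
    private static final double kAlpha = 0.3; // 0..1, lower = smoother but laggier (tune this)

    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");
    private double smoothedTx = 0.0;
    private boolean seeded = false;

    /** Call once per loop; returns the smoothed horizontal offset in degrees. */
    public double update() {
        if (limelight.getEntry("tv").getDouble(0.0) >= 1.0) {
            double tx = limelight.getEntry("tx").getDouble(0.0);
            if (!seeded) {
                smoothedTx = tx;  // first valid reading seeds the filter
                seeded = true;
            } else {
                // Exponential moving average: blend the new reading with the history.
                smoothedTx = kAlpha * tx + (1.0 - kAlpha) * smoothedTx;
            }
        }
        // When tv is 0, hold the last smoothed value instead of chasing garbage.
        return smoothedTx;
    }
}
```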
I’m speaking about these things while still working through them myself, so take it with a grain of salt.
The Limelight has been a great little camera for my team this year, so far. It has gotten us to the point where we have off-board vision processing that works essentially plug-and-play, but that just gets us some decent data to start with that we never had before. What we end up with for vision code this season will probably be fairly simplistic, and I expect we’ll need to spend time in the off-season becoming more proficient.
By the way, I keep this link bookmarked for reference. The really good stuff starts on slide 37.
Also, the Limelight docs are a pretty good source of information.