Accurate reef alignment will undoubtedly be critical for optimizing efficiency, but with 36 possible fixed targets, what approaches are we thinking of for target designation? A panel of 36 buttons is obviously best avoided, but drivers still need a way to specify targets for automated alignment and placement.
Some possible options:
AprilTag IDs
Using AprilTag IDs, the robot would be driven within visual range of a tag, and the human would only need to select left or right branch plus height. This does come with the risk that if another robot is obstructing the tag, it will be necessary to wait for a clear view.
Discrete Designation
The driver could also directly designate specific targets, likely through a cycling selector, but this is manually intensive and pulls attention away from the field.
Zone Designations
If zones could somehow be designated as wedges extending from each face of the reef, it would be simple to drive the robot into the needed area, although the zone boundaries could be hard to judge visually near the edges.
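(Rough sketch of what I mean, assuming the robot trusts its pose estimate: the wedge test reduces to a sextant lookup on the angle from reef center to robot. REEF_CENTER here is a placeholder to replace with the real field coordinate.)

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class ReefZones {
  // Placeholder coordinates; substitute your alliance's actual reef center.
  private static final Translation2d REEF_CENTER = new Translation2d(4.49, 4.03);

  /** Returns which of the six reef faces (0-5) the robot's wedge points at. */
  public static int faceFromPose(Pose2d robotPose) {
    Translation2d offset = robotPose.getTranslation().minus(REEF_CENTER);
    double angleDeg = Math.toDegrees(Math.atan2(offset.getY(), offset.getX()));
    // Shift by half a wedge (30 deg) so each face owns the +/-30 deg around
    // its outward normal, wrap into [0, 360), then bucket into 60-deg sextants.
    double shifted = ((angleDeg + 30.0) % 360.0 + 360.0) % 360.0;
    return (int) (shifted / 60.0);
  }
}
```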
Are these reasonable? Am I overthinking this? Is there some obvious solution?
Our team has also been asking this question. One idea I had was to make a custom GUI with a top-down reef graphic so the operator could select which position and level they wanted; the driver could then hit a button that self-aligns to whatever position was selected, via AprilTags. First question: are operators allowed to interact with the Driver Station laptop? Second question: would communication between the driver and the operator be quick enough, considering the operator may also have their own controller to worry about?
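On the plumbing side, the robot could read the GUI's selection over NetworkTables. A minimal sketch; the /ReefSelector/* topic names are made up for illustration and would match whatever the custom dashboard publishes:

```java
import edu.wpi.first.networktables.IntegerSubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;

public class ReefSelector {
  // Hypothetical topic names; the custom GUI would publish to these.
  private final IntegerSubscriber branchSub =
      NetworkTableInstance.getDefault()
          .getIntegerTopic("/ReefSelector/branch").subscribe(-1);
  private final IntegerSubscriber levelSub =
      NetworkTableInstance.getDefault()
          .getIntegerTopic("/ReefSelector/level").subscribe(-1);

  /** Branch index 0-11 as chosen on the dashboard, or -1 if nothing selected. */
  public int selectedBranch() {
    return (int) branchSub.get();
  }

  /** Level 1-4 as chosen on the dashboard, or -1 if nothing selected. */
  public int selectedLevel() {
    return (int) levelSub.get();
  }
}
```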
As a driver, I have had this discussion a few times with our controls team. What we ended up deciding was that it would be simple and efficient to identify an AprilTag and then take right and left positions relative to it: if the driver input is the right bumper, the robot aligns to the right of the tag, and likewise for the left bumper. This still requires me to at least get close to the tag, but the robot will do the fine alignment. As for levels, it will be up to our co-driver to input which level, and they will have to judge when the best time to elevate is. The co-driver will only be responsible for the elevator and intake, and possibly a designated algae mech later on, so the availability of input combinations is quite large.
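In command-based terms the bindings could look something like this sketch; alignToBranch() and the BranchSide enum are hypothetical stand-ins for whatever command actually does the tag-relative fine alignment:

```java
import edu.wpi.first.wpilibj2.command.button.CommandXboxController;

public class DriverBindings {
  public enum BranchSide { LEFT, RIGHT }

  private final CommandXboxController driver = new CommandXboxController(0);

  /** Wires the bumpers to tag-relative alignment. alignToBranch() is a
   *  hypothetical command factory on the drivetrain subsystem. */
  public void configure(Drivetrain drivetrain) {
    driver.leftBumper().whileTrue(drivetrain.alignToBranch(BranchSide.LEFT));
    driver.rightBumper().whileTrue(drivetrain.alignToBranch(BranchSide.RIGHT));
  }
}
```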
I was thinking about the control schema for scoring coral, and I think the best approach would be something along the lines of:
Having an “auto-align” function to the reef side nearest the robot. This would require some sort of odometry, or perhaps just calculating the robot's position from visible tags. It would also let the drivers use less brainpower and stay focused on the general goals of the game.
Aligning the manipulator and robot to score on a specific peg on the reef. This could be done with a joystick: move left and right to select the left/right peg, and up and down to select the level.
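A sketch of that joystick selection, assuming a second Xbox controller on port 1 and a 50% deadband to reject accidental deflection (edge detection keeps the level from rapid-firing while the stick is held):

```java
import edu.wpi.first.wpilibj.XboxController;

/** Latches a branch/level selection from co-driver stick deflections. */
public class TargetPicker {
  private static final double DEADBAND = 0.5; // ignore small stick motion

  private final XboxController stick = new XboxController(1);
  private boolean rightBranch = false; // false = left peg, true = right peg
  private int level = 4;               // default to L4
  private boolean lastUp = false;
  private boolean lastDown = false;

  /** Call from robotPeriodic(); steps the selection on each new deflection. */
  public void update() {
    double x = stick.getLeftX();
    double y = -stick.getLeftY(); // pushing up reads negative on Xbox sticks
    if (Math.abs(x) > DEADBAND) {
      rightBranch = x > 0; // left/right deflection picks the branch
    }
    boolean up = y > DEADBAND;
    boolean down = y < -DEADBAND;
    if (up && !lastUp) level = Math.min(4, level + 1);     // step toward L4
    if (down && !lastDown) level = Math.max(1, level - 1); // step toward L1
    lastUp = up;
    lastDown = down;
  }

  public boolean isRightBranch() { return rightBranch; }
  public int getLevel() { return level; }
}
```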
Rather than defining specific “zones”, you could define the points you want your robot to align with and pick whichever is closest to your current robot pose. This seems to me to be the simplest solution: hold down a button and align with the nearest point. You’re right that it could be difficult for the driver to visually determine which point they are closest to at the edges, so you might need the driver to get close to the point before auto-aligning.
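WPILib's Pose2d even ships a nearest() helper, so a sketch of this is tiny; the branch poses below are placeholders for the real scoring poses:

```java
import java.util.List;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;

public class NearestBranch {
  // Placeholder list: fill in the real scoring poses for all 12 branches.
  private static final List<Pose2d> BRANCH_POSES = List.of(
      new Pose2d(3.2, 4.0, Rotation2d.fromDegrees(0)),
      new Pose2d(3.2, 4.3, Rotation2d.fromDegrees(0))
      // ... remaining branch poses
  );

  /** Returns the scoring pose closest to the robot's current estimate. */
  public static Pose2d target(Pose2d robotPose) {
    return robotPose.nearest(BRANCH_POSES);
  }
}
```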
We’re making something pretty similar to that to select branch (and hopefully level) targets, so that the driver only needs one button for automatic alignment and scoring. Having the targeting controller is nice because it lets us save cycle time by pre-rotating while we move to branches on the opposite side of the reef, opens the door to pathfinding routines that don’t rely on reef AprilTags, and still lets us rotate to a target automatically if our cameras go down. Here’s the prototype:
That looks sick! My team is hopefully going to do something just like that. We weren’t thinking about making them different colors, but as @Skyehawk said, it’s definitely a good idea!
First prototype concept for our new control board (the empty spot is for a joystick). We are trying mechanical keyboard switches instead of generic momentary buttons. I think the buttons on the angled sides of the reef layout should probably be rotated for easier reading.
Our team is deliberating over a button board with six buttons, one for each 1/6 of the reef. The robot would create a trajectory to the selected face and, while following it, use the camera's pinhole model to localize coral and see which spots on that face are already taken. If a spot is available on L4, it will score in L4 unless otherwise specified with another button; if both L4 spots are taken, it will switch to L3, and so on.
There will probably be override buttons if we want to score on, say, L1 no matter what.
If both spots on the level the algorithm wants to score on are open, it chooses the branch closest to the robot and alters the trajectory goal accordingly (sketched below).
Haven’t tested this yet since the Limelight model for object detection isn’t out, but we look forward to trying it.
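A sketch of that priority logic, assuming vision fills in a per-face occupancy grid (occupied[branch][level], branch 0 = left / 1 = right, level index 0 = L1 up to 3 = L4) and that the two branch scoring poses are known:

```java
import java.util.Optional;
import edu.wpi.first.math.geometry.Pose2d;

public class FaceScorer {
  /** A chosen scoring slot: branch 0 = left / 1 = right, level 1-4. */
  public record Slot(int branch, int level) {}

  public static Optional<Slot> pick(
      boolean[][] occupied, Pose2d robotPose, Pose2d leftBranch, Pose2d rightBranch) {
    for (int level = 3; level >= 0; level--) { // prefer L4, fall back toward L1
      boolean leftOpen = !occupied[0][level];
      boolean rightOpen = !occupied[1][level];
      if (leftOpen && rightOpen) {
        // Both branches open at this level: take whichever is closer.
        double dl = robotPose.getTranslation().getDistance(leftBranch.getTranslation());
        double dr = robotPose.getTranslation().getDistance(rightBranch.getTranslation());
        return Optional.of(new Slot(dl <= dr ? 0 : 1, level + 1));
      } else if (leftOpen) {
        return Optional.of(new Slot(0, level + 1));
      } else if (rightOpen) {
        return Optional.of(new Slot(1, level + 1));
      }
    }
    return Optional.empty(); // face is full
  }
}
```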