Strategies for Target Designation

Accurate reef alignment will be critical for optimizing efficiency, but with 36 possible fixed targets, what approaches are teams thinking of for target designation? Having 36 individual buttons should probably be avoided, but drivers still need a way to specify targets for automated alignment and placement.

Some possible options:

AprilTag IDs

Using AprilTag IDs, the robot could be driven within visual range of a tag, and the driver would then only need to select the left or right branch plus a height. The risk is that if another robot is obstructing the tag, it will be necessary to wait for a clear view.
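
A rough sketch of what this could look like in code, assuming PhotonVision and WPILib's bundled AprilTag field layout (the camera name and branch offsets below are made-up placeholders, not measured values):

```java
import java.util.Optional;

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;
import org.photonvision.PhotonCamera;

public class TagBranchSelector {
  private final PhotonCamera camera = new PhotonCamera("reefCam"); // placeholder name
  private final AprilTagFieldLayout layout =
      AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField);

  // Guessed geometry: branches sit a bit to either side of the tag, and the
  // robot should stop ~0.5 m in front of the face, turned to face the reef.
  private static final Transform2d LEFT_BRANCH =
      new Transform2d(new Translation2d(0.5, -0.16), Rotation2d.fromDegrees(180));
  private static final Transform2d RIGHT_BRANCH =
      new Transform2d(new Translation2d(0.5, 0.16), Rotation2d.fromDegrees(180));

  /** Alignment goal next to the best visible tag, or empty if no tag is seen. */
  public Optional<Pose2d> goalPose(boolean left) {
    var result = camera.getLatestResult();
    if (!result.hasTargets()) {
      return Optional.empty(); // obstructed tag: this is the "wait" case
    }
    int id = result.getBestTarget().getFiducialId();
    return layout.getTagPose(id)
        .map(tag -> tag.toPose2d().transformBy(left ? LEFT_BRANCH : RIGHT_BRANCH));
  }
}
```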

Discrete Designation

The driver could also directly designate specific targets, likely through a cycling selector, but this is manually intensive and pulls the driver's attention away from the field.
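
The selector itself is at least cheap to build; a minimal sketch (names are hypothetical), echoing the current selection to the dashboard so it can be checked at a glance:

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class CyclingSelector {
  // 12 branches (A-L) x 3 levels (L2-L4) = the 36 fixed targets.
  private static final String[] TARGETS = buildTargets();
  private int index = 0;

  private static String[] buildTargets() {
    String[] names = new String[36];
    int i = 0;
    for (char branch = 'A'; branch <= 'L'; branch++) {
      for (int level = 2; level <= 4; level++) {
        names[i++] = branch + "-L" + level;
      }
    }
    return names;
  }

  /** Call from a button binding; direction is +1 or -1, wrapping at the ends. */
  public void cycle(int direction) {
    index = Math.floorMod(index + direction, TARGETS.length);
    SmartDashboard.putString("Selected Target", TARGETS[index]);
  }

  public String selected() {
    return TARGETS[index];
  }
}
```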

Zone Designations

If zones could somehow be designated as wedges extending from each face of the reef, it would be simple to drive the robot into the needed area, although the zone boundaries could be hard to distinguish visually near the edges.
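
The wedge lookup itself is cheap if a pose estimate is available; a sketch, with a placeholder reef-center coordinate:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

public class ReefZones {
  // Placeholder coordinate; the real value would come from the field drawings.
  private static final Translation2d REEF_CENTER = new Translation2d(4.5, 4.0);

  /** Returns a face index 0-5 from the robot's angle around the reef center. */
  public static int faceIndex(Pose2d robotPose) {
    double angleDeg =
        robotPose.getTranslation().minus(REEF_CENTER).getAngle().getDegrees();
    // Shift by half a wedge (30 deg) so each 60-degree wedge is centered on a face.
    return Math.floorMod((int) Math.floor((angleDeg + 30.0) / 60.0), 6);
  }
}
```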

Are these reasonable? Am I overthinking this? Is there some obvious solution?

5 Likes

Our team has also been asking this question. One idea I had was to make a custom GUI with a top-down reef graphic so the operator could select the position and level they wanted; the driver could then hit a button to self-align to whatever position was selected, via AprilTags. First question: are operators allowed to interact with the Driver Station laptop? Second question: would communication between the driver and the operator be quick enough, considering the operator may also have their own controller to worry about?
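
If the GUI route is legal, one plausible handoff is NetworkTables: the dashboard writes the selection, and the robot reads it when the align button is pressed. A sketch with made-up table and key names:

```java
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

public class ReefSelection {
  // The GUI side would write these entries; the robot code just reads them.
  private final NetworkTableEntry branchEntry =
      NetworkTableInstance.getDefault().getTable("ReefGUI").getEntry("branch");
  private final NetworkTableEntry levelEntry =
      NetworkTableInstance.getDefault().getTable("ReefGUI").getEntry("level");

  public String selectedBranch() {
    return branchEntry.getString("A"); // default if the GUI hasn't written yet
  }

  public int selectedLevel() {
    return (int) levelEntry.getDouble(4.0);
  }
}
```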

1 Like

Yes; the rules make no distinction between the driver and the operator.

6 Likes

As a driver, I have had this discussion a few times with our controls team. What we ended up deciding was that it would be simple and efficient to identify an AprilTag, then take the right or left position: if the driver input is right bumper, the robot aligns right of the tag, and likewise for left bumper. This requires me to at least get close to the tag, but the robot does the fine alignment. As for levels, it will be up to our co-driver to give an input for the level, and they will have to determine when the best time to elevate is. The co-driver will only be responsible for the elevator and intake, and possibly later a designated algae mech, so the availability of input combinations is quite large.
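
In command-based terms, the bumper bindings could look like the sketch below, where a print command stands in for the real fine-alignment command:

```java
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Commands;
import edu.wpi.first.wpilibj2.command.button.CommandXboxController;

public class DriverBindings {
  private final CommandXboxController driver = new CommandXboxController(0);

  public void configure() {
    // While held, fine-align to the branch on that side of the nearest tag.
    driver.leftBumper().whileTrue(alignToBranch(true));
    driver.rightBumper().whileTrue(alignToBranch(false));
  }

  /** Placeholder: the real command would drive to the tag-relative goal pose. */
  private Command alignToBranch(boolean left) {
    return Commands.print("Aligning to " + (left ? "left" : "right") + " branch");
  }
}
```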

3 Likes

So if your robot is where the red arrow is, you can press either the left or right bumper to choose branch A or B?
[image: reef diagram with a red arrow marking the robot's position]

2 Likes

I was thinking about the control schema for scoring coral, and I think the best approach would be something along the lines of:

  1. Having an “auto-align” function to the nearest reef side. This would require some sort of odometry, or perhaps just calculating the robot’s position from visible tags. It would also let the drivers use less brainpower and stay focused on the general goals of the game.
  2. Aligning the manipulator and robot to score on a particular peg of the reef. This could be done with a joystick: move left and right to select the left or right peg, and up and down for the level (see the sketch below).
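
A possible sketch of that joystick selection, with assumed controller axes and a large deadband used as a flick threshold:

```java
import edu.wpi.first.math.MathUtil;
import edu.wpi.first.wpilibj.XboxController;

public class JoystickTargetPicker {
  private final XboxController operator = new XboxController(1); // assumed port
  private boolean leftBranch = true;
  private int level = 4; // L1-L4
  private boolean stickCentered = true; // edge detection: one flick = one step

  /** Call every loop (e.g. from robotPeriodic). */
  public void periodic() {
    // The 0.5 deadband zeroes small inputs, so only deliberate flicks register.
    double x = MathUtil.applyDeadband(operator.getRightX(), 0.5);
    double y = MathUtil.applyDeadband(-operator.getRightY(), 0.5); // up = positive

    if (stickCentered) {
      if (x < 0) leftBranch = true;
      if (x > 0) leftBranch = false;
      if (y > 0) level = Math.min(4, level + 1);
      if (y < 0) level = Math.max(1, level - 1);
    }
    stickCentered = (x == 0 && y == 0);
  }
}
```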

Exactly. And if I’m one face down, left would go to C and right would go to D.

1 Like

Our team has flirted with this idea…

Would be an insane-looking button board for sure.

10 Likes

Rather than defining specific “zones,” you could define the points you want your robot to align with and determine which is closest to your current robot pose. This seems to me to be the simplest solution: hold down a button and align with the nearest point. You’re right that it could be difficult for the driver to visually determine which point they are closest to at the edges, so you might need the driver to get close to the point before auto-aligning.
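
The nearest-pose lookup is only a few lines (recent WPILib versions also ship a built-in Pose2d.nearest(List<Pose2d>) helper that does the same thing); a manual version for clarity, where the candidate poses would come from field geometry:

```java
import java.util.List;
import edu.wpi.first.math.geometry.Pose2d;

public class NearestTarget {
  /** Returns whichever candidate scoring pose is closest to the robot. */
  public static Pose2d nearest(Pose2d robotPose, List<Pose2d> branchPoses) {
    Pose2d best = branchPoses.get(0);
    double bestDist = Double.MAX_VALUE;
    for (Pose2d candidate : branchPoses) {
      double d = robotPose.getTranslation().getDistance(candidate.getTranslation());
      if (d < bestDist) {
        bestDist = d;
        best = candidate;
      }
    }
    return best;
  }
}
```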

2 Likes

We’re making something pretty similar to that to select branch (and hopefully level) targets, so the driver only needs one button for automatic alignment and scoring. Having the targeting controller is nice because it lets us save cycle time by pre-rotating while we move to branches on the opposite side of the reef, it opens the door to pathfinding routines that don’t rely on reef AprilTags, and we’ll still be able to automatically rotate to a target if our cameras go down. Here’s the prototype:

Edit: A targeting controller is also pretty useful for memorizing branch names! You can set up a game to practice reacting to callouts.
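
For anyone curious about the pre-rotate part: it can be as simple as a heading controller feeding the drivetrain's angular velocity while the driver keeps translation control. A stripped-down sketch (the gain is a placeholder):

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Rotation2d;

public class PreRotateHelper {
  private final PIDController headingPid = new PIDController(4.0, 0.0, 0.0);

  public PreRotateHelper() {
    headingPid.enableContinuousInput(-Math.PI, Math.PI); // wrap at +/-180 deg
  }

  /** Angular velocity (rad/s) that turns the robot toward the target heading. */
  public double omega(Rotation2d currentHeading, Rotation2d targetHeading) {
    return headingPid.calculate(currentHeading.getRadians(), targetHeading.getRadians());
  }
}
```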

12 Likes

[screen recording: 2025-01-16 14-53-39]

22 Likes

Good call on the colored buttons.

Others should take note of that detail. It allows the operator to see which side of the reef a button is on with peripheral vision.

I would also add a few strategically placed nubbins (tactile markers, like those on the home row of a keyboard) to aid touch-based orientation.

8 Likes

Would you be willing to share your code for that? I got something semi-close, but there are some weird bugs.

That looks sick! My team is hopefully going to do something just like that. We weren’t thinking about making them different colors, but as @Skyehawk said, it’s definitely a good idea!

1 Like

Cool!
How did you add these lines to AdvantageScope? Did you draw them in the CAD and import a custom field?

Code release will be at the end of the season, but I’ll post a layman’s explanation of the logic on my team (4065)’s Instagram.

3 Likes

Thanks! I used poses to create a trajectory and logged it to display it.
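
Guessing at the mechanics of that: one way to log poses so AdvantageScope draws them as a line is WPILib's struct-array publisher (an AdvantageKit Logger.recordOutput call with a Pose2d array works similarly); the topic name here is arbitrary:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.StructArrayPublisher;

public class TrajectoryViz {
  private final StructArrayPublisher<Pose2d> posePublisher =
      NetworkTableInstance.getDefault()
          .getStructArrayTopic("AlignTrajectory", Pose2d.struct)
          .publish();

  /** Publish the planned poses; AdvantageScope renders them on the field view. */
  public void publish(Pose2d[] poses) {
    posePublisher.set(poses);
  }
}
```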

1 Like

First prototype concept for our new control board (the empty spot is for a joystick). We are trying mechanical keyboard switches instead of generic momentary buttons. I think the buttons on the angled sides of the reef layout should probably be rotated for easier reading.
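
If the board enumerates as a USB HID (for example behind an Arduino or a macro-pad controller), WPILib can read it like a joystick; the port and button numbering below are placeholders:

```java
import edu.wpi.first.wpilibj2.command.Commands;
import edu.wpi.first.wpilibj2.command.button.CommandGenericHID;

public class ButtonBoard {
  private final CommandGenericHID board = new CommandGenericHID(2); // assumed port

  public void configure() {
    // Assume buttons 1-12 map to branches A-L; print stands in for selection logic.
    for (int button = 1; button <= 12; button++) {
      final int branch = button;
      board.button(button)
          .onTrue(Commands.print("Branch " + (char) ('A' + branch - 1) + " selected"));
    }
  }
}
```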

19 Likes

:eyes: :eyes:

21 Likes

Our team is deliberating over a button board with 6 buttons, one for each 1/6 of the reef. The robot would create a trajectory and, while following it, use the pinhole model of the camera to localize the coral and see which spots on that 1/6 of the reef are already taken. If there is a spot available on L4, it will score on L4 unless otherwise specified with another button. However, if both L4 spots are taken, it will switch to L3, and so on.

There will probably be override buttons if we want to score on, say, L1 no matter what.

If both spots on the level the algorithm wants to score on are open, it chooses the branch closest to it and alters the trajectory goal.

Haven’t tested this yet since the LL model for object detection isn’t out, but I look forward to trying it.
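
In the meantime, a hypothetical sketch of the fallback selection (the occupancy array would come from the coral-detection pipeline; all names are made up):

```java
import edu.wpi.first.math.geometry.Pose2d;

public class LevelFallback {
  /**
   * occupied[level][branch]: level index 0..3 = L1..L4, branch 0 = left, 1 = right.
   * Returns {level, branch}, or null if the whole face is full.
   */
  public static int[] pickTarget(boolean[][] occupied, Pose2d robot,
                                 Pose2d leftBranch, Pose2d rightBranch) {
    for (int level = 3; level >= 0; level--) { // L4 down to L1
      boolean leftOpen = !occupied[level][0];
      boolean rightOpen = !occupied[level][1];
      if (leftOpen && rightOpen) {
        // Both open: take whichever branch is closer and retarget the trajectory.
        double dl = robot.getTranslation().getDistance(leftBranch.getTranslation());
        double dr = robot.getTranslation().getDistance(rightBranch.getTranslation());
        return new int[] {level, dl <= dr ? 0 : 1};
      } else if (leftOpen) {
        return new int[] {level, 0};
      } else if (rightOpen) {
        return new int[] {level, 1};
      }
    }
    return null; // face is full
  }
}
```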