Teams that tracked game pieces, how did you use that data?

ML tracking is cool and all, but its value depends on what the robot actually does with the detections. How did you integrate machine-learning-based vision into robot operations, driver controls, and/or auto modes?


We made it so the operator could have the robot “snap” to cubes or cones at any point. When the operator pressed the button, the driver lost rotation control; instead, rotation was handed over to the Limelight, which automatically aligned the drivetrain to the nearest cone or cube (separate modes the operator could choose between).
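For reference, a minimal sketch of that kind of rotation override, assuming a Limelight publishing its usual tx horizontal-offset entry over NetworkTables; the class name and PID gains here are placeholders, not the team's actual code:

    import edu.wpi.first.math.controller.PIDController;
    import edu.wpi.first.networktables.NetworkTableInstance;

    public class SnapToGamePiece {
        // Placeholder gains -- tune on the real robot.
        private final PIDController rotationPid = new PIDController(0.03, 0.0, 0.001);

        /** Returns an angular velocity that turns the drivetrain toward the target. */
        public double snapOmega() {
            // tx is the horizontal offset to the target in degrees; drive it to zero.
            double tx = NetworkTableInstance.getDefault()
                    .getTable("limelight")
                    .getEntry("tx")
                    .getDouble(0.0);
            return rotationPid.calculate(tx, 0.0);
        }
    }

While the button is held, this output replaces the driver's rotation-stick input; translation passes through untouched.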


My team did the same, and also changed the translation joystick to just a forward/backward robot-centric throttle while in this mode, automatically switching back as soon as the robot sensed a game piece inside it.
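Combined with a rotation override like the sketch above, the tracking mode reduces to building robot-relative ChassisSpeeds from just the throttle axis and the vision output; a sketch (names are illustrative):

    import edu.wpi.first.math.kinematics.ChassisSpeeds;

    /**
     * Tracking-mode drive: forward/backward throttle only, rotation from vision.
     * throttleMetersPerSec is the driver's Y axis; visionOmega is a PID output
     * like snapOmega() above. Returns robot-relative speeds.
     */
    public static ChassisSpeeds trackingSpeeds(double throttleMetersPerSec,
                                               double visionOmega) {
        // No sideways component: the driver can only close or open range
        // while vision keeps the intake pointed at the piece.
        return new ChassisSpeeds(throttleMetersPerSec, 0.0, visionOmega);
    }

Exiting the mode can be as simple as watching an intake beam-break and restoring normal driver control once it trips.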

There are also some other cool things you can do with swerve (if you can detect how far away the cone is). You can first rotate to face the cone, then set your swerve center of rotation to the cone's location; then, if you rotate, you drive in a circle around the cone, always facing it (see the kinematics sketch further down).

Another thing you could do is change the coordinate system from Cartesian to polar, taking r as the distance from the center of your robot to the cone. Then use your translation joystick to move toward/away from the cone (changing r) and to rotate around the cone (changing theta). This accomplishes the same thing as above in a different way. You'd have to account for the fact that r changes as you move toward or away from the cone, but that shouldn't be too hard if your system can update the calculation with the new coordinates fast enough.
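A sketch of that polar mapping, assuming you already know the cone's position in the robot frame; the method name and inputs here are illustrative:

    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.kinematics.ChassisSpeeds;

    /**
     * Maps joystick inputs to polar motion around a target:
     * one axis changes r (range to the cone), the other changes theta (orbit).
     * Assumes r > 0, i.e. the cone is not at the robot center.
     */
    public static ChassisSpeeds polarDrive(Translation2d robotToCone,
                                           double radialInput,      // +1 = toward cone
                                           double tangentialInput,  // +1 = orbit CCW
                                           double maxSpeed) {
        double r = robotToCone.getNorm();
        // Unit vector pointing from the robot at the cone, robot-relative.
        Translation2d toward = robotToCone.div(r);
        // Perpendicular unit vector (rotated 90 degrees) for the orbit direction.
        Translation2d around = new Translation2d(-toward.getY(), toward.getX());

        double vx = (toward.getX() * radialInput + around.getX() * tangentialInput) * maxSpeed;
        double vy = (toward.getY() * radialInput + around.getY() * tangentialInput) * maxSpeed;
        // Spin at tangential-speed / r so the robot keeps facing the cone while orbiting.
        double omega = tangentialInput * maxSpeed / r;
        return new ChassisSpeeds(vx, vy, omega);
    }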

I don’t think either of those is necessary at all, and they probably provide no in-game advantage, but I still wanna see someone try it.

TL;DR: If you’re using swerve, you can rotate around the cone and move toward it with some fancy programming, but it’s very unnecessary.


Forgot to mention. Did this too.


We transitioned from odometry to vision to acquire game pieces from the carpet in auto. We also had the ability to center the robot rotationally to a game piece on the carpet in teleop, although we only occasionally acquired from the carpet. ML was not needed to identify and locate the simple targets in Charged Up. Straightforward programming with OpenCV image processing features was sufficient.
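For a sense of scale, a classical pipeline for a solid-color target can be an HSV threshold plus a contour search; a rough sketch with the OpenCV Java bindings that ship with WPILib (the threshold values are placeholders, not this team's numbers):

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;
    import java.util.ArrayList;
    import java.util.List;

    /** Finds the centroid of the largest yellow blob (e.g., a cube) in a BGR frame. */
    public static Point findCube(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Placeholder HSV bounds for "cube yellow" -- tune for your camera and lighting.
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(20, 100, 100), new Scalar(35, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                Imgproc.CHAIN_APPROX_SIMPLE);
        if (contours.isEmpty()) {
            return null; // no target in view
        }

        // Pick the largest contour and return its centroid.
        MatOfPoint biggest = contours.get(0);
        for (MatOfPoint c : contours) {
            if (Imgproc.contourArea(c) > Imgproc.contourArea(biggest)) {
                biggest = c;
            }
        }
        Moments m = Imgproc.moments(biggest);
        return new Point(m.m00 == 0 ? 0 : m.m10 / m.m00,
                         m.m00 == 0 ? 0 : m.m01 / m.m00);
    }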

Swerve can rotate around any point with essentially no programming - just change the center of your robot temporarily. I don’t know how translation toward the cone would work, though. It’s easy to try.

These module locations don’t have to be final - you can swap them temporarily for a special maneuver.

        // Module positions relative to the robot center (x forward, y left).
        private static final Translation2d FRONT_LEFT_LOCATION = new Translation2d(Constants.DRIVETRAIN_WHEELBASE_METERS / 2, Constants.DRIVETRAIN_TRACKWIDTH_METERS / 2);
        private static final Translation2d FRONT_RIGHT_LOCATION = new Translation2d(Constants.DRIVETRAIN_WHEELBASE_METERS / 2, -Constants.DRIVETRAIN_TRACKWIDTH_METERS / 2);
        private static final Translation2d BACK_LEFT_LOCATION = new Translation2d(-Constants.DRIVETRAIN_WHEELBASE_METERS / 2, Constants.DRIVETRAIN_TRACKWIDTH_METERS / 2);
        private static final Translation2d BACK_RIGHT_LOCATION = new Translation2d(-Constants.DRIVETRAIN_WHEELBASE_METERS / 2, -Constants.DRIVETRAIN_TRACKWIDTH_METERS / 2);

Take a look at Ether’s Swerve Calcs (Moon or Rotary)


In addition to the previous uses (snap and robot-centric, which we used), we actually did implement this, and it worked pretty well for rotation. While a button was held, the center of rotation was set to the cone/cube, and the driver used the normal rotation stick to rotate around the piece. We ran out of time (and it wasn't useful enough) to get strafing working, but our idea was to disable left/right strafing and only allow forward/backward motion so the angle toward the cone stays correct, then drive forward/back robot-relative while constantly updating the center of rotation.

Even easier, you can use SwerveDriveKinematics::toSwerveModuleStates like normal, but pass in a custom center of rotation.
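A minimal sketch of that call, reusing the module-location constants from the post above; coneLocation, rotationStick, and maxOmega are hypothetical stand-ins:

    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.kinematics.ChassisSpeeds;
    import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
    import edu.wpi.first.math.kinematics.SwerveModuleState;

    SwerveDriveKinematics kinematics = new SwerveDriveKinematics(
            FRONT_LEFT_LOCATION, FRONT_RIGHT_LOCATION,
            BACK_LEFT_LOCATION, BACK_RIGHT_LOCATION);

    // Cone position relative to the robot center (robot frame), e.g. 1.5 m ahead.
    Translation2d coneLocation = new Translation2d(1.5, 0.0);

    // Pure rotation command: with the center of rotation moved to the cone,
    // this orbits the cone while keeping the robot pointed at it.
    ChassisSpeeds orbit = new ChassisSpeeds(0.0, 0.0, rotationStick * maxOmega);
    SwerveModuleState[] states = kinematics.toSwerveModuleStates(orbit, coneLocation);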


We gave the driver one button for getting a cone and a separate button for getting a cube. When pressed, if the camera saw the piece, the robot would drive up to it and grab it automatically, whether it was on the floor or on the shelf. The same automation was used in autonomous.
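A rough shape for that press-and-acquire behavior as a WPILib command; the Drivetrain and Intake subsystems, the tx supplier, and the gains are all stand-ins, not this team's code:

    import edu.wpi.first.math.controller.PIDController;
    import edu.wpi.first.wpilibj2.command.Command;
    import java.util.function.DoubleSupplier;

    /** Drives toward a detected game piece until the intake reports possession. */
    public class AcquirePieceCommand extends Command {
        private final Drivetrain drivetrain;          // hypothetical subsystem
        private final Intake intake;                  // hypothetical subsystem
        private final DoubleSupplier targetOffsetDeg; // e.g., Limelight tx
        private final PIDController turnPid = new PIDController(0.03, 0.0, 0.0);
        private static final double APPROACH_SPEED = 1.0; // m/s, placeholder

        public AcquirePieceCommand(Drivetrain drivetrain, Intake intake,
                                   DoubleSupplier targetOffsetDeg) {
            this.drivetrain = drivetrain;
            this.intake = intake;
            this.targetOffsetDeg = targetOffsetDeg;
            addRequirements(drivetrain, intake);
        }

        @Override
        public void execute() {
            // Creep forward robot-relative while steering onto the target.
            double omega = turnPid.calculate(targetOffsetDeg.getAsDouble(), 0.0);
            drivetrain.driveRobotRelative(APPROACH_SPEED, 0.0, omega);
            intake.run();
        }

        @Override
        public boolean isFinished() {
            return intake.hasPiece(); // e.g., a beam-break in the intake
        }

        @Override
        public void end(boolean interrupted) {
            drivetrain.stop();
            intake.stop();
        }
    }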