FRC 4481 Team Rembrandts | 2024 Build Thread | Open Alliance


Shooting on the Move

As we enter the sixth week of build season, we are focusing more and more on optimizing our performance.

During auto testing and driver practice, we noticed that we were losing considerable time to stopping, shooting and accelerating again. Since we want to minimize our cycle time as much as possible, cutting down the time spent on these actions is crucial.

That’s why our software sub-department started developing a system for shooting while moving. To do this, we need to compensate in two dimensions.

Aiming Steps

To determine the dimensions in which we need to compensate, it’s good to first take a step back and look at the steps required for auto aiming:

  1. Make sure the Shamper faces the speaker
  2. Determine the distance to the speaker
  3. Determine setpoints based on this value
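As a sketch, the three steps above could look like the following. The speaker position and the distance-to-angle lookup table here are placeholder assumptions for illustration, not our actual constants or code:

```python
import math

# Illustrative constants; the real speaker position and setpoint table
# would come from field measurements and shooter tuning.
SPEAKER = (0.0, 5.55)  # assumed field position of the speaker opening (m)
# hypothetical (distance m -> shooter angle deg) lookup table
SETPOINTS = [(1.0, 60.0), (2.0, 45.0), (3.0, 35.0), (4.0, 30.0)]

def setpoint_from_distance(distance):
    """Step 3: linearly interpolate a shooter angle from the table."""
    pts = sorted(SETPOINTS)
    if distance <= pts[0][0]:
        return pts[0][1]
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return a0 + t * (a1 - a0)
    return pts[-1][1]

def auto_aim(robot_x, robot_y):
    dx, dy = SPEAKER[0] - robot_x, SPEAKER[1] - robot_y
    heading = math.atan2(dy, dx)      # step 1: point the Shamper at the speaker
    distance = math.hypot(dx, dy)     # step 2: distance to the speaker
    shooter_angle = setpoint_from_distance(distance)  # step 3: setpoints
    return heading, distance, shooter_angle
```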

For steps 1 and 2, we compensate separately, in each case by adding an offset to the current position based on the robot’s velocity. This essentially mimics the robot being in the future, at the moment the Note exits the Shamper.
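The "future position" idea can be sketched as below. The shot time constant is an assumption for illustration; in practice it would be measured:

```python
# Assumed delay between deciding to shoot and the Note leaving the Shamper.
# Illustrative value only; the real number would come from measurement.
SHOT_TIME = 0.3  # seconds

def future_position(x, y, vx, vy, shot_time=SHOT_TIME):
    """Offset the current field position by velocity * shot time,
    mimicking where the robot will be when the Note exits."""
    return x + vx * shot_time, y + vy * shot_time
```

Feeding this offset pose into the aiming steps instead of the current pose is what lets the rest of the pipeline stay unchanged.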

Current Implementation - Cartesian Coordinates

Our current implementation is based on the Cartesian plane, with the x dimension pointing directly outward from the blue driver station, and the y dimension pointing left.

  • The simulated distance from the speaker to the robot is influenced by the movement in the x direction.
  • The simulated angle offset of the robot looking at the speaker is influenced by movement in the y direction.

This works as a crude estimation, but the system falls apart when shooting at different angles to the speaker.
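Under the assumption that the robot shoots roughly head-on (x toward the speaker, y sideways), the crude Cartesian compensation could be sketched like this; the shot delay and sign convention are illustrative:

```python
import math

def cartesian_compensation(distance, vx, vy, shot_time=0.3):
    """Crude Cartesian compensation: x motion shifts the simulated
    distance, y motion shifts the simulated angle offset.
    Assumes +x moves the robot away from the speaker; shot_time is
    an illustrative shot delay, not a measured value."""
    sim_distance = distance + vx * shot_time
    angle_offset = math.atan2(vy * shot_time, distance)
    return sim_distance, angle_offset
```

This split is only exact when the robot’s x axis points at the speaker; at other angles the x/y decomposition no longer matches the radial/tangential one, which is the failure mode described above.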

Future Implementation - Polar Coordinates

The obvious – and mathematically correct – improvement would be to switch the system to polar coordinates. This would change the (x, y) coordinates to (r, φ), with r representing the length of the vector from the speaker to the robot, and φ representing the angle between that vector and the normal vector of the speaker.

This way, the auto aiming can be influenced more accurately:

  • The simulated distance from the speaker to the robot is influenced by the movement in the r direction.
  • The simulated angle offset of the robot looking at the speaker is influenced by movement in the φ direction.
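A sketch of the polar version, decomposing the field velocity into radial and tangential components relative to the speaker (names and the shot delay again illustrative):

```python
import math

def polar_compensation(robot, speaker, vx, vy, shot_time=0.3):
    """Project the field velocity onto the radial and tangential
    directions of the speaker-to-robot vector, then offset (r, phi)
    separately. shot_time is an assumed shot delay, for illustration."""
    dx, dy = robot[0] - speaker[0], robot[1] - speaker[1]
    r = math.hypot(dx, dy)
    phi = math.atan2(dy, dx)
    v_r = (vx * dx + vy * dy) / r          # velocity along r (away from speaker)
    v_t = (vx * -dy + vy * dx) / r         # velocity perpendicular to r
    sim_r = r + v_r * shot_time            # movement in r shifts the distance
    sim_phi = phi + v_t * shot_time / r    # movement in phi shifts the angle
    return sim_r, sim_phi
```

Because the projection is relative to the speaker rather than to the field axes, this decomposition stays correct at any angle of approach.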

Test Videos


Written by:
@Nigelientje - Lead Outreach / Software Mentor
@Jochem - Lead Software
@Casper - Software Mentor
