Shoot While Move Code (1706)

Hey all, 1706 here,

This year the student programming lead and I had a lot of teams come ask about our shooter code and how we got shooting while moving working.

Currently the student programming lead is putting together a whitepaper to share how we programmed our shooter, from where we started at the beginning of the season to where we ended up after the STL regional.

Because so many teams had questions, I thought I would do a quick “dump” of the code from our ShootWhileMove command. Once the rest of our code is “cleaned up” and ready, we expect to do a full code release.

public class ShootWhileMove extends CommandBase {
    private final Shooter m_shooter;
    private final Turret m_turret;
    private final Drivetrain m_drive;
    private final ShooterHood m_hood;
    private final boolean m_updatePose;
    private final ColorSensor m_color;
    private final XboxController m_driver;
    private double m_wrongBallTime;
    private final Timer m_timer = new Timer();

    private static Point2D[] m_shotTimes =
        new Point2D.Double[]{
            // (distance to goal in inches, shot time in seconds)
            new Point2D.Double(105,0.82),
            new Point2D.Double(135,0.82),
            new Point2D.Double(165,0.85),
            new Point2D.Double(195,0.85),
            new Point2D.Double(250,1.05),
        };
    private static LinearInterpolationTable m_timeTable = new LinearInterpolationTable(m_shotTimes);

    private static LinearInterpolationTable m_hoodTable = ShooterConstants.khoodTable;
    private static LinearInterpolationTable m_rpmTable = ShooterConstants.krpmTable;
    

    public ShootWhileMove(Shooter shooter, Turret turret, Drivetrain drive, ShooterHood hood, boolean updatePose, ColorSensor color, XboxController driver){
        m_shooter = shooter;
        m_turret = turret;
        m_drive = drive;
        m_hood = hood;
        m_updatePose = updatePose;
        m_color = color;
        m_driver = driver;
        addRequirements(shooter, turret, hood);
    }
    public ShootWhileMove(Shooter shooter, Turret turret, Drivetrain drive, ShooterHood hood, boolean updatePose, ColorSensor color){
        m_shooter = shooter;
        m_turret = turret;
        m_drive = drive;
        m_hood = hood;
        m_updatePose = updatePose;
        m_color = color;
        m_driver = new XboxController(4);
        addRequirements(shooter, turret, hood);
    }
    @Override
    public void initialize(){
        m_turret.enable();
        m_turret.trackTarget(true);
        m_timer.reset();
        m_timer.start();
        SmartDashboard.putNumber("SetHoodAdjust", 0.0);
        SmartDashboard.putNumber("SetShotAdjust", 0);
        SmartDashboard.putBoolean("Adjust Shot?", false);
        m_wrongBallTime = Double.NEGATIVE_INFINITY;
    }

    @Override
    public void execute(){

        double currentTime = m_timer.get();        
        boolean wrongBall = m_color.isWrongBall();
        if(wrongBall){
            m_wrongBallTime = currentTime;
        }

        SmartDashboard.putNumber("Current Time", currentTime);

        SmartDashboard.putBoolean("Wrong Ball", wrongBall);

        SmartDashboard.putBoolean("Shooter Running", true);

        FieldRelativeSpeed robotVel = m_drive.getFieldRelativeSpeed();
        FieldRelativeAccel robotAccel = m_drive.getFieldRelativeAccel();

        Translation2d target = GoalConstants.kGoalLocation;

        // If the color sensor just saw a wrong-color ball, aim at the "wrong ball" dump
        // target for a short window instead of the real goal.
        if(currentTime <= m_wrongBallTime+0.100){
            target = GoalConstants.kWrongBallGoal;
        }

        Translation2d robotToGoal = target.minus(m_drive.getPose().getTranslation());
        double dist = robotToGoal.getDistance(new Translation2d())*39.37; // meters to inches

        SmartDashboard.putNumber("Calculated (in)", dist);

        double fixedShotTime = m_timeTable.getOutput(dist);

        // Offset the target opposite the robot's field-relative velocity (with an acceleration
        // lookahead) by the expected shot time, creating a "virtual goal" to aim at while moving.
        double virtualGoalX = target.getX()-fixedShotTime*(robotVel.vx+robotAccel.ax*ShooterConstants.kAccelCompFactor);
        double virtualGoalY = target.getY()-fixedShotTime*(robotVel.vy+robotAccel.ay*ShooterConstants.kAccelCompFactor);

        SmartDashboard.putNumber("Goal X", virtualGoalX);
        SmartDashboard.putNumber("Goal Y", virtualGoalY);

        Translation2d movingGoalLocation = new Translation2d(virtualGoalX,virtualGoalY);

        Translation2d toMovingGoal = movingGoalLocation.minus(m_drive.getPose().getTranslation());

        double newDist = toMovingGoal.getDistance(new Translation2d())*39.37;

        // Aim the turret at the virtual goal, and set shooter RPM / hood angle from the
        // interpolation tables using the distance to that virtual goal.
        m_turret.aimAtGoal(m_drive.getPose(), movingGoalLocation, false);

        if(SmartDashboard.getBoolean("Adjust Shot?", false)){
            m_shooter.run(m_rpmTable.getOutput(newDist)+SmartDashboard.getNumber("SetShotAdjust", 0));
            m_hood.run(m_hoodTable.getOutput(newDist)+SmartDashboard.getNumber("SetHoodAdjust", 0));
        }
        else{
            m_shooter.run(m_rpmTable.getOutput(newDist));
            m_hood.run(m_hoodTable.getOutput(newDist));
        }


        // Once the shot has been running briefly and the Limelight sees the target,
        // recompute the robot's field pose from the vision measurement.
        if(currentTime > 0.250 && Limelight.valid()){
            double dL = Limelight.getDistance()*0.0254;     // distance to target, converted to meters
            double tR = m_drive.getGyro().getRadians();     // robot heading
            double tT = m_turret.getMeasurement()-Math.PI;  // turret angle relative to the robot
            double tL = -1.0*Limelight.tx();                // Limelight horizontal offset to the target

            Pose2d pose = calcPoseFromVision(dL, tR, tT, tL, GoalConstants.kGoalLocation);

            if(m_updatePose){
                m_drive.setPose(pose);
            }
        }

        if(m_turret.desiredInDeadzone()){
            m_driver.setRumble(RumbleType.kLeftRumble, 1.0);
            m_driver.setRumble(RumbleType.kRightRumble, 1.0);
        }
        else{
            m_driver.setRumble(RumbleType.kLeftRumble, 0.0);
            m_driver.setRumble(RumbleType.kRightRumble, 0.0);
        }

    }

    @Override
    public void end(boolean interrupted) {
        SmartDashboard.putBoolean("Shooter Running", false);
        m_turret.trackTarget(false);
        m_turret.disable();
        m_turret.stop();
        m_shooter.stop();
        m_hood.stop();
        m_timer.stop();
        m_driver.setRumble(RumbleType.kLeftRumble, 0.0);
        m_driver.setRumble(RumbleType.kRightRumble, 0.0);
    }
    
    private Pose2d calcPoseFromVision(double dL, double tR, double tT, double tL, Translation2d goal){
        // Field-relative angle from robot to goal: robot heading + turret angle + Limelight offset
        double tG = tR+tT+tL;
        // Step back from the known goal location along that angle by the measured distance
        double rX = goal.getX()-dL*Math.cos(tG);
        double rY = goal.getY()-dL*Math.sin(tG);

        return new Pose2d(rX,rY, new Rotation2d(-tR));
    }

}

Not sure if it's the most precise method (we just learned about the pose estimator classes earlier this afternoon), but we had good luck with it and were able, at times, to shoot while moving at over 3 m/s, which was pretty cool. Things will only improve as some of our younger programmers have some really cool ideas I want them to mess around with once we get our robot fixed back up after those tough (but so, so fun!) Einstein matches.

If you have any questions please feel free to reach out!

48 Likes

For those planning to copy-paste this, WPILib main has some replacements for the lerp table and CV pose calculation parts; less code you have to test and maintain yourself.
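
For the lerp table piece, the built-in interpolating map covers the same use case. A quick sketch of what that might look like (class and method names are from a recent development build, so double-check them against the current API docs):

import edu.wpi.first.math.interpolation.InterpolatingDoubleTreeMap;

public class ShotTimeExample {
    // Rough equivalent of the (distance, shot time) table in the command above,
    // using WPILib's built-in interpolating map instead of a custom class.
    private static final InterpolatingDoubleTreeMap kTimeTable = new InterpolatingDoubleTreeMap();
    static {
        kTimeTable.put(105.0, 0.82);
        kTimeTable.put(135.0, 0.82);
        kTimeTable.put(165.0, 0.85);
        kTimeTable.put(195.0, 0.85);
        kTimeTable.put(250.0, 1.05);
    }

    // Linearly interpolates between the two nearest table entries.
    public static double shotTime(double distanceInches) {
        return kTimeTable.get(distanceInches);
    }
}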

As for the pose estimator, I suggest grabbing the version merged yesterday because it fixes performance issues when adding vision poses with large latencies.

14 Likes

Yeah we are very excited to look at these features in the latest build!

Well, the links above aren’t to builds, per se. We do have development builds though, which you can use with your robot project by following the instructions at https://github.com/wpilibsuite/allwpilib/blob/main/OtherVersions.md. Keep in mind development builds are less stable than actual releases, so if you encounter bugs, file an issue to let us know.

3 Likes

I see now. We had poor results with a straight trig approach for this year's game, so we decided to use a lerp table instead, and it gave better results. Good to see there will be a built-in lerp table!

2 Likes

Thank you so much for posting! I'm having a lot of fun going through different teams' techniques. It was awesome to watch how well you folks did in St. Louis, and how you improved through to champs!

6 Likes

Amazing example of Gracious Professionalism! This will help coding teams learn new methods!

5 Likes

Thanks for sharing your method; looking forward to the whitepaper. I had a couple of questions, especially regarding the acceleration. Feel free to defer to the whitepaper if necessary.

  1. Was the final value of kAccelCompFactor non-trivial?

  2. If the final value of kAccelCompFactor is non-trivial, did you experiment with 1/2 * accel * t^2?

3 Likes

The value of the acceleration compensation factor was experimentally determined. We viewed the ball as an object that would have constant velocity in the x and y directions once it left the shooter. What we noticed when we used only the current velocity of the robot was that, when we decelerated to a stop or took off from a stop, the turret, shooter, and hood positions would always tend to lag behind where they should be. The acceleration factor basically asks how fast the robot will be moving in about 0.1 s. It was our way of helping with the latency we saw in our Limelight measurements and the time it takes the ball to actually leave the shooter from when it is fed.

At St. Louis we did not have the acceleration compensation factor, so if we wanted to shoot while moving we preferred to avoid any acceleration or deceleration. That meant that if the driver chose to do a stationary shot, they had to wait longer because the turret would take a second to realize the robot was no longer moving, and we found ourselves hardly using our move-and-shoot mode at all, aside from one of our quarterfinal matches where we used it just to show it off.

I hope this helps explain a little bit of our thought process. Once we added the acceleration factor, the turret would come to rest much quicker, which basically brought our cycle times when not moving and shooting down to match our stationary shot mode. In Houston we were in move-and-shoot mode 100% of the time.
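
In code terms, it's a short velocity lookahead when placing the virtual goal, rather than a separate 1/2 * accel * t^2 displacement term. Pulled out of the command above, the math looks roughly like this (constants shown just for illustration):

public class VirtualGoalExample {
    // Seconds of velocity lookahead; ours ended up around 0.1 s, tuned experimentally.
    static final double kAccelCompFactor = 0.1;

    // vx, vy: field-relative velocity (m/s); ax, ay: field-relative acceleration (m/s^2)
    // goalX, goalY: real goal location (m); shotTime: expected shot time from the lerp table (s)
    public static double[] virtualGoal(double goalX, double goalY,
                                       double vx, double vy,
                                       double ax, double ay,
                                       double shotTime) {
        // Lead the target with the velocity the robot is predicted to have a short time
        // from now, instead of its instantaneous velocity.
        double effectiveVx = vx + ax * kAccelCompFactor;
        double effectiveVy = vy + ay * kAccelCompFactor;
        return new double[] { goalX - shotTime * effectiveVx,
                              goalY - shotTime * effectiveVy };
    }
}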

4 Likes

Nice code

Any chance the robot’s full code will be released anytime soon? I want to take a look at how your whole system operates since it looks very interesting from the little snippet you gave us.

1 Like

So we will soon be working on some improvements for IRI and cleaning up some things that we left a “mess” for Houston (i.e., last-minute features we added and tested thoroughly, but didn't clean up afterwards because we couldn't do a full competition-ready test; that test usually consists of running through a whole list of commands and features on the robot and can take several hours and battery swaps. We basically didn't have time, but we knew the code as it sat was reliable).

As it always is at the end of the school year, all of the students have been focused on AP Exams, EOC Exams, and Finals as a priority. I really want to give the programming team time to review before posting the release.

Also, now that we have our official IRI invitation, we are putting together a plan to prep for it, so I can't say for certain, but we would like to make a full code release just before the IRI competition, hopefully with even more improvements!

If you have any specific questions about how a particular part of the code works that isn't shown here, I can snip sections and explain to the best of my ability in the meantime.

1 Like

That sounds good! I didn’t mean to ask any of your students to rush.

Good luck at IRI!

2 Likes

Going to provide an update on getting the code released: we are working on updating some things for IRI, and I am hopeful a full code release could come in about two weeks. One of our students designed a new gearbox for the turret that we are going to try to get on and programmed today, and then we brainstormed ways to improve the “field oriented intakes” code so the driver gets more predictable results. We will want to share all of that as soon as it's tested out and cleaned up.

1 Like

lol, I just finished writing field oriented intake code for our offseason robot :stuck_out_tongue:

Sounds good! No rush on the code release.

Just a quick update regarding the intakes: @Bigathedestroyer and I went back and forth and decided to try a robot-motion-based “Smart Intaking” system. We are very happy with the results; it requires only one button from the driver, and the robot auto-selects which intake to use. This is most likely what we will use moving forward.

public class SmartIntake extends CommandBase {
    private final Intake m_rightIntake;
    private final Intake m_leftIntake;
    private final Drivetrain m_drive;
    private final Timer m_timer = new Timer();
    private boolean m_rightRunning = false;
    private boolean m_leftRunning = false;
    private double m_leftStopTime = 0.0;
    private double m_rightStopTime = 0.0;
    private double yVelAve = 0.0;

    public SmartIntake(Intake right, Intake left, Drivetrain drive) {
        m_leftIntake = left;
        m_rightIntake = right;
        m_drive = drive;
        addRequirements(left, right);
    }

    @Override
    public void initialize() {
        m_timer.reset();
        m_timer.start();
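
        // On the initial button press, deploy whichever intake matches the sign of the
        // sideways chassis velocity (positive deploys the left intake) and start running it;
        // if the robot is nearly stationary, don't deploy anything yet.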
        double yVel = m_drive.getChassisSpeed().vyMetersPerSecond;

        if (yVel > 0.05) {
            m_leftIntake.extend();
            m_leftIntake.run(IntakeConstants.kSpeed);
            m_leftRunning = true;
        } else if (yVel < -0.05) {
            m_rightIntake.extend();
            m_rightIntake.run(IntakeConstants.kSpeed);
            m_rightRunning = true;
        }

    }

    @Override
    public void execute() {
        final double currentTime = m_timer.get();
        double yVel = m_drive.getChassisSpeed().vyMetersPerSecond;
        // Exponentially smooth the sideways velocity so brief wiggles don't immediately flip
        // which intake is deployed, and clamp the smoothed value to +/-0.5 m/s.
        yVelAve = (3 * yVelAve + yVel) / 4.0;

        if (yVelAve > 0.5) {
            yVelAve = 0.5;
        } else if (yVelAve < -0.5) {
            yVelAve = -0.5;
        }

        if (yVelAve < -0.20) {
            // Sideways motion favors the right intake: retract the left one (noting when it
            // stopped) and make sure the right intake is deployed and running.
            if (m_leftRunning) {
                m_leftIntake.retract();
                m_leftRunning = false;
                m_leftStopTime = currentTime;
            }
            m_rightRunning = true;
            m_rightIntake.extend();
            m_rightIntake.run(IntakeConstants.kSpeed);
        } else if (yVelAve > 0.20) {
            // Sideways motion favors the left intake: retract the right one and deploy the left.
            if (m_rightRunning) {
                m_rightIntake.retract();
                m_rightRunning = false;
                m_rightStopTime = currentTime;
            }
            m_leftRunning = true;
            m_leftIntake.extend();
            m_leftIntake.run(IntakeConstants.kSpeed);
        }

        // Stop each intake's rollers 0.8 s after that intake was retracted.
        if (!m_leftRunning && currentTime > m_leftStopTime + 0.80) {
            m_leftIntake.stop();
        }

        if (!m_rightRunning && currentTime > m_rightStopTime + 0.80) {
            m_rightIntake.stop();
        }

        SmartDashboard.putBoolean(m_leftIntake.getID() + " Running", m_leftRunning);
        SmartDashboard.putBoolean(m_rightIntake.getID() + " Running", m_rightRunning);

    }

    @Override
    public void end(boolean interrupted) {
        m_rightRunning = false;
        m_leftRunning = false;
        SmartDashboard.putBoolean(m_leftIntake.getID() + " Running", false);
        SmartDashboard.putBoolean(m_rightIntake.getID() + " Running", false);
        m_leftStopTime = 0.0;
        m_rightStopTime = 0.0;
        m_leftIntake.retract();
        m_rightIntake.retract();
        yVelAve = 0.0;
        m_timer.stop();
    }
}

2 Likes

Would love some pseudocode or a flowchart to help explain your desired behavior. It's not always easy to understand the nuances from code alone!

The basic idea is that whenever the driver clicks a button (for us, the right trigger), the robot puts down the intake that is closest to the direction the robot is moving. We currently have it set up so that it only switches sides when you are moving above a certain speed. @Emerson1706 can get into the finer details, but that's the stripped-down idea.

2 Likes

I see. What I implemented uses the gyro to deploy the left or right intake from the driver's perspective: the robot could be rotated 180 degrees and the same button would deploy the other intake. Of course, if you already have an intake deployed, holding the other button will deploy the other intake.

The edge case is when you're facing almost “forwards or backwards” from the driver's perspective; in that case the buttons correspond to the front and back intakes.
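
For a robot with intakes on its left and right sides (like the SmartIntake example above), the driver-perspective mapping boils down to something like this. It's just a simplified sketch of the idea, not my actual code, and it glosses over that edge-case region:

public class DriverRelativeIntakeExample {
    // Pick the physical intake that is currently on the driver's left, given a
    // field-relative heading where 0 degrees means the robot's front points away
    // from the driver station. "Intake" is a stand-in for your intake subsystem.
    public static Intake intakeOnDriversLeft(double headingDegrees, Intake left, Intake right) {
        double heading = Math.IEEEremainder(headingDegrees, 360.0); // wrap to (-180, 180]
        // Turned more than 90 degrees from "forward": the robot's physical left side is now
        // toward the driver's right, so the "left" button should deploy the other intake.
        boolean flipped = Math.abs(heading) > 90.0;
        return flipped ? right : left;
    }
}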

I contemplated intaking based on the direction of movement, but decided against it: trying to predict what a driver will do and acting on it feels risky in a match, since if it predicts wrong the driver might be thrown off.

Great code though! I'm definitely not saying what you all did is bad or wrong.

1 Like

We did exactly what you describe before, and that's what we used on Hopper. There were just some use cases where it was hard to get perfect; maybe with more work we could have gotten it better, but we were really happy after changing it to one button. Currently the driver controls shooting, intaking, and driving all by themselves, which cuts down on reaction time.

1 Like