Robot Loses Mind (Drives Autonomously)

During the 2024 FRC competition season, our team encountered an issue with our robot at the state competition. Sometimes it would stop listening to the driver's input and drive in random directions. There was no perceivable pattern to the driving; it seemed entirely random. Our drivers said that during the early stages of the issue, they were sometimes able to partially control the robot. Only the drivetrain was affected. The issue only occurred while the robot was using vision in the teleoperated period, but it didn't always occur while vision was in use. When we removed the code that bound a button to the vision routine, the issue didn't occur.

The strangest thing about this issue is that we had encountered it before the competition, so we reverted to a branch of our code that had never shown the issue, yet it kept occurring at the same frequency. This suggests the issue could be electrical.

We are using an Orange Pi 5 and a global-shutter camera for vision. We replaced the driver station and the controllers, but the issue still occurs. We have never been able to reproduce the issue on purpose, yet it readily recurs when we don't want it to.

Here is the vision code we use:

Vision:

/*
 * MIT License
 *
 * Copyright (c) PhotonVision
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

package frc.robot;

import static frc.robot.Constants.VisionConstants.*;

import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;
import edu.wpi.first.math.util.Units;
import edu.wpi.first.wpilibj.smartdashboard.Field2d;
import frc.robot.Constants.VisionConstants;
import frc.robot.subsystems.DriveSubsystem;

import java.util.List;
import java.util.Optional;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;
import org.photonvision.PhotonUtils;
import org.photonvision.simulation.PhotonCameraSim;
import org.photonvision.simulation.SimCameraProperties;
import org.photonvision.simulation.VisionSystemSim;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class Vision {
private final PhotonCamera m_camera;
// private final PhotonCamera cameraNote;
private final PhotonPoseEstimator photonEstimator;
private double lastEstTimestamp = 0;

// Simulation
private PhotonCameraSim cameraSim;
private VisionSystemSim visionSim;

public Vision(PhotonCamera camera) {
    m_camera = camera;

    photonEstimator = new PhotonPoseEstimator(
            kTagLayout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, m_camera, kRobotToCam);
    photonEstimator.setMultiTagFallbackStrategy(PoseStrategy.LOWEST_AMBIGUITY);

    // ----- Simulation
    if (Robot.isSimulation()) {
        // Create the vision system simulation which handles cameras and targets on the
        // field.
        visionSim = new VisionSystemSim("main");
        // Add all the AprilTags inside the tag layout as visible targets to this
        // simulated field.
        visionSim.addAprilTags(kTagLayout);
        // Create simulated camera properties. These can be set to mimic your actual
        // camera.
        var cameraProp = new SimCameraProperties();
        cameraProp.setCalibration(960, 720, Rotation2d.fromDegrees(90));
        cameraProp.setCalibError(0.35, 0.10);
        cameraProp.setFPS(15);
        cameraProp.setAvgLatencyMs(50);
        cameraProp.setLatencyStdDevMs(15);
        // Create a PhotonCameraSim which will update the linked PhotonCamera's values
        // with visible
        // targets.
        cameraSim = new PhotonCameraSim(m_camera, cameraProp);
        // Add the simulated camera to view the targets on this simulated field.
        visionSim.addCamera(cameraSim, kRobotToCam);

        cameraSim.enableDrawWireframe(true);
    }
}

public PhotonPipelineResult getLatestResult() {
    return m_camera.getLatestResult();
}

// Identical to getLatestResult(); both names are kept because callers use both.
public PhotonPipelineResult getLatestCameraResult() {
    return m_camera.getLatestResult();
}

/**
 * The latest estimated robot pose on the field from vision data. This may be
 * empty. This should only be called once per loop.
 *
 * @return An {@link EstimatedRobotPose} with an estimated pose, estimate
 *         timestamp, and targets used for estimation.
 */
public Optional<EstimatedRobotPose> getEstimatedGlobalPose() {
    var visionEst = photonEstimator.update();
    double latestTimestamp = m_camera.getLatestResult().getTimestampSeconds();
    boolean newResult = Math.abs(latestTimestamp - lastEstTimestamp) > 1e-5;
    if (Robot.isSimulation()) {
        visionEst.ifPresentOrElse(
                est -> getSimDebugField()
                        .getObject("VisionEstimation")
                        .setPose(est.estimatedPose.toPose2d()),
                () -> {
                    if (newResult)
                        getSimDebugField().getObject("VisionEstimation").setPoses();
                });
    }
    if (newResult)
        lastEstTimestamp = latestTimestamp;
    return visionEst;
}

/**
 * The standard deviations of the estimated pose from
 * {@link #getEstimatedGlobalPose()}, for use with
 * {@link edu.wpi.first.math.estimator.SwerveDrivePoseEstimator SwerveDrivePoseEstimator}.
 * This should only be used when there are targets visible.
 *
 * @param estimatedPose The estimated pose to guess standard deviations for.
 */
public Matrix<N3, N1> getEstimationStdDevs(Pose2d estimatedPose) {
    var estStdDevs = kSingleTagStdDevs;
    var targets = getLatestResult().getTargets();
    int numTags = 0;
    double avgDist = 0;
    for (var tgt : targets) {
        var tagPose = photonEstimator.getFieldTags().getTagPose(tgt.getFiducialId());
        if (tagPose.isEmpty())
            continue;
        numTags++;
        avgDist += tagPose.get().toPose2d().getTranslation().getDistance(estimatedPose.getTranslation());
    }
    if (numTags == 0)
        return estStdDevs;
    avgDist /= numTags;
    // Decrease std devs if multiple targets are visible
    if (numTags > 1)
        estStdDevs = kMultiTagStdDevs;
    // Increase std devs based on (average) distance
    if (numTags == 1 && avgDist > 4)
        estStdDevs = VecBuilder.fill(Double.MAX_VALUE, Double.MAX_VALUE, Double.MAX_VALUE);
    else
        estStdDevs = estStdDevs.times(1 + (avgDist * avgDist / 30));

    return estStdDevs;
}

// ----- Simulation

public void simulationPeriodic(Pose2d robotSimPose) {
    visionSim.update(robotSimPose);
}

/** Reset pose history of the robot in the vision system simulation. */
public void resetSimPose(Pose2d pose) {
    if (Robot.isSimulation())
        visionSim.resetRobotPose(pose);
}

/** A Field2d for visualizing our robot and objects on the field. */
public Field2d getSimDebugField() {
    if (!Robot.isSimulation())
        return null;
    return visionSim.getDebugField();
}

public double getChosenTargetRotation(int targetID) {
    var result = getLatestCameraResult();
    if (!result.hasTargets()) {
        return 0;
    }
    // Get a list of all of the targets that have been detected.
    List<PhotonTrackedTarget> targets = result.getTargets();

    // For each target we have, check if it matches the ID we want.
    for (PhotonTrackedTarget target : targets) {
        if (target.getFiducialId() == targetID) {
            // Use this target to find our rotation via getYaw().
            return target.getYaw();
        }
    }

    return 0;
}

public double getChosenTargetRange(int targetID) {
    var result = getLatestCameraResult();
    List<PhotonTrackedTarget> targets = result.getTargets();
    double range = 0;
    if (result.hasTargets()) {
        for (PhotonTrackedTarget target : targets) {
            if (target.getFiducialId() == targetID) {
                range = PhotonUtils.calculateDistanceToTargetMeters(
                        CAMERA_HEIGHT_METERS, // Previously declared
                        TARGET_HEIGHT_METERS,
                        CAMERA_PITCH_RADIANS,
                        Units.degreesToRadians(target.getPitch()));
                return range;
            }
        }
    }
    return 0;
}

public double getRange() {
    var result = getLatestResult();
    double range;
    if (result.hasTargets()) {
        // First calculate range
        range = PhotonUtils.calculateDistanceToTargetMeters(
                CAMERA_HEIGHT_METERS, // Previously declared
                TARGET_HEIGHT_METERS,
                CAMERA_PITCH_RADIANS,
                Units.degreesToRadians(result.getBestTarget().getPitch()));
        return range;

    } else {
        // If we have no targets, stay still.
        return 0;
        // When this is implemented - DO NOTHING IF RANGE IS 0
    }
}

public double getYaw() {
    var result = getLatestCameraResult();
    double yaw = 0.0;
    if (result.hasTargets()) {
        // Calculate angular turn power
        // Remove -1.0 because it was inverting results.
        yaw = result.getBestTarget().getYaw();
    }
    return yaw;
}

public double getPitch() {
    var result = getLatestCameraResult();
    double pitch = 0.0;
    if (result.hasTargets()) {
        // Calculate angular turn power
        // Remove -1.0 because it was inverting results.
        pitch = result.getBestTarget().getPitch();
    }

    return pitch;
}

}

Note Detection Subsystem:

// Copyright (c) FIRST and other WPILib contributors.
// Open Source Software; you can modify and/or share it under the terms of
// the WPILib BSD license file in the root directory of this project.

package frc.robot.subsystems;

import static frc.robot.Constants.VisionConstants.kCameraNameNote;

import org.photonvision.PhotonUtils;

import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.util.Units;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj2.command.SubsystemBase;
import frc.robot.Constants;
import frc.robot.Vision;

public class NoteFinder extends SubsystemBase {
    final double CAMERA_HEIGHT_METERS = Units.inchesToMeters(12);
    final double TARGET_HEIGHT_METERS = Units.feetToMeters(0);
    // Angle between horizontal and the camera.
    final double CAMERA_PITCH_RADIANS = Units.degreesToRadians(0);

    // How far from the target we want to be
    final double GOAL_RANGE_METERS = Units.feetToMeters(0);

    // Change this to match the name of your camera
    // CHANGED CAMERA NAME TO A CONSTANT

    // static PhotonCamera camera = new PhotonCamera("Microsoft_LifeCam_HD-3000");

    // static PhotonCamera camera = new PhotonCamera(Constants.kCamName);

    // PID constants should be tuned per robot
    final double LINEAR_P = 0.1;
    final double LINEAR_D = 0.0;
    PIDController forwardController = new PIDController(LINEAR_P, 0, LINEAR_D);
    final double ANGULAR_P = 0.05;
    final double ANGULAR_D = 0.0;
    PIDController turnController = new PIDController(ANGULAR_P, 0, ANGULAR_D);

    DriveSubsystem m_drive;
    Vision m_vision;
    public double getRotation() {
        var result = m_vision.getLatestResult();
        double rotation;
        if (result.hasTargets()) {
            // Calculate angular turn power
            // Remove -1.0 because it was inverting results.
            rotation = -turnController.calculate(result.getBestTarget().getYaw(), 0) * Constants.kRangeSpeedOffset;
        } else {
            // If we have no targets, stay still.
            rotation = 0;
        }
        return rotation;
    }

    // HEADER - METHOD TO FIND DISTANCE FROM TARGET
    public double getRange() {
        var result = m_vision.getLatestResult();
        if (result.hasTargets()) {
            // First calculate range
            double range = PhotonUtils.calculateDistanceToTargetMeters(
                    CAMERA_HEIGHT_METERS, // Previously declared
                    TARGET_HEIGHT_METERS,
                    CAMERA_PITCH_RADIANS,
                    Units.degreesToRadians(result.getBestTarget().getPitch()));

            // THE FOLLOWING EQUATION CAN BE USED TO CALCULATE FORWARD SPEED
            // Use this range as the measurement we give to the PID controller.
            // -1.0 required to ensure positive PID controller effort _increases_ range
            double forwardSpeed = -DriveSubsystem.turnController.calculate(range, GOAL_RANGE_METERS);
            return forwardSpeed * Constants.kRangeSpeedOffset;
        } else {
            // If we have no targets, stay still.
            return 0;
            // When this is implemented - DO NOTHING IF RANGE IS 0
        }
    }

    // Method to get the distance to the target.
    public double getDistance() {
        var result = m_vision.getLatestResult();
        if (result.hasTargets()) {
            // First calculate range
            return PhotonUtils.calculateDistanceToTargetMeters(
                    CAMERA_HEIGHT_METERS, // Previously declared
                    TARGET_HEIGHT_METERS,
                    CAMERA_PITCH_RADIANS,
                    Units.degreesToRadians(result.getBestTarget().getPitch()));
        } else {
            // If we have no targets, stay still.
            return 0;
            // When this is implemented - DO NOTHING IF RANGE IS 0
        }
    }

    /** Creates a new NoteFinder object. */
    public NoteFinder(Vision vision) {
        m_vision = vision;
    }

    @Override
    public void periodic() {
        // This method will be called once per scheduler run
        SmartDashboard.putNumber("Sim-Robot (Vision) Speed", getRange());
        SmartDashboard.putNumber("Sim-Robot (Vision) Rotation", getRotation());
    }

    public void setDriveController(DriveSubsystem robotDrive) {
        m_drive = robotDrive;
    }

    public int NearestNotePosition() {
        return 1; // Placeholder for when vision is utilized in here to find a note's position
    }
}

Note Detection Command:

// Copyright (c) FIRST and other WPILib contributors.
// Open Source Software; you can modify and/or share it under the terms of
// the WPILib BSD license file in the root directory of this project.

package frc.robot.commands;

import edu.wpi.first.wpilibj2.command.Command;
import frc.robot.Constants;
import frc.robot.subsystems.DriveSubsystem;
import frc.robot.subsystems.Intake;
import frc.robot.subsystems.NoteFinder;

public class MoveToClosestNote extends Command {
    private final NoteFinder m_NoteFinder;
    private final DriveSubsystem m_drive;
    private final Intake m_IntakeSystem;

    /** Creates a new MoveToClosestNote. */
    public MoveToClosestNote(NoteFinder finder, DriveSubsystem drive, Intake intake) {
        // Use addRequirements() here to declare subsystem dependencies.
        m_drive = drive;
        m_NoteFinder = finder;
        m_IntakeSystem = intake;
    }

    // Called when the command is initially scheduled.
    @Override
    public void initialize() {
        // This stops the command if the note finder can't find a note; Seth is forcing me to write fancy.
        if (m_NoteFinder.getRotation() == 0 && m_NoteFinder.getRange() == 0) {
            this.cancel();
        }
    }

    // Called every time the scheduler runs while the command is scheduled.
    @Override
    public void execute() {
        double rotation = m_NoteFinder.getRotation();
        double speed = m_NoteFinder.getRange();

        if (speed > 4) {
            speed = 4;
        } else if (speed < -4) {
            speed = -4;
        }
        speed = Math.abs(speed);
        speed += Constants.kDefaultNoteFinderSpeed;

        if (Math.abs(rotation) > 0.05) {
            m_drive.drive(speed, 0, -rotation, false);
        } else if (speed != 0) {
            m_drive.drive(speed, 0, -rotation, false);
            if (speed < 2) {
                m_drive.drive(2, 0, -rotation, false);
            }
        } else {
            m_drive.drive(Constants.kDefaultNoteFinderSpeed, 0, 0, false);
        }
    }

    // Returns true when the command should end.
    @Override
    public boolean isFinished() {
        return m_IntakeSystem.NoteIsPresent();
    }

    // Called once the command ends or is interrupted.
    @Override
    public void end(boolean interrupted) {}
}

Full code:

Do you have any driver station logs from an occurrence of this?

Driver station logs are typically one of the best places to look first when it appears things could be related to packet loss, CAN issues, battery/power problems, etc.

More info here if you aren’t familiar - Driver Station Log File Viewer — FIRST Robotics Competition documentation

We have multiple full logs of instances when the issue occurred. Unfortunately, they are ridiculously clean. We can’t find anything wrong.

Thanks for the help though.

The next step might be to turn on onboard logging (WPILib logger, or DogLog looks good). Make sure to log everything about the Drivetrain and the joysticks. Then try and capture a failure.
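If you go the WPILib route, turning the logger on is only a couple of lines in robotInit(). A minimal sketch, assuming the standard WPILib DataLogManager API:

    // In Robot.java (sketch): start the on-robot data logger and capture joystick data.
    import edu.wpi.first.wpilibj.DataLogManager;
    import edu.wpi.first.wpilibj.DriverStation;

    @Override
    public void robotInit() {
        // Writes a .wpilog file to a USB stick if present, else to the roboRIO.
        DataLogManager.start();
        // Also record every joystick/Driver Station packet into the same log so
        // driver inputs can be lined up against drivetrain behavior after a failure.
        DriverStation.startDataLog(DataLogManager.getLog());
    }

You can then open the .wpilog after a failure and overlay joystick values against drivetrain outputs.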

How are you powering the OPI?

If you’re referring to the Orange Pi, we use a power bank to power it. We have tested unplugging it while the robot was using vision, but were unable to replicate the issue.

I see one thing in your code that could cause the behavior you describe while this button binding is active:

    new JoystickButton(m_gunnerController, Button.kRightBumper.value)
        .whileTrue(new GrabNote(m_NoteFinder, m_robotDrive, m_robotIntake)
        .andThen(new NoteInPlace(m_robotIntake))
        .andThen(new BackwardsIntake(m_robotIntake, m_robotShooter).withTimeout(.1))
        .andThen(new ControllerRumble(m_gunnerController).withTimeout(0.2)));

The movement is accomplished via MoveToClosestNote (part of GrabNote). In MoveToClosestNote.initialize(), you check for a target and cancel if there is none. In MoveToClosestNote.execute(), you make no such check. So if you initially have a target and then lose it for whatever reason (obstruction, another robot picks the note up, etc.), the command continues to execute without a target.

While continuing to execute, the code uses the 0 rotation and speed values you get back, and it looks like the speed is adjusted at line 52 to 1 m/s and then again to 2 m/s at line 62. I think this would cause straight-line movement at 2 m/s along the robot's current heading. Is this what you are seeing?

I also noticed that MoveToClosestNote is not adding its requirements (it should). This could mean that the default drive command continues to run at the same time, which could explain the drivers' ability to partially control the robot.

If this is it, first add the requirements. Second, a simple fix would be to cancel if you lose the target. However, this could lead to unnecessary cancellation on an intermittent target loss. You may want to debounce the target-loss case in execute (that is, only give up after some number of consecutive no-target readings). During the debounce period, you could use the last known-good reading values. You would need to add MoveToClosestNote instance variables to track the debounce count and the last good vision values.
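A minimal sketch of that debounce (the field and constant names here are placeholders, not from your code):

    // Hypothetical debounce for MoveToClosestNote: coast on the last good vision
    // reading for a few cycles, then finish if the target stays lost.
    private int m_noTargetCount = 0;
    private double m_lastRotation = 0.0;
    private double m_lastSpeed = 0.0;
    private boolean m_targetLost = false;
    private static final int kMaxNoTargetCycles = 10; // ~0.2 s at the 50 Hz loop; tune per robot

    @Override
    public void execute() {
        double rotation = m_NoteFinder.getRotation();
        double speed = m_NoteFinder.getRange();

        if (rotation == 0 && speed == 0) {
            // No target this cycle: reuse the last known-good values for a short window.
            if (++m_noTargetCount > kMaxNoTargetCycles) {
                m_targetLost = true; // have isFinished() return true when this is set
                return;
            }
            rotation = m_lastRotation;
            speed = m_lastSpeed;
        } else {
            m_noTargetCount = 0;
            m_lastRotation = rotation;
            m_lastSpeed = speed;
        }

        // ... existing clamping and m_drive.drive(speed, 0, -rotation, false) call here ...
    }

You would also reset m_noTargetCount and m_targetLost in initialize(), and OR m_targetLost into the isFinished() return value.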


That bug was added intentionally: the robot's camera would lose sight of the note when it got too close, so we programmed the robot to keep going straight after losing sight of the note, but to stop when the trigger was released. At a regional competition where we ran this code, it worked as intended.

When the issue triggers, the robot never drives straight or in any sort of recognizable pattern. The other issue is that the drivers cannot stop the robot by letting go of the trigger, even though the command is bound with whileTrue().

As far as adding requirements, we did notice that and corrected it while testing, but the issue was not resolved. I don’t know why that fix isn’t present in that branch. We’ll fix that.

First, my thought about it driving straight could be a misreading. Also, a whileTrue binding not stopping on release is very odd. How about putting a couple of log entries or simple prints in the code? One in execute that logs only if 0 comes back for the speed, and a second in end(boolean) that simply verifies that you get there on the trigger release.

Not getting to end(boolean) would point us in one direction, and getting there but not stopping in another. If you do get to end(boolean), you may want to add a line of code in there to stop the drive.
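Something along these lines (a sketch, not your exact code):

    // Sketch: minimal instrumentation for MoveToClosestNote.
    @Override
    public void execute() {
        double speed = m_NoteFinder.getRange();
        if (speed == 0) {
            System.out.println("MoveToClosestNote: vision returned 0 speed (no target?)");
        }
        // ... existing drive logic ...
    }

    @Override
    public void end(boolean interrupted) {
        // Verifies whether the scheduler actually ends the command on trigger release.
        System.out.println("MoveToClosestNote.end, interrupted=" + interrupted);
        // Belt and suspenders: explicitly stop the drivetrain on the way out.
        m_drive.drive(0, 0, 0, false);
    }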

Second, your line in MoveToClosestNote.initialize() to cancel itself will not do anything meaningful. The MoveToClosestNote command is a member of a command group; the scheduler only knows about the group, not its member commands. The scheduler will ignore this cancel request, and MoveToClosestNote will keep running even if no note is present at initialize(). When the command group ends, it should still call end(boolean) on members that have not already ended. This makes it easier to get into the mode of trying to move without a target, but does not explain how a whileTrue-bound command would fail to stop on release.

Also, can you point us to a match where this happened? I would like to take a look at the video.

Y'all aren't too far away in Raleigh; if needed, I would love to come by one day and see how I might be able to help. @Seth-Gearcats6500

First, is that the issue with 1:17 left in the match? Second, is the turn with 1:10 left the partial control the driver mentioned, or is that more of the same issue? Third, do you not move anymore because the command in question has not terminated so that the drive default command is either not running (you had your requirements at that time), or is not responsive?

From the state of the field (where notes were sitting), my guess is that it was not seeing a note at the time. Maybe it was momentarily confused that something else was a note over by the wall.

Also, as a follow-up to my last comment about the cancel in the initialize method not doing anything, I failed to offer a workaround. One way, when you decide there is no note to target (in initialize or execute), is to set a haveTargetNote boolean to false, which then causes isFinished to return true (or return !haveTargetNote ORed with the condition already there). Be sure to initialize haveTargetNote properly in initialize() to either true or false.
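In code, that workaround might look like this (a sketch; haveTargetNote is just a suggested name):

    private boolean haveTargetNote;

    @Override
    public void initialize() {
        // True only if vision actually sees a note right now.
        haveTargetNote = !(m_NoteFinder.getRotation() == 0 && m_NoteFinder.getRange() == 0);
    }

    @Override
    public boolean isFinished() {
        // Ends the command cleanly even inside a group, instead of relying on cancel().
        return !haveTargetNote || m_IntakeSystem.NoteIsPresent();
    }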

Are you using the colored object pipeline or the object detection pipeline to detect the note?

First, the issue does occur with about 1:17 left. Second, the drivers had already lost all control by 1:17 in that match. Third, the robot was emergency-stopped, and that is why it was sitting there. The drivers were unable to stop the issue despite releasing the note-finding trigger.

As far as the robot not seeing a note or chasing a note that was not there, we tried two tests yesterday to check if that was the issue, but the robot operated as expected.

Looking at the code now, but I wanted to mention that we had an issue like this. Our robot would randomly drive in a random direction and stop taking inputs. It took us a week to figure out the problem.

It was a cracked CANcoder case on one of our swerve modules. It caused the sensor to randomly float above the magnet and return absurd headings in the billions.

The problem was that the swerve code we were using at the time used a loop to normalize that value to between 0 and 360. It takes a while to normalize a few billion degrees when you are counting down 360 at a time. It was effectively an infinite loop, and the motors would continue executing their last command.

We found the problem by profiling the code and logging any suspect values. First we fixed the normalization code to not use a loop, then we started looking at why we were getting bad values.
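For reference, a loop-free normalization is constant-time with the modulo operator; roughly (a sketch):

    // Maps any heading, even one in the billions of degrees, into [0, 360)
    // in constant time instead of subtracting 360 repeatedly.
    public static double normalizeDegrees(double degrees) {
        double wrapped = degrees % 360.0;               // result is in (-360, 360)
        return wrapped < 0 ? wrapped + 360.0 : wrapped; // shift negatives into [0, 360)
    }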

So my recommendations would be to look for any loops, verify parameters, and run a profiler on your code.


Do you still have the logs from this match?

Also, wrt…

When did they lose control (do you mean at 1:17 or sometime earlier) and what were they doing at that point in time?

We're using the built-in note detection pipeline that comes with PhotonVision.

The issue you had seems very similar to ours. Just to clarify: did your robot always have weird driving problems, or did they occur randomly?

Completely random, but usually when driving quickly. Anything to jostle the CANcoder and make the sensor throw a bad value.