How to save positions for latency compensation

I’m trying to implement latency compensation in my vision system and I’m not sure how to save my robot’s positions over time. My vision system is supposed to drive my robot to a target captured by my camera.

Since I’m using the Talon SRX’s Motion Magic feature as my controller (it’s the only one that can dynamically create trajectories; is there a better controller for this?), I only need to send it my desired distance and desired angle and I’m good to go (leftOutput = distance + angle, rightOutput = distance - angle). Because of that, I’m not sure how to save position points. Should I save them as points (x, y, angle)? If so, what is my Talon setpoint? Or should I save each position as left encoder count, right encoder count, and angle, and send my controller those counts plus my error from the camera at the time the position was captured? Is that as accurate as saving points? I’ve seen that most teams save their positions as points.
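For reference, here is a rough sketch (not from any team’s actual code) of what the “encoder counts” option could look like, assuming CTRE’s Phoenix API and a camera that reports distance and angle errors. The tick-conversion constants are placeholders you would measure on your own robot:

```java
// Hypothetical sketch: take the camera's distance/angle error, convert to sensor
// ticks, and arbitrate it onto each side as a Motion Magic position setpoint.
// TICKS_PER_METER and TICKS_PER_DEGREE_OF_TURN are placeholders, not real numbers.
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class VisionDrive {
    private static final double TICKS_PER_METER = 4096.0 / 0.478;   // e.g. 4096 CPR, 6 in wheel
    private static final double TICKS_PER_DEGREE_OF_TURN = 50.0;    // placeholder, measure on your robot

    private final TalonSRX left = new TalonSRX(1);
    private final TalonSRX right = new TalonSRX(2);

    /** distanceMeters and angleDegrees are the errors reported by the camera.
     *  Assumes Motion Magic cruise velocity/acceleration are already configured. */
    public void driveToTarget(double distanceMeters, double angleDegrees) {
        double distanceTicks = distanceMeters * TICKS_PER_METER;
        double turnTicks = angleDegrees * TICKS_PER_DEGREE_OF_TURN;

        // leftOutput = distance + angle, rightOutput = distance - angle,
        // expressed as absolute Motion Magic position targets.
        left.set(ControlMode.MotionMagic, left.getSelectedSensorPosition() + distanceTicks + turnTicks);
        right.set(ControlMode.MotionMagic, right.getSelectedSensorPosition() + distanceTicks - turnTicks);
    }
}
```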

I present for your consideration our interpolating history buffer from 2017. Basic idea: every loop you take a measurement of pose (x/y/theta, or whatever you like) and store it in the buffer, along with the timestamp of when the measurement was taken. The buffer hangs on to the last ~100 samples. A getter interface is provided where you pass in the timestamp you care about, and the buffer returns values linearly interpolated between the stored samples on either side of the requested timestamp.
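A minimal sketch of that kind of buffer, assuming poses are stored as (x, y, theta) keyed by timestamp (the class and method names are just illustrative):

```java
// Interpolating pose history buffer: store timestamped poses, look them up later
// with linear interpolation between the two nearest samples.
import java.util.Map;
import java.util.TreeMap;

public class PoseHistoryBuffer {
    private static final int MAX_SAMPLES = 100;
    private final TreeMap<Double, double[]> samples = new TreeMap<>(); // time -> {x, y, theta}

    /** Call once per loop with the current measurement. */
    public void addSample(double timestamp, double x, double y, double theta) {
        samples.put(timestamp, new double[] {x, y, theta});
        while (samples.size() > MAX_SAMPLES) {
            samples.pollFirstEntry(); // drop the oldest sample
        }
    }

    /** Returns the pose at the requested time, linearly interpolated between stored samples. */
    public double[] getInterpolated(double timestamp) {
        Map.Entry<Double, double[]> floor = samples.floorEntry(timestamp);
        Map.Entry<Double, double[]> ceiling = samples.ceilingEntry(timestamp);
        if (floor == null) return ceiling == null ? null : ceiling.getValue();
        if (ceiling == null || floor.getKey().equals(ceiling.getKey())) return floor.getValue();

        double t = (timestamp - floor.getKey()) / (ceiling.getKey() - floor.getKey());
        double[] a = floor.getValue();
        double[] b = ceiling.getValue();
        return new double[] {
            a[0] + t * (b[0] - a[0]),
            a[1] + t * (b[1] - a[1]),
            a[2] + t * (b[2] - a[2]) // note: naive for theta near the +/-180 degree wrap
        };
    }
}
```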

Note that in theory you should be able to calculate X/Y coordinates from encoder counts (plus constants like wheel size, track width, etc.), though that’s more a question of how to do odometry than of how to store historical positions.
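If it helps, here’s a rough sketch of that calculation for a differential drive, assuming the encoder counts have already been converted to meters (names and conventions are placeholders):

```java
// Basic differential-drive odometry from encoder deltas. Integrates forward
// distance and heading change each loop into a field-relative (x, y, theta) pose.
public class SimpleOdometry {
    private double x = 0.0, y = 0.0, theta = 0.0; // theta in radians
    private double lastLeftMeters = 0.0, lastRightMeters = 0.0;

    /** Call every loop with cumulative encoder distances (counts * meters-per-count). */
    public void update(double leftMeters, double rightMeters, double trackWidthMeters) {
        double dLeft = leftMeters - lastLeftMeters;
        double dRight = rightMeters - lastRightMeters;
        lastLeftMeters = leftMeters;
        lastRightMeters = rightMeters;

        double dCenter = (dLeft + dRight) / 2.0;             // forward distance this loop
        double dTheta = (dRight - dLeft) / trackWidthMeters; // change in heading this loop

        // Integrate assuming the robot moved straight at the mid-heading of this loop.
        x += dCenter * Math.cos(theta + dTheta / 2.0);
        y += dCenter * Math.sin(theta + dTheta / 2.0);
        theta += dTheta;
    }

    public double[] getPose() {
        return new double[] {x, y, theta};
    }
}
```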

For odometry, this year is the first time we’re attempting to keep track of X/Y/theta at all times on the robot. We know it will drift over the course of the match. But the technique I’ve seen most other controls-y teams use is to calculate relative x/y/theta displacement from an arbitrary origin they pick at a convenient time (e.g., right as you start to align to a vision target), rather than attempting to keep an accurate global field position throughout the match.


I implemented a kind of interpolating buffer and was successful at tracking my robot’s position as points (x, y, theta) over time, but I’m not sure how to take a stored position and the desired position and send them to the Motion Magic controller. Is there a controller better suited for this? Otherwise I’m having trouble seeing the advantage of saving points over saving the encoder counts of each side.

Ok, I think I gotcha. This I am not sure of. Historically we have used some sort of path-planning algorithm (Falcon or Jaci’s) to generate smooth velocity-over-time profiles from a set of points (relative to the starting position). We then send those velocity profiles directly to the motor controllers, which are closed-loop around velocity only. We’ve avoided using Motion Magic by tweaking the path planner’s input parameters to generate gently-changing, easy-to-track velocity commands.
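As a sketch of what that looks like on the Talon side, assuming Phoenix’s velocity closed-loop mode and a made-up unit-conversion constant:

```java
// Follow a precomputed velocity profile with the SRX velocity closed loop.
// METERS_PER_SEC_TO_TICKS_PER_100MS is a placeholder for your own scaling,
// and the profile samples come from whatever path planner you use.
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class VelocityProfileFollower {
    private static final double METERS_PER_SEC_TO_TICKS_PER_100MS = 8550.0 / 10.0; // example scaling

    private final TalonSRX left = new TalonSRX(1);
    private final TalonSRX right = new TalonSRX(2);

    /** Call every loop (e.g. 20 ms) with the next sample of the pre-generated profile. */
    public void followSample(double leftVelMetersPerSec, double rightVelMetersPerSec) {
        // Talon velocity setpoints are in sensor ticks per 100 ms.
        left.set(ControlMode.Velocity, leftVelMetersPerSec * METERS_PER_SEC_TO_TICKS_PER_100MS);
        right.set(ControlMode.Velocity, rightVelMetersPerSec * METERS_PER_SEC_TO_TICKS_PER_100MS);
    }
}
```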

I know he’s busy now, but I’ll toss a jingle out to @Dkt01 and team - did you folks end up using Motion Magic last year for controlling your drivetrain in auto?


Last year we used SRX closed loop velocity mode for our drive train, and we used both motion magic and motion profile modes in other functions of our robot. Our code is available on GitHub (in LabVIEW, but there are screenshots of each VI if you don’t use LabVIEW and want to reference it anyway).

Our driving approach was similar to what @gerthworm described. We used Jaci’s Pathfinder to generate a trajectory, then followed that trajectory by commanding target velocities. We used our IMU (z-axis gyro only) to adjust the target velocities to correct for turning. We were fairly successful with this approach.
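The gyro correction can be as simple as a proportional term on heading error. Here’s a hedged sketch of that idea; the gain, sign convention, and units are assumptions, not our actual numbers:

```java
// Gyro-based heading correction applied on top of trajectory velocities.
public class HeadingCorrectedFollower {
    private static final double kHeading = 0.01; // proportional gain, tune on the robot

    /**
     * @param desiredLeftVel / desiredRightVel  velocities from the trajectory sample
     * @param desiredHeadingDeg                 heading the trajectory expects at this sample
     * @param gyroHeadingDeg                    current z-axis gyro reading
     * @return adjusted {left, right} velocities
     */
    public double[] correct(double desiredLeftVel, double desiredRightVel,
                            double desiredHeadingDeg, double gyroHeadingDeg) {
        double headingError = desiredHeadingDeg - gyroHeadingDeg;
        // Wrap the error into [-180, 180) so the correction takes the short way around.
        headingError = ((headingError + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;

        double turnAdjust = kHeading * headingError;
        return new double[] {desiredLeftVel - turnAdjust, desiredRightVel + turnAdjust};
    }
}
```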

Before we settled on that approach, we did explore feeding motion profile points to our drive train (like we did for our arm control), but we ran into many issues with the two sides of the robot behaving differently over the long drive paths.

We did use motion profile mode on the SRXs to synchronize motion of our arm’s extension and rotation to result in linear motion paths within our allowed frame perimeter extension. We did this by generating a full path, then having an asynchronous loop feed points into the SRX buffers as they emptied.
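A skeleton of that feeder pattern, assuming CTRE’s motion profile API (pushMotionProfileTrajectory / processMotionProfileBuffer) and a WPILib Notifier for the asynchronous loop; TrajectoryPoint field names vary between Phoenix versions, so treat the point-filling code as approximate:

```java
// Feed a pre-generated path into the Talon's motion profile buffer from an
// asynchronous loop so the on-board buffer never runs dry. Actually starting the
// profile (ControlMode.MotionProfile + SetValueMotionProfile) is omitted here.
import com.ctre.phoenix.motion.TrajectoryPoint;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;
import edu.wpi.first.wpilibj.Notifier;

public class ProfileFeeder {
    private final TalonSRX talon = new TalonSRX(3);
    // Pushes buffered points down to the Talon faster than they are consumed.
    private final Notifier feeder = new Notifier(talon::processMotionProfileBuffer);

    /** Each row of path is {position in sensor units, velocity in sensor units per 100 ms}. */
    public void startProfile(double[][] path) {
        talon.clearMotionProfileTrajectories();
        for (int i = 0; i < path.length; i++) {
            TrajectoryPoint point = new TrajectoryPoint();
            point.position = path[i][0];
            point.velocity = path[i][1];
            point.zeroPos = (i == 0);
            point.isLastPoint = (i == path.length - 1);
            talon.pushMotionProfileTrajectory(point);
        }
        feeder.startPeriodic(0.005); // keep the Talon's on-board buffer topped up
    }
}
```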

Overall, I would not recommend doing motion profile points for your drive train (and especially not Motion Magic, since it is designed more for target endpoints than for continuous motion waypoints), as you will probably spend more time debugging it than closed-loop velocity control, and you may not get the results you are hoping for.


Will motion profiling work as the control loop for the vision system?
Since the setpoints are constantly changing, is it better to use a simple PID loop or Motion Magic?

In general, motion profiling is a way of pre-processing your setpoints before sending them to a controller, so they better match the physics of the mechanism. It is what controls how the setpoints change over time. All of this is “upstream” of the controller (PID or similar). The controller’s job is generally to take a desired setpoint and an actual measurement, and produce actuator control signals that drive the measurement toward the setpoint.
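A toy illustration of that split, with a rate-limited setpoint standing in for the profile and a bare proportional term standing in for the controller (all numbers are made up):

```java
// "Profile upstream, controller downstream": the profile limits how fast the
// setpoint is allowed to move; the controller chases the profiled setpoint.
public class ProfiledController {
    private static final double MAX_SETPOINT_STEP = 0.02; // max setpoint change per 20 ms loop
    private static final double kP = 2.0;                 // placeholder gain

    private double profiledSetpoint = 0.0;

    /** Call every loop with the raw goal and the current measurement; returns motor output. */
    public double calculate(double goal, double measurement) {
        // "Motion profiling": move the internal setpoint toward the goal at a bounded rate.
        double error = goal - profiledSetpoint;
        profiledSetpoint += Math.max(-MAX_SETPOINT_STEP, Math.min(MAX_SETPOINT_STEP, error));

        // "Controller": drive the measurement toward the profiled setpoint.
        return kP * (profiledSetpoint - measurement);
    }
}
```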

“Better” is relative to context. In general I would advise starting simple, and adding complexity only when you have exhausted the capabilities of the simple system.

That sounds like you were doing a left/right approach instead of a translation/turn approach.

What some teams do is motion-profile each side independently. A better (correct) approach is a single robot profile on a single “master” that calculates robot-forward distance and robot turn, then servos on robot distance and robot heading (the difference of the two encoders, or a Pigeon IMU). You can do that in the RIO or in the Talon.
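On the Talon side, that pattern maps to Motion Magic with the auxiliary PID loop. Here is a heavily trimmed sketch assuming CTRE’s Phoenix API; the remote-sensor (sum/difference) and polarity configuration is omitted, so see CTRE’s drivetrain examples for the full setup:

```java
// Single "master" runs a Motion Magic distance profile while the aux PID servos
// robot heading; the other side follows with AuxOutput1.
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.DemandType;
import com.ctre.phoenix.motorcontrol.FollowerType;
import com.ctre.phoenix.motorcontrol.can.TalonSRX;

public class DistanceHeadingDrive {
    private final TalonSRX rightMaster = new TalonSRX(2); // runs the robot-distance profile
    private final TalonSRX leftFollower = new TalonSRX(1);

    public DistanceHeadingDrive() {
        // Left side mirrors the master's primary output and applies the aux (turn) term
        // with the opposite sign (depending on configAuxPIDPolarity).
        leftFollower.follow(rightMaster, FollowerType.AuxOutput1);
    }

    /** targetDistanceTicks is robot forward distance; targetHeadingUnits is the aux (heading) setpoint. */
    public void driveTo(double targetDistanceTicks, double targetHeadingUnits) {
        rightMaster.set(ControlMode.MotionMagic, targetDistanceTicks,
                DemandType.AuxPID, targetHeadingUnits);
    }
}
```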
