Sync robot actions with a motion profile

We tried having a human operator control secondary functions (elevator and intake) while the robot is simultaneously driving a motion profile. We record elevator position and intake speed at 10 ms intervals using a periodic Notifier set to a 10 ms period. Both profiles have 1500 points, each with a 10 ms duration.
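For context, here is a minimal sketch of the kind of fixed-rate recorder involved. SecondaryRecorder is a hypothetical name for illustration; on the robot, sample() would be called from the 10 ms Notifier with real elevator and intake sensor readings:

```java
// Sketch of a fixed-rate recorder for secondary functions.
// On the robot, sample() is driven by a 10 ms periodic Notifier.
public class SecondaryRecorder {
    public static final int MAX_POINTS = 1500;   // 15 s at 10 ms per point
    private final double[] elevatorPos = new double[MAX_POINTS];
    private final double[] intakeSpeed = new double[MAX_POINTS];
    private int numptsRead = 0;

    // Store one sample; ignore extras once the buffer is full.
    public void sample(double elevator, double intake) {
        if (numptsRead < MAX_POINTS) {
            elevatorPos[numptsRead] = elevator;
            intakeSpeed[numptsRead] = intake;
            numptsRead++;
        }
    }

    public int getNumPoints() { return numptsRead; }
    public double getElevator(int i) { return elevatorPos[i]; }
    public double getIntake(int i) { return intakeSpeed[i]; }
}
```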

Playing back both the motion profile and the recorded secondary functions simultaneously works pretty well. However, there seems to be a lag between the motion profile and the secondary functions.

Over a 15 second recording, the drive motion profile lags by about 1000 ms. I tried recording the start and end times of each. They start within 2 ms of each other; the secondary profile runs exactly 15000 ms, while the drive motion profile runs about 16000-16900 ms. The start time of the drive profile is set when the Talon is enabled, and its end time is determined by an is_last trigger. The secondary profile is started when the Talon is enabled, and ends after 1500 points have been executed.

Our code is based on the CTRE MotionProfileArc example, with a Pigeon for heading control. There’s nothing in the code specifically trying to sync the two routines - and I’m not sure how, or whether, we could. The secondary runs strictly on the periodic Notifier, while the Talon just does its own thing with the same number of 10 ms duration points.

Anyone have any thoughts on how to make this work? Or, how do others coordinate a motion profile with other robot actions?

Here’s how we set up the Runnable in the motion profile constructor:

class PeriodicRunnableSecondary implements java.lang.Runnable {
	public void run() { executeSecondaryProfile(); }
}
Notifier _notiferSec = new Notifier(new PeriodicRunnableSecondary());

These are the methods for running the secondary:

void executeSecondaryProfile() {
	if (ipointsec < Recorder.numptsRead) {
		// play back the recorded elevator position and intake speed for this point, then advance
		ipointsec++;
	} else if (ipointsec == Recorder.numptsRead) {
		// print the end time for diagnostics
		System.out.println("end sec = " + edu.wpi.first.wpilibj.Timer.getFPGATimestamp());
		stopSecondaryProfile();
	}
}
void startSecondaryProfile() { ipointsec = 0; _notiferSec.startPeriodic(0.010); }  // 10 ms period
void stopSecondaryProfile() { _notiferSec.stop(); }

Here’s the part of code in our teleopPeriodic that lets the user start the drive and secondary profiles. It’s not very streamlined yet:

// start motion profile on first push of button 5, continue running while it is held

else if (btns[5] && !_btns[5]) {
	boolean bMoveForward = true;  // true for forward, false for backwards !!!
	double finalturnheading = 0;  // not using this
	double finalHeading_units = Constants.kTurnTravelUnitsPerRotation *
			finalturnheading * -1.0;
	// This only starts the filling process; the profile doesn't start executing until we set the
	// motion profile value to Enable, which is done below. getSetValue is not set to Enable until
	// the number of points filled reaches the minimum necessary.
	motionprofiler.start(finalHeading_units, bMoveForward);
}
else if (btns[5] && _btns[5]) {
	if (!running_secondary && motionprofiler.getSetValue() == SetValueMotionProfile.Enable) {
		System.out.println("start sec = " + edu.wpi.first.wpilibj.Timer.getFPGATimestamp());
		startSecondaryProfile();
		running_secondary = true;
	}
	if (!running_primary && motionprofiler.getSetValue() == SetValueMotionProfile.Enable) {
		System.out.println("start primary = " + edu.wpi.first.wpilibj.Timer.getFPGATimestamp());
		running_primary = true;
	}
	if (running_primary && (motionprofiler.getSetValue() == SetValueMotionProfile.Hold
			|| motionprofiler.getSetValue() == SetValueMotionProfile.Disable)) {
		System.out.println("end primary = " + edu.wpi.first.wpilibj.Timer.getFPGATimestamp());
		running_primary = false;
	}
	drive.setMotors(ctrlmode, motionprofiler.getSetValue().value, FollowerType.AuxOutput1);
}
else if (!btns[5] && _btns[5]) {
	drive.setMotors(ctrlmode, 0, 0);
}

With the SRX motion profile, your only reference point is the active trajectory's velocity and position. You could compare those to your data, but that takes a lot of horsepower. What we found to work is this: when you start your motion profile, record the start time from the RIO clock. You then take the elapsed time and map it to the point in the path your robot is at. This method can be tuned so that each individual point can have a function associated with it. It is not perfect by any means, but it works fairly well.
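As a sketch of that idea (pure index math; the clamping is my own addition, and on the robot the start time would come from Timer.getFPGATimestamp() when the profile is enabled):

```java
public class ProfileClock {
    // Map elapsed time since the profile started to a trajectory point index.
    // pointDurationSec is 0.010 for 10 ms points.
    public static int pointIndex(double nowSec, double startSec,
                                 double pointDurationSec, int totalPoints) {
        int index = (int) ((nowSec - startSec) / pointDurationSec);
        // Clamp so early or late calls still return a valid point.
        if (index < 0) index = 0;
        if (index >= totalPoints) index = totalPoints - 1;
        return index;
    }
}
```

Each loop you compute the index and fire whatever secondary action is associated with that point.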

Thank you. We can give that a try. I also thought we could try looking at the number of points remaining in the buffers and relating that to where in the path we are.

We generally checked active trajectory position in our 2018 code.

We had tried that, and it seemed to work as long as the trajectory position didn’t repeat. Otherwise things got a little clunky.
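A sketch of that kind of position lookup (nearestIndex is a hypothetical helper; on the robot the query value would come from the Talon's getActiveTrajectoryPosition()). The ambiguity is visible here - when the recorded positions repeat, the search can only return one of the matching indices:

```java
public class PositionLookup {
    // Find the index of the recorded profile position closest to the
    // Talon's current active trajectory position. If the path revisits
    // a position, this returns the first closest match, which is why
    // position-based sync gets clunky on paths that double back.
    public static int nearestIndex(double[] recordedPositions, double activePosition) {
        int best = 0;
        double bestErr = Math.abs(recordedPositions[0] - activePosition);
        for (int i = 1; i < recordedPositions.length; i++) {
            double err = Math.abs(recordedPositions[i] - activePosition);
            if (err < bestErr) {
                bestErr = err;
                best = i;
            }
        }
        return best;
    }
}
```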

I just tried synching the secondary playback to the number of points in the top and bottom buffer in the talon:
secondary playback point = totalPoints - pointsTopBuffer - pointsBottomBuffer
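In code form (pure index math; on the robot the two counts come from the Talon's motion profile status - in Phoenix I believe those are the MotionProfileStatus topBufferCnt and btmBufferCnt fields - and the clamping is my own addition):

```java
public class BufferSync {
    // The Talon consumes trajectory points from its on-board (bottom) buffer,
    // which is fed from the RIO-side (top) buffer. Points no longer in either
    // buffer have already been executed, so the executed count is the
    // playback index for the secondary profile.
    public static int playbackIndex(int totalPoints, int pointsTopBuffer, int pointsBottomBuffer) {
        int index = totalPoints - pointsTopBuffer - pointsBottomBuffer;
        if (index < 0) index = 0;
        if (index > totalPoints) index = totalPoints;
        return index;
    }
}
```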

That worked really well - no noticeable lag :slight_smile: