CTRE Talon simulation

I’m trying to write unit tests for my code, and the value of WPI_TalonFX.getMotorOutputPercent() is always 0. In the test setUp function I call HAL.initialize(500, 1), and after set(1) the motor’s get() function returns 1, but getMotorOutputPercent() returns 0. Why does this happen, and how can I fix it?
Thanks in advance

Make sure you give the Talon time to apply and update its output. We simulate the latency that’s inherent in CAN frames, so after you set the applied output, you’ll have to wait up to the status frame period before the getMotorOutputPercent getter sees the change. You can do this by adding a Thread.sleep, or, if you’re using a WPI sim construct such as DifferentialDrivetrainSim, by calling its update function.
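To make that latency concrete, here’s a self-contained toy model. This is not the CTRE API — FakeTalon, framePeriodMs, and the injectable clock are invented for illustration — but it reproduces the behavior in the question: the value written by set() only becomes visible to the status getter after the simulated frame period elapses.

```java
import java.util.function.LongSupplier;

// Toy model (NOT the CTRE API) of status-frame latency: set() takes effect
// immediately on the "device", but the getter only sees the new value after
// the next simulated status frame arrives.
class FakeTalon {
    private final LongSupplier clockMs;  // injectable time source for tests
    private final long framePeriodMs;    // simulated status frame period
    private double applied = 0.0;        // what set() last requested
    private double reported = 0.0;       // what the last "status frame" carried
    private long lastFrameMs;

    FakeTalon(LongSupplier clockMs, long framePeriodMs) {
        this.clockMs = clockMs;
        this.framePeriodMs = framePeriodMs;
        this.lastFrameMs = clockMs.getAsLong();
    }

    void set(double output) { applied = output; }

    double getMotorOutputPercent() {
        long now = clockMs.getAsLong();
        if (now - lastFrameMs >= framePeriodMs) { // a frame has arrived since last read
            reported = applied;
            lastFrameMs = now;
        }
        return reported;
    }
}
```

With a 100ms frame period, a read immediately after set(1) returns 0; only once the clock has advanced past the frame period does the getter return 1 — which is exactly the symptom described above.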

We have an example of how to use simulation in general, available here: https://github.com/CrossTheRoadElec/Phoenix-Examples-Languages/blob/ccbc278d944dae78c73b342003e65138934a1112/Java%20Talon%20FX%20(Falcon%20500)/DifferentialDrive_Simulation/src/main/java/frc/robot/Robot.java#L154.

My team also implemented unit tests to verify the sequencing of our commands, which you may find useful to base your test structure on. You’re probably most interested in https://github.com/De-La-Salle-Robotics/Robot-2022/blob/main/src/test/java/BallHandlingTests.java

This is the code I’m using that isn’t working. I don’t know what the second parameter of the HAL.initialize method means, but this is how it appears in the various snippets I found. As you can see, there are multiple sleeps, but the last print still gives me 0.

    HAL.initialize(500, 1);
    motor = new WPI_TalonFX(0);
    motor.set(1);
    Thread.sleep(1000); // one of several sleeps; printing motor.getMotorOutputPercent() after still gives 0

I’m not sure what mode 1 means for HAL.initialize, but all the documentation I see uses mode 0.

What I think is happening in your snippet is that you’re waiting far too long, causing the device to disable. While it’s important to wait for the device to update its CAN frame, if you wait too long the device times out on its enable and consequently disables.
That’s why in my team’s implementation of the wait, we break the sleep into multiple short periods and call feedEnable between them, to ensure the device stays enabled.

As a quick workaround, I think bringing all your sleeps down to 20ms would be sufficient to start getting data, but you should really implement a wait that feeds the enable.
Something like the following:

    void sleep(long millis) {
        try {
            for (int i = 0; i < millis; i += 10) {
                Unmanaged.feedEnable(100); // keep actuators enabled during the wait
                Thread.sleep(10);
            }
        } catch (InterruptedException ex) { }
    }
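If you want to unit-test the wait helper itself, here’s a self-contained sketch of the same chunked-sleep pattern with feedEnable stubbed out as a counter (EnabledSleeper and the stub are invented names; real robot/sim code would call Unmanaged.feedEnable, which isn’t available off-robot):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Self-contained sketch of the chunked-sleep pattern: sleep in short slices
// and re-arm the enable between slices. feedEnable is a counting stub here;
// the real implementation would call Unmanaged.feedEnable(timeoutMs).
class EnabledSleeper {
    final AtomicInteger feeds = new AtomicInteger(); // counts keep-alive calls

    void feedEnable(int timeoutMs) { feeds.incrementAndGet(); } // stub

    void sleep(long millis) {
        try {
            for (long slept = 0; slept < millis; slept += 10) {
                feedEnable(100);  // keep the device enabled during the wait
                Thread.sleep(10); // short slice, well under the enable timeout
            }
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Sleeping 100ms this way feeds the enable ten times, so the device never sees a gap longer than one slice.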

Thanks! I found that 50ms is working more consistently for me.
1.) I thought the Unmanaged class was only for non-FRC use, and the feedEnable() docs state that

WPI_* object automatically enables actuators. Otherwise, call this to enable actuators.

So does this enabling time out? After how long, and can it become disabled?

2.) It’s working for me, but I’m asking anyway to make sure I’m not missing anything - would calling feedEnable() with a large value like 60000 ensure that I don’t have to worry about keeping the “robot” enabled?

3.) I don’t love the idea of having sleeps in the unit tests, because they can
a.) slow the tests down
b.) make them unreliable, since the required wait time can depend on the machine the tests run on
Is there any way to counter that? Maybe by manually calling the update function?

I’m not 100% sure how the CTRE delays are hooked in, but assuming they use the simulation clock and notifiers, you could pause and step the simulation clock rather than relying on the wall clock. See SimHooks (WPILib API 2022.4.1).

  1. If you are using the WPI_* versions of the classes, you don’t need to manually feed the enable while simulating, because we call feedEnable under the hood for you. That implementation, however, relies on the robot framework running, which doesn’t happen during Java unit tests. Therefore, you need to call feedEnable manually.

  2. Yes, up to a point. I believe we cap it to about a minute of enable.

  3. Unfortunately no due to the fidelity of our simulation. Since we simulate the latency of the CAN frames, you have to account for the fact that devices need time to update their status at the API layer. You can, however, speed up the frame periods, and we will properly simulate the increased frequency with decreased latencies, so that may be a passable workaround for your concerns.
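The frame-period workaround from item 3 would look something like this in Phoenix 5 (a config fragment, not a tested snippet; Status_1_General carries the applied motor output and Status_2_Feedback0 the selected sensor, and the 5ms periods here are examples, not recommendations):

```java
// Shorten status frame periods so the simulated CAN latency drops and the
// test sleeps can shrink accordingly. Periods are in milliseconds.
WPI_TalonFX motor = new WPI_TalonFX(0);
motor.setStatusFramePeriod(StatusFrameEnhanced.Status_1_General, 5);   // applied output
motor.setStatusFramePeriod(StatusFrameEnhanced.Status_2_Feedback0, 5); // selected sensor
```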

I’ve found that these limitations aren’t very limiting when compared to things such as compile times. Adding a 100ms delay between actuations is sufficient for most data, and leads to our testing time taking about 7 seconds with only 1.7 seconds of that occurring due to the sleeps. If it became an issue we’d go about speeding up the frame periods and reducing the sleeps.

We currently use the wall clock, though using the sim clock is something we’ve been investigating.

Great, thanks
Another question: when I give the set function the value 1, I get 1 from both get and getMotorOutputPercent. But when I pass 0.4, for example, get returns 0.4 while getMotorOutputPercent returns ~0.375. Is this a physics simulation? If so, what’s the delta, so I can take it into account? And will the selectedSensor functions always return 0?

What you’re seeing is the continuous neutral deadband, since you’re in percent output mode. If you configure the neutral deadband to something very small, like 0.01, you’ll get a much closer value.
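For reference, a continuous (rescaled) deadband maps everything inside the band to zero and rescales the rest so the output is continuous at the band edge; with the factory-default 4% deadband this gives (0.4 − 0.04) / (1 − 0.04) = 0.375, matching the observed value. A sketch (this is the standard continuous-deadband formula, not code copied from CTRE’s firmware):

```java
// Continuous (rescaled) neutral deadband: inputs inside the band map to 0,
// the rest is rescaled so the output is continuous at the band edge.
// With a 4% band: 0.4 -> (0.4 - 0.04) / (1 - 0.04) = 0.375.
class Deadband {
    static double apply(double input, double band) {
        if (Math.abs(input) < band) {
            return 0.0; // inside the neutral deadband
        }
        return Math.signum(input) * (Math.abs(input) - band) / (1.0 - band);
    }
}
```

So the “delta” isn’t a fixed offset — it shrinks to zero as the input approaches full output, which is why set(1) reads back as exactly 1.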

You have to simulate the sensors using the SimCollection API. Make sure you take the CAN Latency into account for reading as well.