We’re having an intermittent problem deploying code to the roboRIO via gradlew when the project includes PathWeaver-generated trajectories.
There are two symptoms:
About 3 out of every 4 deploy attempts fail with the error "Not enough memory resources are available to process this command (roborio-5687-FRC)"
About half the time we get past the first error, the code starts up fine but crashes when Pathfinder tries to load a path. On investigation, the paths appear to have been included in the deploy, but they do not actually show up in the /home/lvuser/deploy/paths folder on the rio.
Has anyone else encountered these problems? Any suggestions for how to troubleshoot the issue?
Note that if we strip the paths and Pathfinder out of the project, the “memory resource” error goes away.
How many paths do you have?
What time step are you using?
How many points in a path?
What is the total CSV file size?
By loading, do you mean CSV to Trajectory?
If you reduce the number of paths, does anything change?
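For reference, the "CSV to Trajectory" step the questions above are asking about amounts to parsing PathWeaver's generated CSV into trajectory segments. Pathfinder provides this itself via Pathfinder.readFromCSV(File); the minimal self-contained sketch below just shows what that load involves, and the eight-column layout (dt, x, y, position, velocity, acceleration, jerk, heading) is my assumption about the file format.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "CSV to Trajectory" step. In a real robot
// project you would call Pathfinder.readFromCSV(File) instead; the column
// layout assumed here (dt, x, y, position, velocity, acceleration, jerk,
// heading) is what Pathfinder-style CSVs typically contain.
public class PathCsvLoader {
    /** One trajectory segment, mirroring one CSV row. */
    public static class Segment {
        public final double dt, x, y, position, velocity, acceleration, jerk, heading;
        Segment(double[] f) {
            dt = f[0]; x = f[1]; y = f[2]; position = f[3];
            velocity = f[4]; acceleration = f[5]; jerk = f[6]; heading = f[7];
        }
    }

    /** Parses CSV text (with a header row) into a list of segments. */
    public static List<Segment> load(Reader in) throws IOException {
        List<Segment> segments = new ArrayList<>();
        BufferedReader reader = new BufferedReader(in);
        String line = reader.readLine(); // skip the header row
        while ((line = reader.readLine()) != null) {
            String[] cols = line.split(",");
            double[] f = new double[8];
            for (int i = 0; i < 8; i++) {
                f[i] = Double.parseDouble(cols[i].trim());
            }
            segments.add(new Segment(f));
        }
        return segments;
    }
}
```

Counting segments this way (rows times timestep) also answers the "how many points in a path" and "total CSV file size" questions above.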
I think I may have tried the first and done the second by trying to load at different points in the startup routine. If you find something that works better, I’m interested in trying it myself at a future team meeting.
At this point, I’m resigned to either trying a different motion-profiling library (TrajectoryLib) or waiting for Pathfinder 2 to come out.
Also, I’m hoping PathWeaver gets a fix for the issue with its heading being reversed, which is something we had to code around.
The error from GradleRIO usually means that your computer is struggling to send the paths to the roboRIO since it can’t buffer them. Check the size of your path files; they shouldn’t be too large. If that still doesn’t work, build with --stacktrace and submit a bug report to the GradleRIO repository so we can solve this in the proper place.
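Concretely, the two checks above look something like this (src/main/deploy/paths is the usual GradleRIO deploy layout; adjust for your project):

```shell
# How big are the generated path CSVs? They get sent to the roboRIO
# on every deploy, so large files are the first suspect.
du -h src/main/deploy/paths/*.csv

# Re-run the failing deploy with a full stack trace to attach to
# the GradleRIO bug report.
./gradlew deploy --stacktrace
```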
Update: reducing the paths didn’t help the deploy problem, but switching to USB eliminated it altogether. I suspect that issue is unique to my laptop.
Upgrading Pathfinder eliminated the crashes, although now there’s a short but perceptible delay at that stage of the auto, so my guess is that Pathfinder retries the load when it fails. It doesn’t impact our auto, so I consider that issue resolved.
USB deploys more slowly, which is likely why it works. Are you loading both competition paths? We had a total of 8, but depending on start position and whether we went for a second hatch, at most 3 would be needed in any match. Once the match options were chosen on Shuffleboard, the needed trajectories were loaded from files into Trajectory objects by a Task triggered by a button. They were made active sequentially during Sandstorm and no delays were seen.
During code testing, it would intermittently hang if the Task was run several times, but the cause was never uncovered. It may well have been this same problem. We did have a Java file.exists() check in the load code. With 20 ms timesteps, the files were at most 150 to 170 segments long.
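The "load only the selected paths in a background Task, guarded by file.exists()" approach described above can be sketched roughly like this. The class name, deploy directory, and storing raw lines are all my assumptions; a real robot project would hand the file to Pathfinder.readFromCSV instead.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of pre-loading only the paths a match needs, off the main
// robot loop. Names here (PathPreloader, the deploy dir argument) are
// hypothetical, not Pathfinder or WPILib API.
public class PathPreloader {
    // Daemon thread so a lingering load never keeps the JVM alive.
    private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "path-preloader");
        t.setDaemon(true);
        return t;
    });
    private final Map<String, Future<List<String>>> loads = new ConcurrentHashMap<>();

    /** Kicks off a background load for each selected path file. */
    public void preload(Path deployDir, String... pathNames) {
        for (String name : pathNames) {
            Path csv = deployDir.resolve(name + ".csv");
            loads.put(name, executor.submit(() -> {
                // Guard with exists() so a path missing from the deploy
                // folder fails loudly instead of hanging the auto routine.
                if (!Files.exists(csv)) {
                    throw new java.io.FileNotFoundException(csv.toString());
                }
                return Files.readAllLines(csv);
            }));
        }
    }

    /** Blocks until the named path has loaded, then returns its rows. */
    public List<String> get(String name) throws Exception {
        return loads.get(name).get();
    }
}
```

Making each trajectory active sequentially during Sandstorm then just means calling get() for the next path when the previous one finishes; by that point the background load has normally completed, so there is no visible delay.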