Error when deploying pathweaver-generated paths to roboRio

We’re having an intermittent problem deploying code to the roboRio via gradlew when the project includes Pathweaver-generated trajectories.
There are two symptoms:

  1. On about 3 out of every 4 attempts to deploy code, we get an error message saying "Not enough memory resources are available to process this command (roborio-5687-FRC)"
  2. About half the time we get past the first error, the code starts up fine but crashes when Pathfinder tries to load a path. On investigation, it appears that the paths were included in the build, but they do not actually show up in the /home/lvuser/deploy/paths folder on the RIO.

Has anyone else encountered these problems? Any suggestions for how to troubleshoot the issue?
Note that if we strip out the paths and pathfinder from the project, we don’t get the “memory resource” error.

Thanks in advance for any suggestions!

More information such as

How many paths do you have?
What time step are you using?
How many points in a path?
What is the total csv file size?
By loading, do you mean csv to Trajectory?
If you reduce the number of paths, does anything change?

would help.

FYI, a couple of us chased this kind of thing for a while during the build season and came up empty.

Thank you, I don’t know how I missed that thread in my search! I see a couple of possible solutions, assuming @Jaci hasn’t already found a way to catch the error internally:

  1. Explicitly run the garbage collector right before we load the paths. I'm not a fan of this, since System.gc() can take time and may not actually free any memory, but it's better than crashing!
  2. Try to prune unneeded flotsam and jetsam from our project to reduce memory usage. It doesn't seem like our code should be using very much memory, but this is probably worth exploring.
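The first option above can be sketched in a few lines. This is only illustrative (the class and method names are invented, and the commented-out load call stands in for Pathfinder's actual CSV reader); note that System.gc() is only a hint to the JVM, so the sketch also logs free heap so you can see whether the collection actually helped:

```java
// Hedged sketch of option 1: request a GC pass and log free heap right
// before the big trajectory allocation. Names are illustrative.
public class PreloadGc {
    // Requests a collection, then returns the free heap in bytes.
    public static long gcAndReportFreeHeap() {
        System.gc(); // only a hint; the JVM may not actually collect
        long free = Runtime.getRuntime().freeMemory();
        System.out.println("Free heap before path load: " + free + " bytes");
        return free;
    }

    public static void main(String[] args) {
        gcAndReportFreeHeap();
        // ...load the trajectories here, while heap headroom is largest...
        // Trajectory traj = PathfinderFRC.getTrajectory("example.left");
    }
}
```

Logging the free heap on both the working and crashing runs would also give you a concrete number to include in a bug report.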

I think I may have tried the first and done the second by trying to load at different points in the startup routine. If you find something that works better, I’m interested in trying it myself at a future team meeting.
At this point, I’m resolved to either try a different motion-profile library (TrajectoryLib) or wait for Pathfinder 2 to come out.

Also, I’m hoping Pathweaver gets a fix for the issue with its heading being reversed, which is something we had to code around.

Two things:

  • Pathfinder now has checked exceptions for Java. Update your vendordeps file using this one here and see if it comes up with something more readable. https://imjac.in/dev/maven/frc/7194a2d4-2860-4bcc-86c0-97879737d875?view

  • The error from GradleRIO usually means that your computer is struggling to send the paths to the RoboRIO since it can’t buffer them. Check the size of your path files; they shouldn’t be too large. If that still doesn’t work, build with --stacktrace and submit a bug report to the GradleRIO repository so we can solve this in the proper place.
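The size check suggested above can be done with a few lines of plain Java. This is a hypothetical helper, not part of GradleRIO or Pathfinder, and the `src/main/deploy/paths` location is an assumption based on the usual GradleRIO project layout:

```java
import java.io.File;

// Hypothetical helper: list each CSV under the deploy/paths directory
// with its size, and report the total payload that deploy must transfer.
public class PathSizeCheck {
    public static long totalCsvBytes(File dir) {
        long total = 0;
        File[] files = dir.listFiles();
        if (files == null) return 0; // directory missing or unreadable
        for (File f : files) {
            if (f.isFile() && f.getName().endsWith(".csv")) {
                System.out.println(f.getName() + ": " + f.length() + " bytes");
                total += f.length();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Assumed GradleRIO layout; adjust for your project.
        long total = totalCsvBytes(new File("src/main/deploy/paths"));
        System.out.println("Total CSV payload: " + total + " bytes");
    }
}
```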


Thanks @Jaci. I have already updated our project to 2019.2.19 so I’ll test that out this afternoon and switch to 2019.3.06-UNSTABLE if we still have problems.

I do still have a couple of test paths in the project so I’ll pull those out and see if having just the 2 competition paths helps. If not, I’ll grab a stacktrace and submit a bug report.

Thanks as always for your incredible projects and for your continuous support!

Update: reducing the paths didn’t help the deploy problem, but switching to USB eliminated it altogether. I suspect that issue is unique to my laptop.

Upgrading pathfinder eliminated the crashes, although now there’s a short but perceptible delay at that stage of the auto, so my guess is that pathfinder is retrying the load if it fails. It does not impact our auto, so I consider that issue resolved.
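If the delay really is a retry, the behavior guessed at above would look something like the sketch below. To be clear, this is not Pathfinder's actual code; the retry count, delay, and the `Loader` interface are all invented for illustration:

```java
// Illustrative retry loop matching the guessed behavior; nothing here is
// taken from Pathfinder's source.
public class RetryLoad {
    public interface Loader { boolean load() throws Exception; }

    // Tries the loader up to `attempts` times, pausing between failures.
    public static boolean loadWithRetries(Loader loader, int attempts, long delayMs) {
        for (int i = 0; i < attempts; i++) {
            try {
                if (loader.load()) return true; // success, no delay incurred
            } catch (Exception e) {
                System.err.println("Load attempt " + (i + 1) + " failed: " + e);
            }
            try {
                Thread.sleep(delayMs); // this pause is the perceptible delay
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("Loaded: " + loadWithRetries(() -> true, 3, 10));
    }
}
```

A retry like this would explain why the delay only shows up on runs where the first load attempt fails.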

Thanks again for the help!

The USB connection loads more slowly, which is likely why it works. Are you loading both competition paths? We had a total of 8, but depending on start position and whether we went for a second hatch, at most 3 would be needed in any match. Once the match options were chosen on Shuffleboard, the needed trajectories were loaded from files into Trajectories by a Task triggered by a button. They were made active sequentially during Sandstorm and no delays were seen.
During code testing, it would intermittently hang if the Task was run several times, but the cause was never uncovered. It may well have been this same problem. We did have a Java file.exists() check in the load code. With a 20 ms time step, the files were at most 150 to 170 segments long.
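The file.exists() guard mentioned above is worth copying, since the original poster's crash came from paths that never made it into /home/lvuser/deploy/paths. A minimal sketch (the class name and file path are illustrative, and the commented line stands in for the real CSV load):

```java
import java.io.File;

// Sketch of a guarded load: check that the CSV actually made it onto the
// RIO before handing it to the trajectory reader, so a missing path logs
// an error instead of crashing robot code.
public class GuardedPathLoad {
    public static boolean tryLoad(File csv) {
        if (!csv.exists()) {
            System.err.println("Path file missing, skipping: " + csv);
            return false;
        }
        // Trajectory traj = Pathfinder.readFromCSV(csv);  // real load here
        System.out.println("Loaded " + csv.getName());
        return true;
    }

    public static void main(String[] args) {
        // Illustrative on-RIO location from the thread above.
        tryLoad(new File("/home/lvuser/deploy/paths/left.pf1.csv"));
    }
}
```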