Our team plans to use vision to help us locate and align with the yellow totes. We plan to run our image processing on the roboRIO, since we should only need to analyze 6 frames during autonomous mode. To save time, we like to keep lots of information in the roboRIO’s RAM for debugging and frame-by-frame playback. I was running a port of the vision code a student had written for the cRIO, but I keep getting out-of-memory errors.
We’re using Java.
Here’s what I’ve put together so far:
The roboRIO has 256 MB of RAM, twice as much as the cRIO.
When the user code has been killed, background processes use 151 MB of the 256 MB. I don’t know why this is so much (I guess Linux isn’t as small as it once was), but it’s not unreasonable considering there’s networking, communications, a web server, FTP, SSH…
This leaves 105 MB for the user code. Running an empty iterative robot project brings this down to 93 MB free. Why the controller needs 12 MB of memory to call the disabledPeriodic method is beyond me, but still, not too unreasonable.
93 MB is still a decent amount of RAM. However, when I get down to 60 MB of free memory, the Java program is terminated.
This seems a little silly, since this code once ran on a cRIO without an out-of-memory error: the cRIO let me get down to 4 MB of free RAM. I’m likely doing something stupid, but I don’t know what. Any suggestions on how to get at the extra 60 MB of RAM?
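In case it helps anyone reproduce this, here’s a minimal sketch (the class and method names are mine, not WPILib’s) of how you could log heap statistics from robot code, e.g. once per disabledPeriodic() call. Note that Runtime only reports the Java heap, not the native memory the JRE itself uses:

```java
// Sketch: log JVM heap statistics so you can watch where the limit
// actually is. Runtime covers only the Java heap, not native memory.
public class HeapLogger {
    private static final long MB = 1024 * 1024;

    /** One-line summary of the JVM heap: ceiling, reserved, and unused. */
    public static String heapSummary() {
        Runtime rt = Runtime.getRuntime();
        return String.format("heap max=%dMB total=%dMB free=%dMB",
                rt.maxMemory() / MB,    // the -Xmx ceiling
                rt.totalMemory() / MB,  // reserved from the OS so far
                rt.freeMemory() / MB);  // unused portion of 'total'
    }

    public static void main(String[] args) {
        // On the robot, this line would live in disabledPeriodic().
        System.out.println(heapSummary());
    }
}
```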
Sounds like Java isn’t configured to use a very large heap by default. There should be a way to pass an -Xmx option to the JVM, but I’m not sure where exactly to put it in the current RoboRIO setup.
I ran into this today too, and I originally thought exactly what Jared thought, that the JVM wasn’t configured to let the Java program use very much memory, but the error message states that “there is not enough memory for the Java Runtime Environment to continue”.
I checked the error log, which confirmed that the system ran out of memory, not that the heap size was too small.
Unlike magnets, I got down to 46 MB free on the Driver Station before a crash occurred. Interestingly, after the crash, the Driver Station window became maximized, which let me see cool debugging information about the Driver Station.
The error log records the output of /proc/meminfo. Perhaps somebody with more Linux experience could explain what’s happening.
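For anyone staring at those /proc/meminfo dumps, here’s a rough Java sketch of parsing that format (the class name and the sample numbers below are made up; the field names are the real ones from meminfo). The key point is that “free” as applications see it also includes reclaimable cache and buffers, not just MemFree:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parse /proc/meminfo-style text into a map of kB values.
// The sample in main() is illustrative, not from a real roboRIO.
public class MemInfo {
    /** Parses lines like "MemFree:  61000 kB" into field -> kilobytes. */
    public static Map<String, Long> parse(String text) {
        Map<String, Long> fields = new HashMap<>();
        for (String line : text.split("\n")) {
            String[] parts = line.split(":");
            if (parts.length < 2) continue;
            String value = parts[1].trim().replace(" kB", "");
            fields.put(parts[0].trim(), Long.parseLong(value));
        }
        return fields;
    }

    public static void main(String[] args) {
        String sample = "MemTotal:  250000 kB\n"
                      + "MemFree:    61000 kB\n"
                      + "Cached:     40000 kB\n"
                      + "Buffers:     5000 kB\n";
        Map<String, Long> m = parse(sample);
        // Cache and buffers are reclaimed under pressure, so they count
        // as available to applications, not just MemFree.
        long usableKb = m.get("MemFree") + m.get("Cached") + m.get("Buffers");
        System.out.println("usable ~= " + usableKb / 1024 + " MB");
    }
}
```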
You should be able to add arguments by editing “WPILIB_DIR/java/current/ant/robotCommand”. I’m pretty sure this file is uploaded every time code is deployed. Also, keep in mind that it will be overwritten when WPILib is updated.
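For illustration only (the exact paths and jar name depend on your WPILib version, so check what your robotCommand actually contains before editing), the change would look something like this:

```
# Hypothetical robotCommand edit -- paths below are illustrative.
# Original line (roughly):
#   /usr/local/frc/JRE/bin/java -jar /home/lvuser/FRCUserProgram.jar
# With an explicit heap cap added:
/usr/local/frc/JRE/bin/java -Xmx48m -jar /home/lvuser/FRCUserProgram.jar
```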
I can confirm that increasing the heap with -Xmx does not allow the user to allocate more memory. I did confirm that setting the maximum heap to a smaller value causes a different error:
Unhandled exception: java.lang.OutOfMemoryError: Java heap space
    at org.usfirst.frc.team8330.robot.Robot.disabledPeriodic(Robot.java:45)
    at edu.wpi.first.wpilibj.IterativeRobot.startCompetition(IterativeRobot.java:102)
    at edu.wpi.first.wpilibj.RobotBase.main(RobotBase.java:234)
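To illustrate the difference between the two failure modes: a heap-space OutOfMemoryError like the one above is thrown inside the JVM and can be observed by Java code, while the native “not enough memory for the Java Runtime Environment” failure happens in the JRE itself and kills the whole process before any Java code can react. A small sketch (class name is mine) of the catchable kind:

```java
// Sketch: a heap OutOfMemoryError can be caught by user code; the
// native JRE out-of-memory failure cannot -- it kills the process.
public class OomDemo {
    /** Attempts an allocation bigger than the whole heap and reports it. */
    public static String tryHugeAllocation() {
        long maxBytes = Runtime.getRuntime().maxMemory();
        // Ask for an array of longs twice the size of the entire heap,
        // capped at the VM's array-length limit; either way it can't fit.
        long wantLongs = Math.min((long) Integer.MAX_VALUE, maxBytes / 8 * 2);
        try {
            long[] huge = new long[(int) wantLongs];
            return "allocated " + huge.length;  // not expected to happen
        } catch (OutOfMemoryError e) {
            // The JVM stays alive; user code observes the error normally.
            return "caught " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryHugeAllocation());
    }
}
```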
Note that it is difficult to get a single number for free memory on Linux. The kernel will grow the amount of memory it uses for file and network I/O while physical pages are available, but as more pages are requested from user space, the kernel backs off and releases that memory.
Also, in user space, Linux will let you allocate MUCH more memory than actually exists. This is called “lazy allocation” (or overcommit). The mappings between allocated virtual pages and physical pages are only created when you write to a page (page by page).
Note that you do not have to worry much about stack space for user programs in Linux. Your threads have large allocated virtual stack spaces (like 2 MB), and the physical pages for each stack are committed as needed. So, in effect, your stacks start at 4 KB (the size of one page) and grow to the size required (within reason).
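Relatedly, Java lets you suggest a per-thread stack size through the four-argument Thread constructor, which can matter if you spawn many threads on a small system. A quick sketch (class and method names are mine; note the JVM treats the value only as a hint and some JVMs ignore it entirely):

```java
// Sketch: request a smaller stack reservation for a thread that never
// recurses deeply. The stackSize argument is a hint, not a guarantee.
public class SmallStackThread {
    /** Runs the task on a thread with a reduced stack request. */
    public static String runOnSmallStack(Runnable task) throws InterruptedException {
        // Ask for a 256 KB stack instead of the default (often 512 KB-2 MB).
        Thread t = new Thread(null, task, "small-stack", 256 * 1024);
        t.start();
        t.join();
        return t.getName();
    }

    public static void main(String[] args) throws InterruptedException {
        String name = runOnSmallStack(
                () -> System.out.println("ran with a reduced stack request"));
        System.out.println("finished thread: " + name);
    }
}
```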