RoboRIO won't let us use more than 50 MB of RAM?

Our team plans to use vision to help us locate and align with the yellow totes. We plan to run our image processing on the roboRIO, since we should only need to analyze 6 frames during autonomous mode. To save time, we like to keep lots of information in the roboRIO’s RAM for debugging/frame-by-frame playback. I was running a port of the vision code a student had written for the cRIO, but I am constantly getting out-of-memory errors.

We’re using Java.
Here’s what I’ve put together so far:
The roboRIO has 256 MB of RAM, twice as much as the cRIO.

When the user code has been killed, background processes use 151 MB of the 256 MB. I don’t know why this is so much (I guess Linux isn’t as small as it once was), but it’s not unreasonable, considering there’s networking, communications, a web server, FTP, SSH…

This leaves 105 MB for the user code. Running an empty iterative robot project brings this down to 93 MB free. Why the controller needs 12 MB of memory to call the disabledPeriodic method is beyond me, but still, not too unreasonable.

93 MB is still a decent amount of RAM. However, when I get down to 60 MB of free memory, the Java program is terminated.

This seems a little silly, as the code once ran on a cRIO without an out-of-memory error, because the cRIO let me get down to 4 MB of free RAM. I’m likely doing something stupid, but I don’t know what. Any suggestions on how to get at the extra 60 MB of RAM?
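One way to see where the memory is going as the program runs is to log the JVM’s own heap statistics each loop. Here’s a minimal sketch (the `MemoryLogger` class name is mine; `snapshot()` could be called from `disabledPeriodic()`, and it uses only the standard `java.lang.Runtime` API):

```java
// Hypothetical helper for logging JVM heap usage from robot code.
// Runtime reports three numbers: freeMemory() (unused heap currently
// committed), totalMemory() (heap committed by the JVM so far), and
// maxMemory() (the ceiling, roughly -Xmx).
public class MemoryLogger {
    public static String snapshot() {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        return String.format("heap used=%d KB, committed=%d KB, max=%d KB",
                usedKb, rt.totalMemory() / 1024, rt.maxMemory() / 1024);
    }

    public static void main(String[] args) {
        // e.g. print once per disabledPeriodic() call on the robot
        System.out.println(snapshot());
    }
}
```

Note this only shows the Java heap; it won’t show the native memory the rest of the system (or the JVM itself) is using, which matters for the crash described below.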

Sounds like Java isn’t configured to use a very large heap by default. There should be a way to pass an -Xmx option to the JVM, but I’m not sure where exactly to put it in the current RoboRIO setup.

I ran into this today too, and I originally thought exactly what Jared thought: that the JVM wasn’t configured to let the Java program use very much memory. But the error message states that “there is not enough memory for the Java Runtime Environment to continue”.

I checked the error log, which confirmed that the system ran out of memory, not that the heap size was too small.

Unlike magnets, I got down to 46 MB free on the Driver Station before a crash occurred. Interestingly, after the crash, the Driver Station window became maximized, which let me see cool debugging information about the Driver Station.

The error log records the output of /proc/meminfo. Perhaps somebody with more Linux experience could explain what’s happening.

---------------  S Y S T E M  ---------------

OS:Linux
uname:Linux 3.2.35-rt52-2.0.0f0 #1 SMP PREEMPT RT Tue Jun 3 20:49:19 CDT 2014 armv7l
libc:glibc 2.17 NPTL 2.17 
rlimit: STACK 256k, CORE 2048k, NPROC 1852, NOFILE 4096, AS infinity
load average:2.34 0.80 0.28

/proc/meminfo:
MemTotal:         237120 kB
MemFree:           46784 kB
Buffers:               0 kB
Cached:            68316 kB
SwapCached:            0 kB
Active:            97544 kB
Inactive:          34200 kB
Active(anon):      85224 kB
Inactive(anon):     1768 kB
Active(file):      12320 kB
Inactive(file):    32432 kB
Unevictable:       28304 kB
Mlocked:           28356 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         91820 kB
Mapped:            46288 kB
Shmem:              2432 kB
Slab:              20768 kB
SReclaimable:       9904 kB
SUnreclaim:        10864 kB
KernelStack:        2120 kB
PageTables:         1372 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      187324 kB
Committed_AS:     173552 kB
VmallocTotal:     516096 kB
VmallocUsed:       19928 kB
VmallocChunk:     491452 kB


The problem isn’t a too-small heap size, but that the JVM isn’t able to use the remaining 46 MB of RAM.

You should be able to add arguments by editing “WPILIB_DIR/java/current/ant/robotCommand”. I’m pretty sure this file is uploaded every time code is deployed. Also, keep in mind that this file will be overwritten when WPILib is updated.
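I don’t have the exact contents of that file in front of me, but it holds the java launch line the roboRIO runs, so a JVM flag like -Xmx would go on that line. Something along these lines (the paths shown are illustrative and may differ on your install):

```shell
# Hypothetical robotCommand contents -- your java path and jar name may
# differ; the point is that -Xmx goes on the java invocation itself.
/usr/local/frc/JRE/bin/java -Xmx96m -jar /home/lvuser/FRCUserProgram.jar
```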

Jared said heap, not stack. Memory is allocated from the heap.

[edit] I see you corrected that…

I can confirm that increasing the heap with -Xmx does not allow the user code to allocate more memory. I did confirm that setting the maximum heap to a smaller value causes a different error:

Unhandled exception: java.lang.OutOfMemoryError: Java heap space
    at org.usfirst.frc.team8330.robot.Robot.disabledPeriodic(Robot.java:45)
    at edu.wpi.first.wpilibj.IterativeRobot.startCompetition(IterativeRobot.java:102)
    at edu.wpi.first.wpilibj.RobotBase.main(RobotBase.java:234)

Note that it is difficult to get a number for free memory with Linux. The kernel will increase the amount of memory it uses for file and network I/O when physical pages are available, but as more pages are requested from user space, the kernel will back off and release memory.

Also, in user space, Linux will allow one to allocate MUCH more memory than actually exists. This is called “lazy allocation”. The mappings between the allocated virtual pages and physical pages are only created when you write to a page (page by page).
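The JVM benefits from exactly this behavior: at startup it reserves the whole -Xmx range of virtual address space, but the kernel only backs it with physical pages as the heap is actually grown and written. You can see the reserved-versus-committed distinction from plain Java (class and method names here are mine, for illustration):

```java
// Sketch: totalMemory() is the heap the JVM has actually committed so
// far; maxMemory() is the ceiling it reserved (roughly -Xmx). On a
// fresh JVM the committed figure is typically well below the reserved one.
public class CommitVsReserve {
    static long committedKb() {
        return Runtime.getRuntime().totalMemory() / 1024;
    }

    static long reservedKb() {
        return Runtime.getRuntime().maxMemory() / 1024;
    }

    public static void main(String[] args) {
        System.out.println("committed KB: " + committedKb());
        System.out.println("reserved  KB: " + reservedKb());
    }
}
```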

HTH

Note that you do not have to worry much about stack space for user programs in Linux. Your threads have huge allocated virtual stack regions (like 2 MB), and the required physical pages for the stack are committed as needed. So, in effect, your stacks start at 4 KB (the size of one page) and grow to the size required (within reason).

HTH