And shared libraries might be counted in both real and virtual memory for every process that uses them, even though they’re occupying the same pages of read-only or copy-on-write memory. Memory-mapped files might be counted in their entirety in virtual memory even if only a small section of the file ever gets read. On Linux, the Out-Of-Memory killer might choose to kill a *different* process when your process tries to get real access to overallocated virtual memory. And the C library might not *really* release the memory you free(), because it’s faster to keep it around than to go back to the kernel when you next call malloc(). None of this is set in stone in a standard, so for all I know anything or everything I’ve just recalled might be years out of date…
I hate this stuff. At least heap allocation all ends up going through a few bottleneck APIs, so you can get a vague handle on memory usage optimization with an LD_PRELOAD to intercept and tally those calls.
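A minimal sketch of the kind of LD_PRELOAD shim meant here, assuming glibc on Linux; the file and counter names are made up, and it glosses over thread safety and other interposition corner cases:

    /* tally_alloc.c: count malloc()/free() calls in a target process.
     * Sketch only; not thread-safe.
     * Build: gcc -shared -fPIC -o tally_alloc.so tally_alloc.c -ldl
     * Use:   LD_PRELOAD=./tally_alloc.so ./your_program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    static size_t malloc_calls, free_calls;

    /* GCC/clang extension: runs when the shared object is unloaded at exit. */
    __attribute__((destructor))
    static void report(void) {
        fprintf(stderr, "malloc calls: %zu, free calls: %zu\n",
                malloc_calls, free_calls);
    }

    void *malloc(size_t size) {
        static void *(*real_malloc)(size_t);
        if (real_malloc == NULL)   /* look up the real allocator on first use */
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        malloc_calls++;
        return real_malloc(size);
    }

    void free(void *ptr) {
        static void (*real_free)(void *);
        if (real_free == NULL)
            real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
        free_calls++;
        real_free(ptr);
    }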
I was originally going to comment that it was disappointing that your blog post didn’t really answer the question posed in its title, especially compared to your usual detailed answers to these sorts of questions. But I guess that’s not at all your fault; the best answer we can get might really be: if nothing crashes later, then the allocation must have succeeded in some sense? I assume this is why some embedded systems people end up trying to just keep everything on the stack or in static globals.
Thanks for the comment. I do not answer my own question because it is, as you remarked, non-trivial, at least if you rely on purely standard C.
Georg Nikodym says:
In embedded:
– we rarely have a lot of memory, and not all of it has the same properties
– standard C doesn’t have a way to interrogate your heap (or stack) utilization…
– other platform-specific stuff (heck, you might not even have an allocator)
– debugging can be quite challenging
all contributing to the “odd” ways we write C code.
M. says:
Also, in embedded we aren’t sharing the memory with other processes, so it’s all ours from the get-go.
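To make the stack-only / static-globals style mentioned a few comments up concrete, here is a hedged little sketch; the buffer names and sizes are invented for illustration.

    /* The "no allocator" style: everything lives in static storage, sized at
     * compile time, so there is no allocation left to fail at run time.
     * Names and sizes are invented for illustration. */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_SENSORS   8
    #define SAMPLE_DEPTH 64

    /* One statically sized buffer per sensor; the linker, not malloc(),
     * decides whether this fits in RAM. */
    static uint16_t sample_buf[MAX_SENSORS][SAMPLE_DEPTH];
    static size_t   sample_count[MAX_SENSORS];

    /* Returns 0 on success, -1 if the fixed buffer is full. */
    int push_sample(size_t sensor, uint16_t value) {
        if (sensor >= MAX_SENSORS || sample_count[sensor] >= SAMPLE_DEPTH)
            return -1;   /* out of space is handled explicitly, not by an allocator */
        sample_buf[sensor][sample_count[sensor]++] = value;
        return 0;
    }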
Alex says:
This actually has more to do with how “overcommit” is set up on the system than with the difference between virtual and physical memory.
You may try the same on a Linux system with overcommit turned off. This is done with /proc/sys/vm/overcommit_memory, unless I misremember. Or try it on a Windows system – Windows does not allow overcommit (but still uses the same virtual/physical memory design).
The VM subsystem knows perfectly well that you’re allocating more than it can deliver, despite the memory being virtual. Overcommit was originally allowed to ease porting of some older software to Linux. Today it may make sense if a process uses a vast address space for I/O that is not backed by actual memory pages – but in general, turning overcommit off is a good thing for development. It makes finding memory-related issues a lot quicker.
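If it helps, a test program can check which mode is in effect by reading that /proc file directly; a minimal Linux-only sketch (0 = heuristic, 1 = always, 2 = never, per the kernel documentation):

    /* Linux-only sketch: print the current vm.overcommit_memory mode. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        int mode = -1;
        if (f != NULL) {
            if (fscanf(f, "%d", &mode) != 1)
                mode = -1;   /* unexpected file contents */
            fclose(f);
        }
        if (mode < 0) {
            fprintf(stderr, "could not read overcommit mode\n");
            return 1;
        }
        printf("vm.overcommit_memory = %d\n", mode);
        return 0;
    }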
Raivokas Ripuli says:
This is an operating system issue. Linux has three overcommit modes: heuristic (the default), always, and never.
https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
Interestingly, the documentation says “Obvious overcommits of address space are refused.” Does that mean we can observe malloc() failure in this mode?
Alexander Adler says:
I tried it on my box: if /proc/sys/vm/overcommit_memory is zero, the process exits cleanly with “error!”; if it is one, the process is terminated by the OOM killer after some time (my laptop does not have 1 TB 😉).
Normally, I keep /proc/sys/vm/overcommit_memory set to zero. As the other Alex said, it is convenient for development.
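For reference, a minimal sketch of the kind of test being described here (a reconstruction, not the post’s exact program): request far more memory than the machine has, check whether malloc() reports failure, then touch the pages so the kernel actually has to back them.

    /* Reconstruction of the experiment discussed above: ask for ~1 TB, check
     * whether malloc() fails up front, then touch every page so the kernel
     * must commit real memory. Under heuristic or strict overcommit the
     * malloc() call itself tends to fail; under "always" overcommit the
     * failure arrives later, via the OOM killer. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t size = (size_t)1 << 40;   /* roughly 1 TB of address space */
        char *p = malloc(size);
        if (p == NULL) {
            printf("error!\n");          /* allocation refused immediately */
            return EXIT_FAILURE;
        }
        printf("malloc() succeeded; touching pages...\n");
        memset(p, 1, size);              /* forces backing pages; may summon the OOM killer */
        printf("all pages touched\n");
        free(p);
        return EXIT_SUCCESS;
    }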
Jakub says:
For anyone interested in some more details about overcommit and the OOM killer, I recommend taking a look here.
Link got removed, pasting here again: https://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6