Re: 2GB-Memory-Linux: Strange Behavior

Linus Torvalds
Sat, 17 Jan 1998 10:49:33 -0800 (PST)

On Sat, 17 Jan 1998, Xintian Wu wrote:
> >
> > > (b) swap space seems to be totally forgot by the system.
> >
> > yes, since this is not physical memory shortage, but virtual memory
> > 'shortage'. We are hitting 32 bitness limits ...
> Are you saying it's impossible to run four 1.7GB processes since the
> memory sum exceeds 2GB?

No, he's saying that one single process is limited to a fraction of the
4GB address space, and that some of the ENOMEM errors are due to that.

One of the reasons for this is that the normal user space address space is
fairly spread out: the address space is divided into largish regions for
shared libraries, the stack and the actual executable and data areas. And
it is not done in the most space-compact fashion: under normal
circumstances you _want_ all the parts of the executable to be sparsely
laid out in memory so that wild pointers are more likely to be caught, and
it is more likely that you can allocate memory in some random area.

However, when you start using up more than 1GB or so of virtual address
space, such a sparse layout is no longer necessarily the best approach.
You might want to compile your binary statically to avoid having the
shared libraries etc taking up virtual memory space, for example.
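A sketch of the static-linking option (assumes gcc with a static libc installed; the file names are hypothetical). The dynamic binary maps shared libraries into its address space, the static one does not:

```shell
# Build a trivial program both ways and compare their library mappings.
cat > /tmp/hello.c <<'EOF'
int main(void) { return 0; }
EOF
gcc -o /tmp/hello-dyn /tmp/hello.c
gcc -static -o /tmp/hello-static /tmp/hello.c

# ldd lists shared-library mappings for the dynamic binary only;
# for the static one it reports "not a dynamic executable".
ldd /tmp/hello-dyn
ldd /tmp/hello-static || true
```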

And depending on what your memory usage pattern is you may want to disable
the normal malloc() behaviour of using mmap() for large allocations, and
instead always use the traditional "brk()" system call that will give you
a more compact virtual memory map (at the cost of some flexibility).

Essentially, the 2GB patches by Ingo limit your maximum single process
size to 2GB (rather than the default 3GB), and other normal allocation
issues then cut that 2GB down further. You can affect some of it a bit,
but 32 bits simply is not enough for certain large problems. That's why
most of the big players have already moved to 64 bits or are close to
doing so.