Memory intensive processes

William Burrow (
Tue, 10 Dec 1996 17:54:02 -0400 (AST)

On Tue, 10 Dec 1996, Richard B. Johnson wrote:

> Amongst other things, the VAX has a "modified page writer". It works like
> this. When a process is allocated memory, the initial memory comes from
> a pool of shared zero-filled pages. These pages don't actually get
> owned by a specific process until a process actually writes to one.
> At this time, the modified page is taken out of the pool and becomes
> a permanent part of the process's working set. However, modified
> pages stay on a list in memory until it becomes necessary to copy them to disk
> because real memory is getting scarce. At this time, the modified page
> writer copies the oldest and least-used pages to disk. The real memory,
> thus freed, is then zero-filled and put back into the pool. This process
> continues.

Hmm, I read about this scheme somewhere else, though not in relation to
VAX/VMS (and quite recently too). Has this been implemented in Linux?
Is somebody planning to implement it? Was it you who wrote me about this
before? Déjà vu on this.
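The quoted scheme is easy to model in userspace. Here is a toy sketch (purely illustrative, all names invented, nothing like actual kernel code): every virtual page starts out mapping one shared zero-filled page, and a private "modified" copy is made only on first write:

```python
# Toy model of demand-zero, copy-on-write page allocation, as in the
# quoted VAX/VMS scheme.  Illustrative userspace sketch only; the
# class and method names here are invented for this example.

PAGE_SIZE = 4096
ZERO_PAGE = bytes(PAGE_SIZE)          # the one shared page of zeroes

class ToyAddressSpace:
    def __init__(self, npages):
        # Every virtual page initially maps the shared zero page.
        self.pages = [ZERO_PAGE] * npages
        self.private = [False] * npages

    def read(self, pageno, offset):
        return self.pages[pageno][offset]

    def write(self, pageno, offset, value):
        if not self.private[pageno]:
            # First write: the page leaves the shared pool and becomes
            # a private (modified) copy owned by this address space.
            self.pages[pageno] = bytearray(ZERO_PAGE)
            self.private[pageno] = True
        self.pages[pageno][offset] = value

    def resident_private_pages(self):
        return sum(self.private)

vm = ToyAddressSpace(1024)            # 4 MB of virtual memory...
vm.write(7, 0, 0xFF)                  # ...but only the written page
print(vm.resident_private_pages())    # is privately owned
```

The real mechanism of course works at the MMU level (the first write faults, and the fault handler does the copy), but the bookkeeping idea is the same.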

Suppose you have a process with a very large set of matrices. Most of
these could be sparse (i.e., mostly zeroes). The IEEE representation of
floating-point zero is all zero bits. Therefore, the scheme you mention
could represent, with a single shared page, a large chunk of memory that
would otherwise be wasted holding zeroes. This alone could get some of
the large-process blues off of Linux's back.
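The all-zero-bits claim is easy to check from userspace (positive zero, that is; negative zero has the sign bit set). A quick sanity check, in Python just for illustration:

```python
import struct

# IEEE 754 double-precision +0.0 is eight zero bytes, so a page full
# of zero-valued doubles is byte-identical to a zero-filled page and
# could be backed by one shared zero page.
assert struct.pack('<d', 0.0) == b'\x00' * 8

page = struct.pack('<512d', *([0.0] * 512))   # 512 doubles = one 4 KB page
assert page == bytes(4096)
print('a page of +0.0 doubles is all zero bytes')
```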

> VAX/VMS has quotas on just about everything. The maximum working-set
> size, i.e., the maximum virtual pages that a process can own, is
> set via AUTHORIZE. Further, SYSGEN parameters also set sizes system-
> wide.
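Linux (and Unix generally) does have a much coarser analogue of VMS quotas in per-process resource limits. A sketch of capping a process's address space with setrlimit, in Python on a Unix system (RLIMIT_AS is a real limit; the sizes here are arbitrary):

```python
import resource

# Cap this process's virtual address space, then try to allocate a
# block as large as the whole cap -- which cannot possibly fit on top
# of the memory the process already uses.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
cap = 2 * 1024 ** 3                               # 2 GiB, arbitrary
if hard != resource.RLIM_INFINITY:
    cap = min(cap, hard)
resource.setrlimit(resource.RLIMIT_AS, (cap, hard))

hit_limit = False
try:
    big = bytearray(cap)                          # must exceed the cap
except MemoryError:
    hit_limit = True                              # the quota said no

resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # restore
print(hit_limit)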

I once heard a joke that VMS would log when a user sneezed. Most
Unixheads don't seem to like VMS all that much, though it had some
very good ideas.

> Modern operating systems generally throw hardware at the problem. If
> your processes need more RAM, buy more RAM, etc.

Somewhat obvious. Unfortunately, there are people contemplating systems
with gigabytes of RAM and a similarly sized swap space. I don't see that
working out well, since most of that swap is intended to hold data that
is actively being processed.

{excellent ideas elided for brevity}

> Now, what this does is help prevent a runaway task from taking all the
> system resources. If your task is a memory hog, it gets slowed down
> by this allocation strategy while other tasks end up using CPU time
> stolen from the memory hog.

So, that would be half an answer anyway. I don't see how the kernel can
do much about how a process decides to access memory. Gnuchess is
particularly bad on memory-restricted systems (that guy with the 8 meg
386 ought to give it a shot to see what this problem is about). The days
of assuming infinite, high-speed memory are also fading, as CPUs ramp to
faster clocks and depend more heavily on cache memory.
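The allocation strategy quoted above can at least be modelled as a toy: give each task a CPU share inversely weighted by its memory footprint, so a hog still runs but pays for its appetite in lost CPU time. The numbers and the 1/pages weighting here are invented for illustration; this is not how any real scheduler works:

```python
# Toy model of the quoted strategy: CPU share shrinks as memory
# footprint grows.  Task names, page counts, and the weighting are
# all invented for illustration.

def cpu_shares(tasks):
    # Weight each task by 1 / pages, then normalise to fractions.
    weights = {name: 1.0 / pages for name, pages in tasks.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

tasks = {'editor': 100, 'compiler': 400, 'matrix_hog': 4000}
shares = cpu_shares(tasks)
print({name: round(s, 3) for name, s in shares.items()})
```

The hog ends up with a few percent of the CPU while the small tasks share the rest, which is the "stolen CPU time" effect described in the quote.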

William Burrow  --  Fredericton Area Network, New Brunswick, Canada
Copyright 1996 William Burrow  
Canada's federal regulator says it may regulate content on the Internet to
provide for more Canadian content.   (Ottawa Citizen 15 Nov 96 D15)