Big malloc's.

John Carter
Mon, 10 Feb 1997 13:44:51 +0200 (SAT)

Greetings Kernellers, (Kernelites, Kerns,...)

We have had the "I can crash ... by malloc'ing something huge and
touching every page" thread on this list a couple of times.

Now I want to ask the next step...

Suppose I want to move through a _lot_ of data quickly. A really big
file (up to several times the size of memory+swap). Typically I malloc() a
BIG array, read() the file into that array and off I go. If the array
is too big, I start swapping like mad. Slow.

Thus it's better if I chop it into buffers. The question is, how big a
buffer? The smaller the buffer, the more read()s and the more
fiddling with buffer boundary conditions I do. Too big a buffer and I
start swapping.

My question is then...

How do I write a program that will ask the kernel how much free
physical memory there is available to me, without causing a lot of
swapping in the process?

Now there may be no free physical memory available, as buffers and idle
processes may be hogging everything. Thus I need something that says
"swap out all idle processes, flush and forget 90% of the file
buffers, then tell me how much physical memory I have to play around
with".

Could I memory map the file directly? Would it be faster or better? If
I were modifying the file, could I tell the kernel: read untouched
pages from this file, write modified pages to that file?

Is there a better way to do this class of operation?

John Carter EMail:
Telephone : 27-12-808-0374x194 Fax:- 27-12-808-0338

Founder of the Council for Unnatural Scientists.