Re: page tables

Ingo Molnar (mingo@pc5829.hil.siemens.co.at)
Thu, 25 Apr 1996 14:24:57 +0200 (MET DST)


On Thu, 25 Apr 1996 Hemment_Mark/HEMEL_IOPS_INTERNATIONAL-SALES@uniplex.co.uk wrote:

> From mingo/UNIX (mingo@pc5829.hil.siemens.co.at)
> Date: 24/04/96 15:09
>
> > trying to estimate the memory accounting overhead for Linux:
> >
> > is this formula right?:
> > N: number of i386 tasks (processes)
> > MEM: average size of a task, in pages
> > a page table entry is 32 bits = 4 bytes
> >
> > the size of the page tables (hardware page tables only):
> > N*MEM*4*FACTOR,
> >
> > where FACTOR is a near 1.0 multiplier, because of the root page table.
> > lets take 10 heavy tasks, each with 4MBytes of virtual memory:
> > This would be 10*1024*4 = 10 pages .... something is wrong here :) Where
> > are those big page tables you are talking about? [my fault most
> > probably]
>
> You chose a nice value in your calculations, a 4MB range is the max
> that can be addressed with just one level-2 page table!
>
> On an i386, _every_ process will have its own page directory (level-1
> page table), and a number of its own page tables (level-2 tables) -
> there are no 'middle' tables as on some systems.
> My Linux box certainly runs more than 10 processes, more like 20. So,

ok, i wasn't clear enough. What i really wanted to estimate was 10*4=40M
of total virtual memory, a realistic value. The size of the page tables
scales linearly with the total virtual memory -- apart from the fact you
mention, that each process has one root page table (the page directory)
plus "secondary" page tables (called simply page tables) holding the
"real" virtual memory entries, 32 bits each. That is the value i tried
to estimate.

The formula should read:

hardware page table pages = ceil(MEM*4/4096)*N + N

which gives 40 pages for 20 processes of 4M each

but since at least 2 hardware page tables are allocated per process, for
low memory processes this adds up to 80 pages (not counting VM_CLONE-ed
kernel thread processes). But with increasing virtual memory, this scales
with one page per 4M virtual memory => not significant.

Hmm, one interesting thing. The upper limit of hardware page tables can
be calculated too:

M is the size of Swap + main memory, in pages
N is the number of processes

tables = ceil(M*4/4096)*N + N

for a system with 48M main+swap, this is 13*N ... cool number! :))

this adds up to a >maximum< of 5MByte page tables for 100 processes. But
this really is a maximum ... you will never see it.

So i tend to believe that this issue is nonexistent for the i386 platform?

> the _minimum_ that is being used for page tables by user processes
> is 20*2 = 40 pages (160KB). Ok, that's not v. large, but some of those
> processes spend a lot of time asleep (hours maybe). Why should they be
> allowed to hold on to a valuable resource which they are not using?

160KByte ... unless there is an elegant way to do it without losing
cycles in the page fault handler ... no way! :) I would rather use my
shadow memory then :))))

> Linux's algorithm for deciding what to page out is _not_ 'traditional'
> UNI*X - but it works reasonably well. To get Level-2 tables paging
> would require a move towards the traditional scheme of: page reference
> bits, page ageing, ... which I believe to be well worthwhile.

hmm, only if a process gets swapped out totally. But i'm not sure if
it's worth it.

> BTW, while not a good idea to do, can Level-1 page tables be paged?
> Is it just a matter of setting the PDBR to point towards a 'bad'
> page directory?

yup you can.

-- mingo