From mingo/UNIX (mingo@pc5829.hil.siemens.co.at)
Date: 24/04/96 15:09
> trying to estimate the memory accounting overhead for Linux:
>
> is this formula right?:
> N: number of i386 tasks (processes)
> MEM: average size of a task, in pages
> a page table entry is 32 bits = 4 bytes
>
> the size of the page tables (hardware page tables only):
> N*MEM*4*FACTOR,
>
> where FACTOR is a near 1.0 multiplier, because of the root page table.
> let's take 10 heavy tasks, each with 4MBytes of virtual memory:
> This would be 10*1024*4 bytes = 40KB = 10 pages .... something is
> wrong here :) Where are those big page tables you are talking about?
> [my fault most probably]
You chose a nice value in your calculations: a 4MB range is the max
that can be addressed with just one level-2 page table!
On an i386, _every_ process will have its own page directory (level-1
page table) and a number of its own page tables (level-2 tables) -
there are no 'middle' tables as on some systems.
My Linux box certainly runs more than 10 processes, more like 20. So,
the _minimum_ that is being used for page tables by user processes
is 20*2 = 40 pages (160KB). OK, that's not very large, but some of
those processes spend a lot of time asleep (hours, maybe). Why should
they be allowed to hold on to a valuable resource which they are not
using?
Linux's algorithm for deciding what to page out is _not_ 'traditional'
UNI*X - but it works reasonably well. To get level-2 tables paging out
would require a move towards the traditional scheme of page reference
bits, page ageing, etc., which I believe to be well worthwhile.
BTW, while probably not a good idea, can level-1 page tables be paged?
Is it just a matter of setting the PDBR to point towards a 'bad'
page directory?
markhe