Yes, we also stumbled across that one. ;-) And the answer was forcing
processes marked as "unswappable" (e.g. the swapper and init) to have
their priority boosted to GFP_ATOMIC when requesting free pages. This
is still an optimistic solution, since the kernel can still run out of
free pages, but after increasing MIN_FREE_PAGES by 50% we never saw
a deadlock again. Just to be safe, we increased it a little further.
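
If it helps, here is the idea in a toy user-space model (only
GFP_KERNEL, GFP_ATOMIC and MIN_FREE_PAGES are real kernel names;
everything else is made up for illustration -- the real change of
course lives in the kernel's page allocator):

#include <stdio.h>

#define GFP_KERNEL     0    /* may sleep waiting for free pages  */
#define GFP_ATOMIC     1    /* must be served from the reserve   */
#define MIN_FREE_PAGES 24   /* the reserve; we raised ours >50%  */

static int nr_free_pages = MIN_FREE_PAGES;  /* simulate shortage */

static int get_free_page(int priority, int unswappable)
{
        /* The patch in one line: unswappable tasks (the swapper,
         * init) always allocate at atomic priority. */
        if (unswappable)
                priority = GFP_ATOMIC;

        if (priority == GFP_ATOMIC) {
                if (nr_free_pages > 0) {    /* dig into the reserve */
                        nr_free_pages--;
                        return 1;
                }
                return 0;                   /* truly out of memory */
        }

        /* Normal tasks may not touch the reserve: below this mark
         * they would have to sleep, which is where the deadlock
         * used to bite when the sleeper was the swapper itself. */
        if (nr_free_pages > MIN_FREE_PAGES) {
                nr_free_pages--;
                return 1;
        }
        return 0;
}

int main(void)
{
        printf("normal task gets a page: %d\n", get_free_page(GFP_KERNEL, 0));
        printf("swapper gets a page:     %d\n", get_free_page(GFP_KERNEL, 1));
        return 0;
}

Under pressure the normal task is refused (it would sleep) while the
swapper is served from the reserve and can keep paging things out.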
The tests it stood up to were *really* evil! ;-) We made a machine
with 16Mb of RAM fill its memory up to 40Mb and then called swapoff.
This causes severe memory shortage and process killing, and we still
saw no deadlocks.
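
For the record, the hog was nothing fancy -- something along these
lines (a sketch from memory, not the exact program we used):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        size_t chunk = 1 << 20;  /* grab memory 1Mb at a time */
        size_t total = 0;
        char *p;

        while ((p = malloc(chunk)) != NULL) {
                /* Touch every page so it is really allocated,
                 * forcing the machine deep into swap. */
                memset(p, 0xaa, chunk);
                total += chunk;
                fprintf(stderr, "allocated %luMb\n",
                        (unsigned long)(total >> 20));
        }
        return 0;
}

Run a couple of these until the box is well past physical memory,
then call swapoff and watch what happens.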
> :-). You would have to pagelock the whole process with its libraries.
> And libc is much too big these days.
But the absolute minimum of libc required to page something out is
quite small, and with the above mentioned patch it fits within
MIN_FREE_PAGES. Just be optimistic and tune it as necessary! ;-)
> Hmm - I think that it is ok to assume that the swapper cannot fail.
> My current problem now is the deadlock described above. If we solve
> that, Linux will have nice, working swapping over the network.
Hmm... Are you sure? What if the server gets unplugged while the kernel
is requesting a page from it? Won't it hang forever?
-- Jose Orlando Pereira * jop@di.uminho.pt * http://gsd.di.uminho.pt/~jop *