Re: [RFC] Using "page credits" as a solution for common thrashing scenarios
From: Eyal Lotem
Date: Tue Jun 08 2010 - 05:45:36 EST
Replying to a very old email :-)
On Wed, Nov 25, 2009 at 12:15 AM, Andi Kleen <andi@xxxxxxxxxxxxxx> wrote:
> Eyal Lotem <eyal.lotem@xxxxxxxxx> writes:
>
> Replying to an old email.
>
>> * I think it is wrong for the kernel to evict the 15 pages of the bash,
>> xterm, X server's working set, as an example, in order for a
>> misbehaving process to have 1000015 instead of 1000000 pages in its
>> working set. EVEN if that misbehaving process is accessing its working
>> set far more aggressively.
>
> One problem in practice tends to be that it's hard to reliably detect
> that a process is misbehaving. The 1000000 page process might be your
> critical database, while the 15 page process is something very
> unimportant.
Well, this solution doesn't really depend on any detection of
"misbehaving"; it just defines page importance more accurately. A
simple answer to the scenario you describe is to assign far more
"credits" to the database than to the 15-page process.
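
To illustrate the idea (this is only a hypothetical sketch of one
possible credit policy, not code from the RFC; the names and the
pages-per-credit rule are my assumptions): eviction could pick its
victim from whichever process holds the most resident pages per
assigned credit, so the shell's 15 pages survive a hog's pressure
unless the hog was explicitly granted proportionally more credits.

```python
# Hypothetical sketch of credit-based victim selection (assumption:
# victim = process with the highest resident_pages/credits ratio).
from dataclasses import dataclass


@dataclass
class Process:
    name: str
    credits: int         # importance granted by the administrator
    resident_pages: int  # pages currently in its working set


def pick_eviction_victim(processes):
    """Evict from the process holding the most pages per credit."""
    return max(processes, key=lambda p: p.resident_pages / p.credits)


procs = [
    Process("bash", credits=10, resident_pages=15),
    Process("hog", credits=10, resident_pages=1_000_000),
    Process("database", credits=1_000_000, resident_pages=1_000_000),
]

# The hog (100000 pages/credit) is chosen over bash (1.5) and the
# database (1.0), even though the database is just as large.
print(pick_eviction_victim(procs).name)  # -> hog
```

Under this policy the critical database from your example is protected
not because the kernel detects misbehavior, but because its large
credit grant makes its pages cheap to keep relative to its size.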
Eyal
>
> -Andi
>
> --
> ak@xxxxxxxxxxxxxxx -- Speaking for myself only.
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/