> > why are they hard? We currently pretty blindly walk a process's VM to find
> > (single) swappable pages, and kick the swapout casually. To 'be aware' of
> > proper 8k physical pages we have to do something like this:
>
> But we'd like to be able to get larger areas than just 8kB, right?
I'd do something like this to satisfy higher-order goals:
if (makes_situation_better(this_candidate_page, order))
	swap_out(this_candidate_page);
where 'makes_situation_better()' gives a higher score if the given page
fills 'the last spot' in an order-sized bitmap block of the buddy bitmap.
It also gives a higher score if the lower-order lists are not properly
populated yet (i.e. we would otherwise have little or no chance of
creating a new Nth-order free page), but a lower score if the swapout
would feed an already properly (or over-) populated list.
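
For concreteness, here is a rough user-space toy of what such a scoring
function could look like; the bitmap, the goal array and the weights are
all invented for illustration, they are not actual mm/ code:

	#include <stdio.h>

	#define MAX_ORDER	3
	#define NR_PAGES	(1 << MAX_ORDER)

	static int page_free[NR_PAGES];			/* 1 if the page is already free */
	static int free_goal[MAX_ORDER + 1] = { 2, 2, 1, 1 };
	static int free_now[MAX_ORDER + 1];		/* current freelist population */

	/* would freeing 'page' complete an aligned, order-sized block? */
	static int fills_last_spot(int page, int order)
	{
		int base = page & ~((1 << order) - 1);
		int i;

		for (i = base; i < base + (1 << order); i++)
			if (i != page && !page_free[i])
				return 0;
		return 1;
	}

	static int makes_situation_better(int page, int order)
	{
		int score = 0;

		if (fills_last_spot(page, order))
			score += 2;	/* fills the last spot in a buddy block */
		if (free_now[order] < free_goal[order])
			score += 1;	/* this order is still under its goal */
		else
			score -= 1;	/* don't over-populate a satisfied list */

		return score > 0;
	}

	int main(void)
	{
		/* pages 1..3 are free, so freeing page 0 completes an order-2
		   block; pretend the order-2 list already reached its goal. */
		page_free[1] = page_free[2] = page_free[3] = 1;
		free_now[2] = free_goal[2];

		printf("swap out page 0 for order 2? %d\n", makes_situation_better(0, 2));
		printf("swap out page 5 for order 2? %d\n", makes_situation_better(5, 2));
		return 0;
	}

In this toy the page that completes a buddy block still scores positive
even though the order-2 list is at its goal, while a random in-use page
does not.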
This way we get a kind of percolation model, where lower-order free pages
grow randomly and sooner or later coalesce into higher-order ones. It can
be shown that with this model the higher-order goal constrains the actual
lower-order goal: when we try to keep a higher-order freelist populated,
this will generate lower-order freelist elements no matter what we try,
and we can tune the balance by doing more iterations versus keeping more
lower-order pages around.
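
The 'more iterations vs. more spare pages' knob could then show up as a
simple scan budget. Again only a sketch on top of the toy above, with
made-up names and deliberately crude bookkeeping:

	/* keep swapping candidates out until the order's goal is met or
	   the scan budget runs out -- 'max_scan' is the tuning knob. */
	static void satisfy_order_goal(int order, int max_scan)
	{
		int page, scanned = 0;

		for (page = 0; page < NR_PAGES && scanned < max_scan; page++) {
			if (page_free[page])
				continue;
			scanned++;
			if (!makes_situation_better(page, order))
				continue;
			if (fills_last_spot(page, order))
				free_now[order]++;	/* a complete order-sized block appears */
			page_free[page] = 1;		/* stands in for swap_out() */
			free_now[0]++;			/* every swapout feeds order 0 as a side effect */
			if (free_now[order] >= free_goal[order])
				break;
		}
	}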
Lower-order allocations have to be well aware of the fact that a
higher-order goal has been missed, and should then generate more
lower-order pages instead of 'stealing away' existing lower-order pages.
This again is straightforward: every component has to know about the
whole goal vector and about the effect higher orders have on lower ones.
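
On the allocation side that awareness could be as small as a check
against the goal vector before dipping into a low-order freelist; once
more an invented sketch reusing the names above:

	/* before taking a low-order page, check the whole goal vector: if a
	   higher order is behind, generate fresh pages of this order instead
	   of stealing the ones the higher-order goal is being built from. */
	static void prepare_low_order_alloc(int order)
	{
		int i;

		for (i = MAX_ORDER; i > order; i--) {
			if (free_now[i] < free_goal[i]) {
				satisfy_order_goal(order, 16 /* scan budget */);
				break;
			}
		}
	}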
-- mingo