Re: #tj-percpu has been rebased

From: Rusty Russell
Date: Wed Feb 18 2009 - 02:11:35 EST


On Wednesday 18 February 2009 17:10:20 H. Peter Anvin wrote:
> Rusty Russell wrote:
> >>>
> >> num_possible_cpus() can be very large though, so in many cases the
> >> likelihood of finding that many pages approaches zero. Furthermore,
> >> num_possible_cpus() may be quite a bit larger than the actual number of
> >> CPUs in the system.
> >
> > Sure, so we end up at vmalloc. No worse, but simpler and much better if we
> > *can* do it.
>
> If the likelihood is near zero, then you're wasting opportunities to do
> it better. If we have compact per-cpu virtual areas then we can use
> large pages if we know we'll have large percpu areas.

You're right; we'd need that defrag wonderness people keep speculating about.

What finally convinced me is that the per-cpu chunks have to be at least the
size of the .data.percpu section (24k here). 7*num_possible_cpus() is even
worse.

Thanks,
Rusty.