Re: x86: Is there still value in having a special tlb flush IPI vector?

From: Andi Kleen
Date: Tue Jul 29 2008 - 10:58:30 EST


On Tue, Jul 29, 2008 at 07:46:32AM -0700, Jeremy Fitzhardinge wrote:
> Andi Kleen wrote:
> >>Yes, but it's only held briefly to put things onto the list. It doesn't
> >>get held over the whole IPI transaction as the old smp_call_function
> >>did, and the tlb flush code still does. RCU is used to manage the list
> >>walk and freeing, so there's no long-held locks there either.
> >>
> >
> >If it bounces regularly it will still hurt.
> >
>
> We could convert smp_call_function_mask to use a multi-vector scheme
> like tlb_64.c if that turns out to be an issue.

Converting it first would be fine, or rather in parallel, because
you would need to reuse the TLB vectors (there are not that many
free).

But waiting first for a report would seem wrong to me.

I can just see some poor performance person spending a lot of work to track
down such a regression. While there's a lot of development manpower available
for Linux, there's still no reason to waste it. I think if you want to change
such performance-critical paths, you should first make sure the new code is
roughly performance equivalent. And with the global lock I don't see that
at all.

-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/