Re: Question about x86/mm/gup.c's use of disabled interrupts

From: Avi Kivity
Date: Wed Mar 18 2009 - 19:04:58 EST


Jeremy Fitzhardinge wrote:
Avi Kivity wrote:
Hm, awkward if flush_tlb_others doesn't IPI...


How can it avoid flushing the tlb on cpu [01]? If it's gup_fast()ing a pte, it may as well load it into the tlb.

xen_flush_tlb_others uses a hypercall rather than an IPI, so none of the logic which depends on there being an IPI will work.
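
For reference, the native assumption is roughly this (a simplified sketch, not the actual arch/x86/mm/gup.c code; gup_walk_ptes() below stands in for the real gup_pud_range()/gup_pte_range() walk):

    static int gup_fast_sketch(unsigned long start, int nr_pages, int write,
                               struct page **pages)
    {
            unsigned long flags;
            int nr;

            /*
             * With interrupts off, this cpu cannot service the tlb flush
             * IPI that native flush_tlb_others() sends and waits for.  A
             * remote munmap() has to complete that flush before it frees
             * page table pages, so the ptes walked below cannot go away
             * under us.
             */
            local_irq_save(flags);
            nr = gup_walk_ptes(start, nr_pages, write, pages);
            local_irq_restore(flags);

            /*
             * xen_flush_tlb_others() flushes via a hypercall and never
             * waits on this cpu, so disabling interrupts no longer
             * excludes the remote free -- the problem being discussed.
             */
            return nr;
    }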

Right, of course, that's what we were talking about. I was thinking of optimizations that avoid the IPIs when an mm has never visited a cpu.


Simplest fix is to make gup_get_pte() a pvop, but that does seem like putting a red flag in front of an inner-loop hotspot, or something...
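
Purely as illustration of the shape (no such pv_mmu_ops member exists, and a real version would go through the usual PVOP_* patching machinery rather than a plain indirect call):

    struct pv_mmu_ops {
            /* ... existing members ... */
            pte_t (*gup_get_pte)(pte_t *ptep);      /* hypothetical hook */
    };

    static inline pte_t gup_get_pte(pte_t *ptep)
    {
            /*
             * One indirect call per pte in gup_pte_range()'s inner loop --
             * the "inner-loop hotspot" worry above.
             */
            return pv_mmu_ops.gup_get_pte(ptep);
    }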

The per-cpu tlb-flush exclusion flag might really be the way to go.

I don't see how it will work without changing Xen to look at the flag?

local_irq_disable() is used here to lock out a remote cpu; I don't see why deferring the flush helps.

Well, no, not deferring. Making xen_flush_tlb_others() spin waiting for "doing_gup" to clear on the target cpu. Or add an explicit notion of a "pte update barrier" rather than implicitly relying on the tlb IPI (which is extremely convenient when available...).

Pick up a percpu flag from all cpus and spin on each? Nasty.

You could use the irq-enabled flag; it's available, and it's what native spins on (but it also means I'll need to add one if I implement this).
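
To make that concrete, a rough sketch of the doing_gup variant (entirely hypothetical -- the flag, the spin, and the ordering below are invented for illustration, and the ordering is the hard part):

    DEFINE_PER_CPU(int, doing_gup);

    /* gup_fast() side: */
            local_irq_disable();
            __get_cpu_var(doing_gup) = 1;
            smp_mb();                       /* flag store before pte reads */
            /* ... walk and take references on the ptes ... */
            smp_mb();                       /* pte reads before flag clear */
            __get_cpu_var(doing_gup) = 0;
            local_irq_enable();

    /* xen_flush_tlb_others() side, before letting the pte-freeing path
     * continue past the flush (cpus is its cpumask argument): */
            int cpu;

            for_each_cpu(cpu, cpus)
                    while (per_cpu(doing_gup, cpu))
                            cpu_relax();

Using the irq-enabled flag instead would presumably just replace the per_cpu(doing_gup, cpu) read with a check of the target vcpu's interrupt-disable state, but either way the flusher ends up polling every target cpu.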

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
