Re: [PATCH 1/4, v2] x86: enlightenment for ticket spin locks - base implementation

From: Jan Beulich
Date: Wed Jun 30 2010 - 07:51:44 EST

>>> On 30.06.10 at 12:50, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
> On 06/30/2010 11:11 AM, Peter Zijlstra wrote:
>>>> Uhm, I'd much rather see a single alternative implementation, not a
>>>> per-hypervisor lock implementation.
>>> How would you imaging this to work? I can't see how the mechanism
>>> could be hypervisor agnostic. Just look at the Xen implementation
>>> (patch 2) - do you really see room for meaningful abstraction there?
>> I tried not to, it made my eyes bleed..
>> But from what I hear all virt people are suffering from spinlocks (and
>> fair spinlocks in particular), so I was thinking it'd be a good idea to
>> get all interested parties to collaborate on one. Fragmentation like
>> this hardly ever works out well.
> Yes. Now that I've looked at it a bit more closely I think these
> patches put way too much logic into the per-hypervisor part of the code.

I fail to see that: depending on the hypervisor's capabilities, the
two main functions could be much smaller (potentially there wouldn't
even be a need for the unlock hook in some cases), and hence I
continue to think that all the code in xen.c is indeed non-generic
(though I won't claim there couldn't be a second hypervisor where
the code would look almost identical).

>> Ah, right, after looking a bit more at patch 2 I see you indeed
>> implement a ticket like lock. Although why you need both a ticket and a
>> FIFO list is beyond me.
> That appears to be a mechanism to allow it to take interrupts while
> spinning on the lock, which is something that stock ticket locks don't
> allow. If that's a useful thing to do, it should happen in the generic
> ticketlock code rather than in the per-hypervisor backend (otherwise we
> end up with all kinds of subtle differences in lock behaviour depending
> on the exact environment, which is just going to be messy). Even if
> interrupts-while-spinning isn't useful on native hardware, it is going
> to be equally applicable to all virtual environments.

While we do interrupt re-enabling in our pv kernels, I intentionally
didn't do this here - it complicates the code quite a bit further, and
that didn't seem right for an initial submission.

The list really just is needed so as not to pointlessly tickle CPUs
that won't own the just-released lock next anyway (or would own it,
but meanwhile went for another lock where they also decided to go
into polling mode).
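The filtering described above can be sketched as follows. This is an assumed data layout, not the one in patch 2: each vCPU that drops into polling mode records which lock and ticket it is waiting on, and the unlocker scans that table to kick only the vCPU whose ticket is now being served:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CPUS 8

/* Hypothetical per-CPU record of what each vCPU is polling for. */
struct poll_entry {
    void    *lock;   /* lock being polled on, NULL if not polling */
    unsigned ticket; /* ticket this CPU is waiting to be served */
};

static struct poll_entry polling[MAX_CPUS];

/* Called by a vCPU just before it blocks in the hypervisor. */
static void poll_register(int cpu, void *lock, unsigned ticket)
{
    polling[cpu].lock = lock;
    polling[cpu].ticket = ticket;
}

/* Called when the vCPU stops polling (got the lock, or moved on). */
static void poll_unregister(int cpu)
{
    polling[cpu].lock = NULL;
}

/*
 * Called from unlock: return the one CPU worth kicking, or -1 if
 * nobody is polling for this ticket - e.g. the would-be next owner
 * meanwhile went after a different lock and re-registered there.
 */
static int find_cpu_to_kick(void *lock, unsigned next_ticket)
{
    for (int cpu = 0; cpu < MAX_CPUS; cpu++)
        if (polling[cpu].lock == lock &&
            polling[cpu].ticket == next_ticket)
            return cpu;
    return -1;
}
```

The scan makes the benefit explicit: without such a record, the unlocker would have to wake every CPU that ever blocked on the lock, only for all but one of them to spin briefly and block again.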

