Re: [PATCH 2/3] kvm hypervisor : Add hypercalls to support pv-ticketlock

From: Jeremy Fitzhardinge
Date: Thu Jan 20 2011 - 12:56:47 EST


On 01/20/2011 03:59 AM, Srivatsa Vaddagiri wrote:
>> At least in the Xen code, a current-owner field isn't very useful, because we
>> need the current owner to kick the *next* owner to life at release time,
>> which we can't do without some structure recording which ticket belongs
>> to which cpu.
> If we had a yield-to [1] sort of interface _and_ information on which vcpu
> owns a lock, then lock-spinners could yield-to the owning vcpu, while the
> unlocking vcpu could yield-to the next-vcpu-in-waiting.

Perhaps, but the core problem is how to find "next-vcpu-in-waiting"
efficiently. Once you have that info, there are a number of things you
can usefully do with it.
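
To make that concrete, here is a minimal sketch of one such structure,
roughly along the lines of what the Xen pv-spinlock code does: each
waiting vcpu publishes the (lock, ticket) pair it is spinning on in a
per-cpu slot, and the unlocker scans those slots to find the cpu that
holds the next ticket. Everything below is illustrative only; in
particular kick_cpu() stands in for whatever kick or directed-yield
hypercall ends up existing:

#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>

struct lock_waiting {
	arch_spinlock_t *lock;	/* lock this cpu spins on, NULL if none */
	u16 want;		/* ticket number it is waiting for */
};

static DEFINE_PER_CPU(struct lock_waiting, lock_waiting);

/* Waiter: publish what we're waiting for before spinning/blocking. */
static void note_waiting(arch_spinlock_t *lock, u16 ticket)
{
	struct lock_waiting *w = this_cpu_ptr(&lock_waiting);

	w->want = ticket;
	smp_wmb();		/* ticket visible before lock pointer */
	w->lock = lock;
}

/* Unlocker: find and kick whichever vcpu holds the next ticket. */
static void kick_next(arch_spinlock_t *lock, u16 next_ticket)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct lock_waiting *w = &per_cpu(lock_waiting, cpu);

		if (ACCESS_ONCE(w->lock) == lock && w->want == next_ticket)
{
			kick_cpu(cpu);	/* hypothetical hypercall */
			break;
		}
	}
}

The O(ncpus) scan at unlock time is the price of keeping the lock word
itself small, and the same lookup is what a yield-to from the unlocker
would need anyway.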

> The key here is not to
> sleep when waiting for locks (as implemented by the current patch series,
> which can put other VMs at an advantage by giving them more time than they
> are entitled to)

Why? If a VCPU can't make progress because it's waiting for some
resource, then why not schedule something else instead? Presumably when
the VCPU does become runnable, the scheduler will credit its previous
blocked state and let it run in preference to something else.

> and also to ensure that lock-owner as well as the next-in-line lock-owner
> are not unduly made to wait for cpu.
>
> Is there a way we can dynamically expand the size of lock only upon contention
> to include additional information like owning vcpu? Have the lock point to a
> per-cpu area upon contention where additional details can be stored perhaps?

As soon as you add a pointer to the lock, you're increasing its size.
If we had a pointer in there already, then all of this would be moot.

If auxiliary per-lock state is uncommon (say, only needed while a lock
is contended), then using a hash keyed on the lock address would be one
way to store it.
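
Roughly like this (again just a sketch: lock_aux, the chain locking,
and the allocate-on-contention policy are all made up for illustration):

#include <linux/hash.h>
#include <linux/spinlock.h>

#define LOCK_HASH_BITS	10

struct lock_aux {
	arch_spinlock_t *lock;	/* lock this entry describes */
	int owner_cpu;		/* vcpu currently holding it */
	struct lock_aux *next;	/* hash chain */
};

/*
 * One chain per bucket, protected by a per-bucket lock (omitted here).
 * Entries are installed on first contention and recycled once the lock
 * goes quiet, so uncontended locks cost nothing extra.
 */
static struct lock_aux *lock_hash[1 << LOCK_HASH_BITS];

static struct lock_aux *lock_aux_lookup(arch_spinlock_t *lock)
{
	struct lock_aux *aux;

	for (aux = lock_hash[hash_ptr(lock, LOCK_HASH_BITS)];
	     aux; aux = aux->next)
		if (aux->lock == lock)
			return aux;

	return NULL;	/* never contended: no auxiliary state */
}

The lock itself stays its current size; only contended locks pay for
the extra state.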

J