Re: [PATCH RFC V10 15/18] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

From: Raghavendra K T
Date: Wed Jul 17 2013 - 11:26:46 EST


On 07/17/2013 08:25 PM, Raghavendra K T wrote:
On 07/17/2013 08:14 PM, Gleb Natapov wrote:
On Wed, Jul 17, 2013 at 07:43:01PM +0530, Raghavendra K T wrote:
On 07/17/2013 06:55 PM, Gleb Natapov wrote:
On Wed, Jul 17, 2013 at 06:25:05PM +0530, Raghavendra K T wrote:
On 07/17/2013 06:15 PM, Gleb Natapov wrote:
On Wed, Jul 17, 2013 at 03:35:37PM +0530, Raghavendra K T wrote:
Instead of halt we started with a sleep hypercall in those
versions. We changed to halt() once Avi suggested reusing the
existing sleep mechanism.

If we use older hypercall with few changes like below:

kvm_pv_wait_for_kick_op(flags, vcpu, w->lock)
{
        /* a0 reserved for flags */
        if (!w->lock)
                return;
        DEFINE_WAIT
        ...
        end_wait
}
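For concreteness, a guest-side caller for such a hypercall might look
roughly like the sketch below. The hypercall number KVM_HC_WAIT_FOR_KICK,
the helper name and the argument layout are assumptions for illustration
only; kvm_hypercall2() is the standard guest helper from <asm/kvm_para.h>.

#define KVM_HC_WAIT_FOR_KICK   12      /* assumed, not an allocated number */

static void kvm_wait_for_kick(struct arch_spinlock *lock, __ticket_t want)
{
        /* Recheck: the ticket may already have been handed to us. */
        if (ACCESS_ONCE(lock->tickets.head) == want)
                return;

        /* a0 reserved for flags (none yet), a1 carries the lock address. */
        kvm_hypercall2(KVM_HC_WAIT_FOR_KICK, 0, (unsigned long)lock);
}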

How would this help if an NMI takes a lock in a critical section? The
thing that may happen is that lock_waiting->want may hold the NMI
lock's value, but lock_waiting->lock will point to the non-NMI lock.
Setting want and lock has to be atomic.
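(For reference, the per-cpu waiting state these stores touch looks
roughly like the sketch below; the field names come from the discussion
above, the rest is a simplified reconstruction.)

/* Roughly the per-cpu waiting state used by the pv-ticketlock slowpath. */
struct kvm_lock_waiting {
        struct arch_spinlock *lock;     /* lock this cpu is queued on, or NULL */
        __ticket_t want;                /* ticket value it is waiting for */
};
static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);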

True. So we end up with this interleaving:

non-NMI lock (a):
        w->lock = NULL;
        smp_wmb();
        w->want = want;

        NMI
        <---------------------
                NMI lock (b):
                w->lock = NULL;
                smp_wmb();
                w->want = want;
                smp_wmb();
                w->lock = lock;
        ---------------------->

        smp_wmb();
        w->lock = lock;

So how about fixing it like this?

again:
        w->lock = NULL;
        smp_wmb();
        w->want = want;
        smp_wmb();
        w->lock = lock;

        if (!lock || w->want != want)
                goto again;

An NMI can happen after the if() but before the halt, and the same
situation we are trying to prevent with IRQs will occur.

True, we cannot fix that. I was only trying to fix the inconsistency of
the (lock, want) pair. But an NMI could happen after the first OR
condition too.
/me thinks again

lock_spinning() can check that it is called in NMI context and bail
out.
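In code, that bail-out might look roughly like the sketch below,
assuming the series' kvm_lock_spinning() as the slowpath entry point;
in_nmi() is the standard predicate from <linux/hardirq.h>, and the rest
of the body is elided.

static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
        /* Entered from NMI context: keep spinning in the fastpath instead. */
        if (in_nmi())
                return;

        /* ... existing slowpath: record want/lock, then halt ... */
}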

Good point.
I think we can also check for irq context and bail out, so that in irq
context we continue spinning instead of taking the slowpath. No?

That will happen much more often, and irq context is not a problem anyway.


Yes, it is not a problem. But my idea was to avoid entering the slowpath
lock during irq processing. Do you think that is a good idea?

I'll now experiment with how often we enter the slowpath in irq context.
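(One hypothetical way to run that experiment: bump a counter whenever
the slowpath is entered from interrupt context. The counter and how it
is read back are assumptions for illustration, not part of the series.)

static atomic_t slowpath_from_irq = ATOMIC_INIT(0);

static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
        if (in_interrupt())             /* hardirq, softirq or NMI context */
                atomic_inc(&slowpath_from_irq); /* dump via printk/debugfs */

        /* ... existing slowpath ... */
}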


With a dbench 1.5x run on my 32 cpu / 16 core Sandy Bridge, I saw the
spinlock slowpath entered from irq context around 10 times.
