On Wed, Jul 17, 2013 at 12:12:35AM +0530, Raghavendra K T wrote:

I do not think it is very rare to get an interrupt between
local_irq_restore() and halt() under load, since any interrupt that
occurs between local_irq_save() and local_irq_restore() will be
delivered immediately after local_irq_restore(). Of course the chance
of no other random interrupt waking the lock waiter is very low, but
the waiter can sleep for much longer than needed, and this will be
noticeable in performance.
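
To make the window concrete, here is a minimal sketch of the pattern
under discussion (illustrative only, not the actual kvm_lock_spinning()
code; the waiter-registration step is elided):

	local_irq_save(flags);
	/* ... register this vcpu as a waiter for the lock ... */
	local_irq_restore(flags);	/* any irq that arrived while irqs
					 * were off, including the kick,
					 * is delivered right here */
	halt();				/* if the kick was consumed above,
					 * we sleep until some unrelated
					 * interrupt fires */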
Yes, I meant the entire thing. I did in fact turn on a WARN on
w->lock == NULL before halt() [though we can potentially have an irq
right after that], but did not hit it so far.
Depends on your workload of course. To hit that you not only need to
get an interrupt in there, but the interrupt handler needs to take the
contended spinlock.
BTW, can an NMI handler take spinlocks? If it can, what happens if an
NMI is delivered in a section protected by
local_irq_save()/local_irq_restore()?
In case NMIs and halts are causing problems, I had another idea (until
I saw PeterZ's reply): something similar to V2 of pvspinlock, posted
here: https://lkml.org/lkml/2011/10/23/211
Instead of halt we started with a sleep hypercall in those versions,
and changed to halt() once Avi suggested reusing the existing sleep.
If we use the older hypercall with a few changes like below:
	kvm_pv_wait_for_kick_op(flags, vcpu, w->lock)
	{
		// a0 reserved for flags
		if (!w->lock)
			return;
		DEFINE_WAIT
		...
		end_wait
	}
How would this help if an NMI takes a lock in a critical section? The
thing that may happen is that lock_waiting->want may have the NMI
lock's value, but lock_waiting->lock will point to the non-NMI lock.
Setting of want and lock has to be atomic.
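
A sketch of the inconsistency being described; the struct layout and
names here are assumptions modeled on the pvspinlock patches:

	/* per-cpu waiter state (field names assumed): */
	struct lock_waiting {
		struct arch_spinlock *lock;
		__ticket_t want;
	};

	/* vcpu registers itself as a waiter for lock A: */
	w->want = want_A;
	/* <-- NMI lands here; NMIs are not blocked by local_irq_save(),
	 * so the NMI handler can contend on lock B, re-enter the
	 * slowpath, and rewrite w->want and w->lock for lock B */
	w->lock = &lock_A;
	/* w->want may now hold B's ticket while w->lock points at
	 * lock A, which is why want and lock must be set atomically */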
kvm_pv_wait_for_kick_op() is incorrect in other ways. It will
spuriously return to the guest, since not all events that wake up the
vcpu thread correspond to work for the guest to do.
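
The usual way to tolerate such spurious returns is to re-check the wait
condition in a loop on the guest side; a hypothetical sketch, where
lock_is_ours_or_free() is an assumed helper, not something from the
patches:

	for (;;) {
		if (lock_is_ours_or_free(w))	/* kicked, or the lock
						 * became free */
			break;
		halt();		/* may wake for any interrupt, not just
				 * our kick, so loop and re-check */
	}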
The only question is how to retry immediately with lock_spinning in
the w->lock == NULL cases.
/me needs to experiment with that again, perhaps, to see if we get
some benefit.
Yes, this is not what I proposed.
So I am:
1. trying to artificially reproduce this.
2. I replaced the halt with the code below,

	if (arch_irqs_disabled())	/* tests the current irq state */
		halt();

and ran benchmarks. But this results in degradation because it means
we again go back and spin in the irq-enabled case.
True.
3. Now I am analyzing the performance overhead of safe_halt in the
irq-enabled case:

	if (arch_irqs_disabled())
		halt();
	else
		safe_halt();

Use of arch_irqs_disabled() is incorrect here.

Oops! Silly me.
If you are doing it before local_irq_restore(), it will always see
interrupts disabled, since you disabled them yourself;

This was not the case, but the latter is the one I missed.
if you do it after, then it is too late, since an interrupt can come
between local_irq_restore() and halt(), so enabling interrupts and
halting are still not atomic. You should drop local_irq_restore() and
do

	if (arch_irqs_disabled_flags(flags))
		halt();
	else
		safe_halt();

instead.
Yes, I tested with the below, as suggested:

	//local_irq_restore(flags);

	/* halt until it's our turn and kicked. */
	if (arch_irqs_disabled_flags(flags))
		halt();
	else
		safe_halt();

	//local_irq_save(flags);

I am seeing only a slight overhead, but I want to give it a full run
to check the performance.
Without compiling and checking myself, the difference between the
previous code and this one should be a couple of asm instructions. I
would be surprised if you could measure it, especially as the vcpu is
going to halt (and do an expensive vmexit in the process) anyway.