On Thu, Sep 27, 2012 at 4:50 AM, Avi Kivity <avi@xxxxxxxxxx> wrote:
On 09/25/2012 04:43 PM, Jiannan Ouyang wrote:
> I've actually implemented this preempted_bitmap idea.

Interesting, please share the code if you can.

> However, I'm doing this to expose this information to the guest, so the
> guest is able to know if the lock holder is preempted or not before
> spinning. Right now, I'm running experiments to show that this idea works.
>
> I'm wondering what you guys think of the relationship between the
> pv_ticketlock approach and the PLE handler approach. Are we going to
> adopt PLE instead of the pv ticketlock, and why?
Right now we're searching for the best solution. The tradeoffs are more
or less:

PLE:
- works for unmodified / non-Linux guests
- works for all types of spins (e.g. smp_call_function*())
- utilizes an existing hardware interface (the PAUSE instruction), so it is
  likely more robust than a software interface

PV:
- has more information, so it can perform better

Given these tradeoffs, if we can get PLE to work for moderate amounts of
overcommit then I'll prefer it (even if it slightly underperforms PV).
If we are unable to make it work well, then we'll have to add PV.
--
error compiling committee.c: too many arguments to function
FYI, here is the preempted_bitmap patch.

I deleted some unrelated code from the generated patch file, which seems
to have broken the patch format... I hope someone can suggest how to fix
that. However, it's pretty straightforward, just four things: declaration,
initialization, set, and clear. I think you guys can figure it out easily!

As Avi suggested, you could check for task state TASK_RUNNING in sched_out.
Signed-off-by: Jiannan Ouyang <ouyang@xxxxxxxxxxx>
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8613cbb..4fcb648 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -73,6 +73,16 @@ struct pv_info {
 	const char *name;
 };
 
+struct pv_sched_info {
+	unsigned long sched_bitmap;
+} __attribute__((__packed__));
+
 struct pv_init_ops {
 	/*
 	 * Patch may replace one of the defined code sequences with
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..2242d22 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
+struct pv_sched_info pv_sched_info = {
+	.sched_bitmap = (unsigned long)-1,
+};
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 44ee712..3eb277e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -494,6 +494,11 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	kvm->pv_sched_info.sched_bitmap = (unsigned long)-1;
+#endif
+
 	r = kvm_init_mmu_notifier(kvm);
 	if (r)
 		goto out_err;
@@ -2697,7 +2702,13 @@ struct kvm_vcpu *preempt_notifier_to_vcpu(struct preempt_notifier *pn)
 static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
 {
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
+	set_bit(vcpu->vcpu_id, &vcpu->kvm->pv_sched_info.sched_bitmap);
 	kvm_arch_vcpu_load(vcpu, cpu);
 }
 
@@ -2705,7 +2716,13 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 			  struct task_struct *next)
 {
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
+	clear_bit(vcpu->vcpu_id, &vcpu->kvm->pv_sched_info.sched_bitmap);
 	kvm_arch_vcpu_put(vcpu);
 }