Re: [RFC PATCH 0/4] Gang scheduling in CFS

From: Ingo Molnar
Date: Mon Feb 20 2012 - 03:14:35 EST



* Nikunj A Dadhania <nikunj@xxxxxxxxxxxxxxxxxx> wrote:

> > Here it would massively improve performance - without
> > regressing the scheduler code massively.
>
> I tried an experiment with flush_tlb_others_ipi(). It depends
> on Raghu's "kvm : Paravirt-spinlock support for KVM guests"
> series (https://lkml.org/lkml/2012/1/14/66), which adds a new
> hypercall for kicking another vcpu out of halt.
>
> Here are the results from non-PLE hardware, running the ebizzy
> workload inside the VMs. The table shows the ebizzy score in
> records/sec.
>
> 8-CPU Intel Xeon, HT disabled, 64-bit VMs (8 vcpus, 1G RAM)
>
> +--------+------------+------------+-------------+
> | | baseline | gang | pv_flush |
> +--------+------------+------------+-------------+
> | 2VM | 3979.50 | 8818.00 | 11002.50 |
> | 4VM | 1817.50 | 6236.50 | 6196.75 |
> | 8VM | 922.12 | 4043.00 | 4001.38 |
> +--------+------------+------------+-------------+

Very nice results!

Seems like the PV approach beats even the gang scheduling hack
by ~25% in the 2-VM case, and matches it at 4 and 8 VMs,
because it attacks the problem at its root, not just the
symptom.
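
Just to spell out what attacking the root means here, the guest
side boils down to something like the sketch below. This is
purely illustrative - the struct, field and helper names are
made up, not taken from Nikunj's patch - and it assumes the kick
hypercall plus a running/preempted bit shared between guest and
host:

  #include <linux/smp.h>
  #include <linux/cpumask.h>
  #include <asm/tlbflush.h>

  /* Illustrative guest/host shared per-vcpu state, kept up to date by the host. */
  struct pv_vcpu_state {
          int running;            /* vcpu currently on a physical cpu?   */
          int flush_pending;      /* TLB flush owed before it runs again */
  };

  static struct pv_vcpu_state pv_state[NR_CPUS];

  /* Kick hypercall from the pv-spinlock series; exact interface may differ. */
  extern void kvm_kick_cpu(int cpu);

  static void do_flush_tlb(void *unused)
  {
          local_flush_tlb();      /* runs on the vcpu we IPI'd */
  }

  static void pv_flush_tlb_others(const struct cpumask *cpumask)
  {
          int cpu;

          for_each_cpu(cpu, cpumask) {
                  struct pv_vcpu_state *st = &pv_state[cpu];

                  if (st->running) {
                          /* Target vcpu is running: the usual IPI + wait is fine. */
                          smp_call_function_single(cpu, do_flush_tlb, NULL, 1);
                  } else {
                          /*
                           * Target is preempted or halted: don't spin waiting
                           * for an ack it cannot send.  Record the flush (the
                           * target checks and clears the flag before touching
                           * guest state again - not shown) and kick it out of
                           * halt so it notices promptly.
                           */
                          xchg(&st->flush_pending, 1);
                          kvm_kick_cpu(cpu);
                  }
          }
  }

The point being that a flush aimed at a not-running vcpu turns
into a flag write plus a kick, instead of a busy-wait for an IPI
ack that cannot arrive until the target gets scheduled again.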

The patch is also an order of magnitude simpler. Gang
scheduling, R.I.P.

Thanks,

Ingo