Re: [PATCH v5 13/15] KVM: s390: add function process_gib_alert_list()

From: Pierre Morel
Date: Wed Jan 09 2019 - 09:50:08 EST


On 09/01/2019 14:10, Halil Pasic wrote:
On Wed, 9 Jan 2019 13:14:17 +0100
Pierre Morel <pmorel@xxxxxxxxxxxxx> wrote:

On 08/01/2019 16:21, Michael Mueller wrote:


On 08.01.19 13:59, Halil Pasic wrote:
On Wed, 19 Dec 2018 20:17:54 +0100
Michael Mueller <mimu@xxxxxxxxxxxxx> wrote:

This function processes the Gib Alert List (GAL). It is required
to run when either a gib alert interruption has been received or
a gisa that is in the alert list is cleared or dropped.

The GAL is built up by millicode, when the respective ISC bit is
set in the Interruption Alert Mask (IAM) and an interruption of
that class is observed.

Signed-off-by: Michael Mueller <mimu@xxxxxxxxxxxxx>
---
 arch/s390/kvm/interrupt.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)

diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 48a93f5e5333..03e7ba4f215a 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2941,6 +2941,146 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
 	return n;
 }
+static int __try_airqs_kick(struct kvm *kvm, u8 ipm)
+{
+	struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
+	struct kvm_vcpu *vcpu = NULL, *kick_vcpu[MAX_ISC + 1];
+	int online_vcpus = atomic_read(&kvm->online_vcpus);
+	u8 ioint_mask, isc_mask, kick_mask = 0x00;
+	int vcpu_id, kicked = 0;
+
+	/* Loop over vcpus in WAIT state. */
+	for (vcpu_id = find_first_bit(fi->idle_mask, online_vcpus);
+	     /* Until all pending ISCs have a vcpu open for airqs. */
+	     (~kick_mask & ipm) && vcpu_id < online_vcpus;
+	     vcpu_id = find_next_bit(fi->idle_mask, online_vcpus, vcpu_id)) {
+		vcpu = kvm_get_vcpu(kvm, vcpu_id);
+		if (psw_ioint_disabled(vcpu))
+			continue;
+		ioint_mask = (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+		for (isc_mask = 0x80; isc_mask; isc_mask >>= 1) {
+			/* ISC pending in IPM ? */
+			if (!(ipm & isc_mask))
+				continue;
+			/* vcpu for this ISC already found ? */
+			if (kick_mask & isc_mask)
+				continue;
+			/* vcpu open for airq of this ISC ? */
+			if (!(ioint_mask & isc_mask))
+				continue;
+			/* use this vcpu (for all ISCs in ioint_mask) */
+			kick_mask |= ioint_mask;
+			kick_vcpu[kicked++] = vcpu;

Assuming that the vcpu can/will take all ISCs it's currently open for
does not seem right, yet we rely on that assumption here, don't we?

why does it not seem right?


When an interrupt is delivered a psw-swap takes place. The new-psw
may fence IO interrupts. Thus for example if we have the vcpu open for
all ISCs and 0, 1 and 2 pending, we may end up only delivering 0, if the
psw-swap corresponding to delivering 0 closes the vcpu for IO
interrupts. After guest has control, we don't have control over the rest
of the story.

OK, I think I understand your concern: waking up a single waiting vCPU
per ISC is not enough. We must wake all vCPUs in the wait state that
have at least one matching ISC bit.

Regards,
Pierre


--
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany