Re: [PATCH v6 12/13] KVM: s390: add gib_alert_irq_handler()

From: Pierre Morel
Date: Wed Jan 30 2019 - 11:26:05 EST


On 29/01/2019 16:29, Michael Mueller wrote:


On 29.01.19 14:26, Halil Pasic wrote:
On Thu, 24 Jan 2019 13:59:38 +0100
Michael Mueller <mimu@xxxxxxxxxxxxx> wrote:

The patch implements a handler for GIB alert interruptions
on the host. Its task is to alert guests that interrupts are
pending for them.

A GIB alert interrupt statistic counter is added as well:

$ cat /proc/interrupts
           CPU0       CPU1
   ...
   GAL:      23         37   [I/O] GIB Alert
   ...

Signed-off-by: Michael Mueller <mimu@xxxxxxxxxxxxx>
[..]
+/**
+ * gisa_get_ipm_or_restore_iam - return IPM or restore GISA IAM
+ *
+ * @gi: gisa interrupt struct to work on
+ *
+ * Atomically restores the interruption alert mask if none of the
+ * relevant ISCs are pending and returns the IPM.

The word 'relevant' probably reflects some previous state. It does not
bother me too much.

"relevant" refers to the ISCs handled by the GAL mechanism, i.e. those
registered in kvm->arch.gisa_int.alert.mask.

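To make the "restore IAM only if nothing relevant is pending" semantics
concrete, here is a userspace sketch (not the kernel code; the 32-bit
word layout with the IPM in the top byte and the IAM in the bottom byte,
and the function name, are simplifications for illustration). The point
is the compare-and-swap loop: if an ISC becomes pending concurrently,
the CAS fails and we re-check instead of rearming the alert mask.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define IPM_SHIFT 24	/* pending-interrupt bits, one per ISC */
#define IAM_MASK  0xffu	/* interruption alert mask byte */

/*
 * Return the IPM if any ISC in alert_mask is pending; otherwise
 * atomically restore the IAM and return 0.  Retries on concurrent
 * updates of the word.
 */
static uint8_t get_ipm_or_restore_iam(_Atomic uint32_t *word,
				      uint8_t alert_mask, uint8_t iam)
{
	uint32_t old = atomic_load(word);
	uint32_t new;

	do {
		uint8_t ipm = (uint8_t)(old >> IPM_SHIFT);

		if (ipm & alert_mask)
			return ipm;	/* relevant ISC pending: do not rearm */
		new = (old & ~(uint32_t)IAM_MASK) | iam;
	} while (!atomic_compare_exchange_weak(word, &old, new));
	return 0;			/* IAM restored, nothing pending */
}
```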

[..]

+static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+{
+	int vcpu_id, online_vcpus = atomic_read(&kvm->online_vcpus);
+	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+	struct kvm_vcpu *vcpu;
+
+	for_each_set_bit(vcpu_id, kvm->arch.idle_mask, online_vcpus) {
+		vcpu = kvm_get_vcpu(kvm, vcpu_id);
+		if (psw_ioint_disabled(vcpu))
+			continue;
+		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+		if (deliverable_mask) {
+			/* lately kicked but not yet running */

How about /* was kicked but didn't run yet */?

what is the difference here...


+			if (test_and_set_bit(vcpu_id, gi->kicked_mask))
+				return;
+			kvm_s390_vcpu_wakeup(vcpu);
+			return;
+		}
+	}
+}
+
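For reference, the selection logic above can be modeled outside the
kernel like this (a sketch only: struct sim_vcpu, kick_single_vcpu and
the bool fields stand in for the idle_mask, psw_ioint_disabled(),
kicked_mask and CR6 bits of the real code). It wakes at most one idle,
I/O-enabled vcpu whose enabled ISCs overlap the deliverable mask, and
bails out if that vcpu was already kicked but has not run yet.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct sim_vcpu {
	bool idle;		/* would be a bit in kvm->arch.idle_mask */
	bool io_disabled;	/* would be psw_ioint_disabled() */
	bool kicked;		/* would be a bit in gi->kicked_mask */
	uint8_t isc_mask;	/* enabled ISCs, as in CR6 bits 0-7 */
};

/* returns the id of the woken vcpu, or -1 if nobody was woken */
static int kick_single_vcpu(struct sim_vcpu *vcpu, int n, uint8_t deliverable)
{
	for (int id = 0; id < n; id++) {
		if (!vcpu[id].idle)
			continue;
		if (vcpu[id].io_disabled)
			continue;
		/* like the kernel code, the mask is narrowed as we go */
		deliverable &= vcpu[id].isc_mask;
		if (deliverable) {
			if (vcpu[id].kicked)
				return -1;	/* kicked but didn't run yet */
			vcpu[id].kicked = true;
			return id;		/* wake exactly one vcpu */
		}
	}
	return -1;
}
```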

[..]

+static void process_gib_alert_list(void)
+{
+	struct kvm_s390_gisa_interrupt *gi;
+	struct kvm_s390_gisa *gisa;
+	struct kvm *kvm;
+	u32 final, origin = 0UL;
+
+	do {
+		/*
+		 * If the NONE_GISA_ADDR is still stored in the alert list
+		 * origin, we will leave the outer loop. No further GISA has
+		 * been added to the alert list by millicode while processing
+		 * the current alert list.
+		 */
+		final = (origin & NONE_GISA_ADDR);
+		/*
+		 * Cut off the alert list and store the NONE_GISA_ADDR in the
+		 * alert list origin to avoid further GAL interruptions.
+		 * A new alert list can be built up by millicode in parallel
+		 * for guests not in the yet cut-off alert list. When in the
+		 * final loop, store the NULL_GISA_ADDR instead. This will re-
+		 * enable GAL interruptions on the host again.
+		 */
+		origin = xchg(&gib->alert_list_origin,
+			      (!final) ? NONE_GISA_ADDR : NULL_GISA_ADDR);
+		/*
+		 * Loop through the just cut-off alert list and start the
+		 * gisa timers to kick idle vcpus to consume the pending
+		 * interruptions asap.
+		 */
+		while (origin & GISA_ADDR_MASK) {
+			gisa = (struct kvm_s390_gisa *)(u64)origin;
+			origin = gisa->next_alert;
+			gisa->next_alert = (u32)(u64)gisa;
+			kvm = container_of(gisa, struct sie_page2, gisa)->kvm;
+			gi = &kvm->arch.gisa_int;
+			if (hrtimer_active(&gi->timer))
+				hrtimer_cancel(&gi->timer);
+			hrtimer_start(&gi->timer, 0, HRTIMER_MODE_REL);
+		}
+	} while (!final);
+
+}
+
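The two-sentinel protocol of process_gib_alert_list() can be modeled in
userspace like this (a sketch under simplified assumptions: NULL_ADDR /
NONE_ADDR and the alert_node / raise_alert / process_alert_list names
are invented stand-ins for the GISA addresses and millicode behavior).
Parking NONE_ADDR in the origin suppresses further "alerts" while a
cut-off batch is walked; only after a pass that found the list still
empty is NULL_ADDR stored, re-enabling alerts.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define NULL_ADDR ((uintptr_t)0)	/* alerts enabled, list empty */
#define NONE_ADDR ((uintptr_t)1)	/* alerts suppressed */

struct alert_node {
	uintptr_t next;
	int kicks;	/* stands in for starting the gisa timer */
};

static _Atomic uintptr_t alert_origin = NULL_ADDR;

/* producer side: push onto the list, as millicode would */
static void raise_alert(struct alert_node *n)
{
	uintptr_t old = atomic_load(&alert_origin);

	do {
		n->next = old;	/* chain ends in whatever sentinel was there */
	} while (!atomic_compare_exchange_weak(&alert_origin, &old,
					       (uintptr_t)n));
}

/* consumer side: mirrors the structure of process_gib_alert_list() */
static void process_alert_list(void)
{
	uintptr_t origin = NULL_ADDR;
	int final;

	do {
		/* done once a retrieved list ended in the NONE sentinel */
		final = (origin == NONE_ADDR);
		/* cut off the list; park NONE, or NULL on the final pass */
		origin = atomic_exchange(&alert_origin,
					 final ? NULL_ADDR : NONE_ADDR);
		while (origin != NULL_ADDR && origin != NONE_ADDR) {
			struct alert_node *n = (struct alert_node *)origin;

			origin = n->next;
			n->next = (uintptr_t)n;	/* mark "not on a list" */
			n->kicks++;		/* kick the idle vcpus */
		}
	} while (!final);
}
```

Running both sides single-threaded shows each queued node is processed
exactly once and the origin ends up back at NULL_ADDR, i.e. alerts are
re-enabled.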
 void kvm_s390_gisa_clear(struct kvm *kvm)
 {
 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
 	if (!gi->origin)
 		return;
-	memset(gi->origin, 0, sizeof(struct kvm_s390_gisa));
-	gi->origin->next_alert = (u32)(u64)gi->origin;
+	gisa_clear_ipm(gi->origin);

This could be a separate patch. I would like little more explanation
to this.

I can break it out as I had before... ;)


 	VM_EVENT(kvm, 3, "gisa 0x%pK cleared", gi->origin);
 }
@@ -2940,13 +3078,25 @@ void kvm_s390_gisa_init(struct kvm *kvm)
 	gi->origin = &kvm->arch.sie_page2->gisa;
 	gi->alert.mask = 0;
 	spin_lock_init(&gi->alert.ref_lock);
-	kvm_s390_gisa_clear(kvm);
+	gi->expires = 50 * 1000; /* 50 usec */

I blindly trust your choice here ;)

You know I will increase it to 1 ms together with the change that I
proposed. (gisa_get_ipm_or_restore_iam() in kvm_s390_handle_wait()).

With this.
Reviewed-by: Pierre Morel <pmorel@xxxxxxxxxxxxx>



--
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany