Re: [PATCH] x86/apic/vector: Move pr_warn() outside of vector_lock

From: Waiman Long
Date: Sun Mar 28 2021 - 20:49:08 EST


On 3/28/21 6:04 PM, Thomas Gleixner wrote:
Waiman,

On Sun, Mar 28 2021 at 15:58, Waiman Long wrote:
It was found that the following circular locking dependency warning
could happen in some systems:

[ 218.097878] ======================================================
[ 218.097879] WARNING: possible circular locking dependency detected
[ 218.097880] 4.18.0-228.el8.x86_64+debug #1 Not tainted
[ 218.097881] ------------------------------------------------------
[ 218.097882] systemd/1 is trying to acquire lock:
[ 218.097883] ffffffff84c27920 (console_owner){-.-.}, at: console_unlock+0x3fb/0x9f0
[ 218.097886]
[ 218.097887] but task is already holding lock:
[ 218.097888] ffffffff84afca78 (vector_lock){-.-.}, at: x86_vector_activate+0xca/0xab0
[ 218.097891]
[ 218.097892] which lock already depends on the new lock.
:
[ 218.097966] other info that might help us debug this:
[ 218.097967]
[ 218.097967] Chain exists of:
[ 218.097968] console_owner --> &irq_desc_lock_class --> vector_lock
[ 218.097972]
[ 218.097973] Possible unsafe locking scenario:
[ 218.097973]
[ 218.097974]        CPU0                    CPU1
[ 218.097975]        ----                    ----
[ 218.097975]   lock(vector_lock);
[ 218.097977]                                lock(&irq_desc_lock_class);
[ 218.097980]                                lock(vector_lock);
[ 218.097981]   lock(console_owner);
[ 218.097983]
[ 218.097984] *** DEADLOCK ***
can you please post the full lockdep output?

Will do.


This lockdep warning was caused by the printing of the warning message:

[ 218.095152] irq 3: Affinity broken due to vector space exhaustion.

It looks like this warning message is relatively more common than
the other warnings in arch/x86/kernel/apic/vector.c. To avoid this
potential deadlock scenario, this patch moves all the pr_warn() calls
in the vector.c file outside of the vector_lock critical sections.
Definitely not.

-static int activate_reserved(struct irq_data *irqd)
+static int activate_reserved(struct irq_data *irqd, unsigned long flags,
+                             bool *unlocked)
 {
         struct apic_chip_data *apicd = apic_chip_data(irqd);
         int ret;
@@ -410,6 +411,8 @@ static int activate_reserved(struct irq_data *irqd)
          */
         if (!cpumask_subset(irq_data_get_effective_affinity_mask(irqd),
                             irq_data_get_affinity_mask(irqd))) {
+                raw_spin_unlock_irqrestore(&vector_lock, flags);
+                *unlocked = true;
What?

pr_warn("irq %u: Affinity broken due to vector space exhaustion.\n",
irqd->irq);
}
@@ -446,6 +449,7 @@ static int x86_vector_activate(struct irq_domain *dom, struct irq_data *irqd,
{
struct apic_chip_data *apicd = apic_chip_data(irqd);
unsigned long flags;
+ bool unlocked = false;
int ret = 0;
trace_vector_activate(irqd->irq, apicd->is_managed,
@@ -459,8 +463,9 @@ static int x86_vector_activate(struct irq_domain *dom, struct irq_data *irqd,
else if (apicd->is_managed)
ret = activate_managed(irqd);
else if (apicd->has_reserved)
- ret = activate_reserved(irqd);
- raw_spin_unlock_irqrestore(&vector_lock, flags);
+ ret = activate_reserved(irqd, flags, &unlocked);
+ if (!unlocked)
+ raw_spin_unlock_irqrestore(&vector_lock, flags);
Even moar what?

         return ret;
 }
This turns that code into complete unreadable gunk. No way.

I am sorry that this part of the patch is sloppy. I will revise it to make it better.
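One way to keep the lock/unlock pairing in x86_vector_activate() intact would be to only record the condition while vector_lock is held and emit the pr_warn() after the lock is dropped. A rough, untested sketch of that shape (the affinity_broken flag name is just for illustration, and the elided parts stay as they are today):

/*
 * Sketch only, not the posted patch: record the condition under
 * vector_lock, print from the caller after the lock is released.
 */
static int activate_reserved(struct irq_data *irqd, bool *affinity_broken)
{
        int ret = 0;

        /* ... vector assignment exactly as it is today ... */

        /*
         * vector_lock is held here, so only record the condition and
         * let the caller print after dropping the lock.
         */
        if (!cpumask_subset(irq_data_get_effective_affinity_mask(irqd),
                            irq_data_get_affinity_mask(irqd)))
                *affinity_broken = true;

        return ret;
}

static int x86_vector_activate(struct irq_domain *dom, struct irq_data *irqd,
                               bool reserve)
{
        struct apic_chip_data *apicd = apic_chip_data(irqd);
        bool affinity_broken = false;
        unsigned long flags;
        int ret = 0;

        /* ... tracepoint call unchanged ... */

        raw_spin_lock_irqsave(&vector_lock, flags);
        if (apicd->is_managed)                  /* earlier branch elided */
                ret = activate_managed(irqd);
        else if (apicd->has_reserved)
                ret = activate_reserved(irqd, &affinity_broken);
        raw_spin_unlock_irqrestore(&vector_lock, flags);

        /* Warn outside of vector_lock, away from the console_owner chain. */
        if (affinity_broken)
                pr_warn("irq %u: Affinity broken due to vector space exhaustion.\n",
                        irqd->irq);
        return ret;
}

The critical section would then only set a flag, the unlock stays in one place, and the warning text itself is unchanged.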

Cheers,
Longman