Re: Schedule affinity_notify work while migrating IRQs during hot plug
From: Sodagudi Prasad
Date: Fri Mar 17 2017 - 06:52:58 EST
On 2017-03-13 13:19, Thomas Gleixner wrote:
On Mon, 13 Mar 2017, Sodagudi Prasad wrote:
On 2017-02-27 09:21, Thomas Gleixner wrote:
> On Mon, 27 Feb 2017, Sodagudi Prasad wrote:
> > So I am thinking that, adding following sched_work() would notify clients.
>
> And break the world and some more.
>
> > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> > index 6b66959..5e4766b 100644
> > --- a/kernel/irq/manage.c
> > +++ b/kernel/irq/manage.c
> > @@ -207,6 +207,7 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
> > case IRQ_SET_MASK_OK_DONE:
> > cpumask_copy(desc->irq_common_data.affinity, mask);
> > case IRQ_SET_MASK_OK_NOCOPY:
> > + schedule_work(&desc->affinity_notify->work);
> > irq_set_thread_affinity(desc);
> > ret = 0;
>
> You cannot do that unconditionally and just slap that schedule_work() call
> into the code. Aside of that schedule_work() would be invoked twice for all
> calls which come via irq_set_affinity_locked() ....
Hi Tglx,
Yes, I agree with you: schedule_work() gets invoked twice with the
previous change.
How about calling irq_set_affinity_locked() instead of
irq_do_set_affinity()?
Is this a quiz?
Can you actually see the difference between these functions? There is a
damned good reason WHY this calls irq_do_set_affinity().
Another option is to add an argument to irq_do_set_affinity() and queue
the notification work when that new parameter is set. I have attached a
patch for the same.
I tested this change on an arm64 platform and observed that client
drivers are getting notified during cpu hotplug.
Thanks,
tglx
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project

From 54b8d5164126fbdf14d1a9586342b972a6eb5537 Mon Sep 17 00:00:00 2001
From: Prasad Sodagudi <psodagud@xxxxxxxxxxxxxx>
Date: Thu, 16 Mar 2017 23:44:44 -0700
Subject: [PATCH] genirq: Notify clients whenever there is change in affinity
During cpu hotplug, irqs are migrated away from the hotplugged
core, but client drivers are not notified of the change. So add a
parameter to irq_do_set_affinity() to check for and notify client
drivers during cpu hotplug.
Signed-off-by: Prasad Sodagudi <psodagud@xxxxxxxxxxxxxx>
---
kernel/irq/cpuhotplug.c | 2 +-
kernel/irq/internals.h | 2 +-
kernel/irq/manage.c | 9 ++++++---
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 011f8c4..e293d9b 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -38,7 +38,7 @@ static bool migrate_one_irq(struct irq_desc *desc)
if (!c->irq_set_affinity) {
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
} else {
- int r = irq_do_set_affinity(d, affinity, false);
+ int r = irq_do_set_affinity(d, affinity, false, true);
if (r)
pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
d->irq, r);
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index bc226e7..6abde48 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -114,7 +114,7 @@ static inline void unregister_handler_proc(unsigned int irq,
extern void irq_set_thread_affinity(struct irq_desc *desc);
extern int irq_do_set_affinity(struct irq_data *data,
- const struct cpumask *dest, bool force);
+ const struct cpumask *dest, bool force, bool notify);
/* Inline functions for support of irq chips on slow busses */
static inline void chip_bus_lock(struct irq_desc *desc)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index a4afe5c..aef8a96 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -197,7 +197,7 @@ static inline bool irq_move_pending(struct irq_data *data)
#endif
int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
- bool force)
+ bool force, bool notify)
{
struct irq_desc *desc = irq_data_to_desc(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
@@ -209,6 +209,9 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
case IRQ_SET_MASK_OK_DONE:
cpumask_copy(desc->irq_common_data.affinity, mask);
case IRQ_SET_MASK_OK_NOCOPY:
+		if (notify && desc->affinity_notify)
+			schedule_work(&desc->affinity_notify->work);
+
irq_set_thread_affinity(desc);
ret = 0;
}
@@ -227,7 +230,7 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
return -EINVAL;
if (irq_can_move_pcntxt(data)) {
- ret = irq_do_set_affinity(data, mask, force);
+ ret = irq_do_set_affinity(data, mask, force, false);
} else {
irqd_set_move_pending(data);
irq_copy_pending(desc, mask);
@@ -375,7 +378,7 @@ static int setup_affinity(struct irq_desc *desc, struct cpumask *mask)
if (cpumask_intersects(mask, nodemask))
cpumask_and(mask, mask, nodemask);
}
- irq_do_set_affinity(&desc->irq_data, mask, false);
+ irq_do_set_affinity(&desc->irq_data, mask, false, true);
return 0;
}
#else