[PATCH RFC 01/12] irq_work: Add support to detect if work is pending
From: Joel Fernandes (Google)
Date: Sat Aug 15 2020 - 17:57:35 EST
When an unsafe region is entered on a hyperthread (HT), an IPI needs to
be sent to its siblings to ensure they enter the kernel.
Following are the reasons why we would like to use irq_work to implement
forcing the sibling into kernel mode:
1. The existing smp_call infrastructure cannot be used easily since we
could end up waiting on the CSD lock if a previously issued smp_call has
not yet been serviced.
2. I'd like to use generic code, such that there is no need to add an
arch-specific IPI.
3. IRQ work already has support, via the IRQ_WORK_PENDING bit, to detect
that previously queued work has not yet been executed.
4. We need to keep the destination of the IPI from sending yet more IPIs
merely because that IPI itself caused an entry into an unsafe region.
Support for 4. requires us to be able to detect that an irq_work is
already pending.
This commit therefore adds a way for irq_work users to know if a
previous per-HT irq_work is pending. If it is, we need not send new
IPIs.
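
As a rough illustration of the intended use (just a sketch; the names
sibling_kick, sibling_kick_fn and kick_sibling_into_kernel are made up
here and are not part of this series), a caller could skip re-queueing
like this:

#include <linux/irq_work.h>
#include <linux/percpu.h>

static void sibling_kick_fn(struct irq_work *work)
{
	/* Running this handler is itself the forced entry into the kernel. */
}

static DEFINE_PER_CPU(struct irq_work, sibling_kick) = {
	.func = sibling_kick_fn,
};

/* Called when this HT enters an unsafe region and must kick a sibling. */
static void kick_sibling_into_kernel(int cpu)
{
	struct irq_work *work = per_cpu_ptr(&sibling_kick, cpu);

	/* Still-pending work means the sibling will be interrupted anyway. */
	if (irq_work_pending(work))
		return;

	irq_work_queue_on(work, cpu);
}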
Memory ordering:
I was trying to handle the MP-pattern below. Consider the flag to be the
pending bit. P0() is the IRQ work handler. P1() is the code calling
irq_work_pending(). P0() already has an implicit memory barrier as part
of the atomic_fetch_andnot() done before calling work->func(). For P1(),
this patch uses atomic_read_acquire() in irq_work_pending(), since a
plain atomic_read() would not provide the required ordering.
P0()
{
	WRITE_ONCE(buf, 1);
	WRITE_ONCE(flag, 1);
}

P1()
{
	int r1;
	int r2 = 0;

	r1 = READ_ONCE(flag);
	if (r1)
		r2 = READ_ONCE(buf);
}
Cc: paulmck@xxxxxxxxxx
Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
---
 include/linux/irq_work.h |  1 +
 kernel/irq_work.c        | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 30823780c192..b26466f95d04 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -42,6 +42,7 @@ bool irq_work_queue_on(struct irq_work *work, int cpu);
void irq_work_tick(void);
void irq_work_sync(struct irq_work *work);
+bool irq_work_pending(struct irq_work *work);
#ifdef CONFIG_IRQ_WORK
#include <asm/irq_work.h>
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index eca83965b631..2d206d511aa0 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -24,6 +24,17 @@
static DEFINE_PER_CPU(struct llist_head, raised_list);
static DEFINE_PER_CPU(struct llist_head, lazy_list);
+bool irq_work_pending(struct irq_work *work)
+{
+ /*
+ * Provide ordering to callers who may read other stuff
+ * after the atomic read (MP-pattern).
+ */
+ bool ret = atomic_read_acquire(&work->flags) & IRQ_WORK_PENDING;
+
+ return ret;
+}
+
/*
* Claim the entry so that no one else will poke at it.
*/
--
2.28.0.220.ged08abb693-goog