Re: [PATCH -next 0/2] fs/epoll: loosen irq safety when possible
From: Davidlohr Bueso
Date: Sat Jul 21 2018 - 14:31:38 EST
On Sat, 21 Jul 2018, Peter Zijlstra wrote:
>On Sat, Jul 21, 2018 at 10:21:20AM -0700, Davidlohr Bueso wrote:
>> On Fri, 20 Jul 2018, Andrew Morton wrote:
>> > We could open-code it locally. Add a couple of
>> > WARN_ON_ONCE(irqs_disabled())? That might need re-benchmarking with
>> > Xen but surely just reading the thing isn't too expensive?
>>
>> We could also pass on the responsibility to lockdep and just use
>> lockdep_assert_irqs_enabled(). But I guess that would be less effective
>> than just open coding it in epoll without lockdep -- note that over 80
>> places in the kernel do this.
>
>The lockdep thing is relatively recent. I think someone proposed to go
>replace a bunch of the open-coded ones at some point.
For the open-coded checks, I'm seeing a small (1-2% ish) cost on bare
metal for workload 1). I don't see (via code inspection) any additional
overhead in Xen either. While negligible in the overall scheme of things,
I do like the idea of lockdep handling it nonetheless.

I can add the open-coded version if people really feel that it would catch
more bugs (no lockdep users out there in production afaik :) in the long
term; but if lockdep is where things are headed...
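
For reference, a minimal sketch of the two variants being discussed; this
is only an illustration with made-up helper names, not epoll code and not
part of the patch below:

#include <linux/bug.h>
#include <linux/irqflags.h>
#include <linux/lockdep.h>

/* Open-coded variant: always compiled in, reads the irq flags on every call. */
static inline void example_assert_irqs_on_open_coded(void)
{
	WARN_ON_ONCE(irqs_disabled());
}

/* Lockdep variant: compiles away entirely without CONFIG_PROVE_LOCKING. */
static inline void example_assert_irqs_on_lockdep(void)
{
	lockdep_assert_irqs_enabled();
}

The first one also warns on production kernels at a small runtime cost; the
second only fires on lockdep-enabled debug builds, which is the tradeoff
being weighed above.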
Thanks,
Davidlohr
-------8<--------------------------------------------------------
[PATCH -next 3/2] fs/epoll: robustify irq safety with lockdep_assert_irqs_enabled()
Sprinkle lockdep_assert_irqs_enabled() checks in the functions that
do not save and restore interrupts when dealing with ep->wq.lock.
These are ep_scan_ready_list() and those called by epoll_ctl():
ep_insert(), ep_modify() and ep_remove().
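
To illustrate why these callers must run with interrupts enabled (a sketch
with a hypothetical helper, not code taken from this patch): they use the
non-saving lock variants, and the unlock side unconditionally re-enables
interrupts, so entering with irqs already off would silently turn them
back on.

#include <linux/spinlock.h>
#include <linux/lockdep.h>

/* Hypothetical helper, for illustration only. */
static void example_touch_ready_list(spinlock_t *lock)
{
	lockdep_assert_irqs_enabled();	/* caller must have irqs on */

	spin_lock_irq(lock);		/* disables irqs, does not save flags */
	/* ... manipulate the ready list ... */
	spin_unlock_irq(lock);		/* unconditionally re-enables irqs */
}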
Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
---
fs/eventpoll.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 1b1abc461fc0..97b9b73dfec8 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -670,6 +670,9 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
struct epitem *epi, *nepi;
LIST_HEAD(txlist);
+ /* must not be called with irqs off */
+ lockdep_assert_irqs_enabled();
+
/*
* We need to lock this because we could be hit by
* eventpoll_release_file() and epoll_ctl().
@@ -764,6 +767,9 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
{
struct file *file = epi->ffd.file;
+ /* must not be called with irqs off */
+ lockdep_assert_irqs_enabled();
+
/*
* Removes poll wait queue hooks.
*/
@@ -1412,6 +1418,9 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
struct epitem *epi;
struct ep_pqueue epq;
+ /* must not be called with irqs off */
+ lockdep_assert_irqs_enabled();
+
user_watches = atomic_long_read(&ep->user->epoll_watches);
if (unlikely(user_watches >= max_user_watches))
return -ENOSPC;
@@ -1540,6 +1549,9 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi,
int pwake = 0;
poll_table pt;
+ /* must not be called with irqs off */
+ lockdep_assert_irqs_enabled();
+
init_poll_funcptr(&pt, NULL);
/*
--
2.16.4