Re: [PATCH 4/5] mm/swapfile: refcount block and queue before using blkcg_schedule_throttle()
From: Christoph Hellwig
Date: Wed Apr 15 2020 - 03:36:17 EST
On Wed, Apr 15, 2020 at 12:27:12AM -0700, Christoph Hellwig wrote:
> On Wed, Apr 15, 2020 at 05:42:34AM +0000, Luis Chamberlain wrote:
> > > I don't understand the atomic part of the comment. How does
> > > bdgrab/bdput help us there?
> >
> > The commit log above did a better job at explaining this in terms of our
> > goal to use the request_queue and how this use would prevent the risk of
> > releasing the request_queue, which could sleep.
>
> So bdput eventually does and iput, but what leads to an out of context
> offload?
>
> But anyway, isn't the original problem better solved by simply not
> releasing the queue from atomic context to start with? There isn't
> really any good reason we keep holding the spinlock once we have a
> reference on the queue, so something like this (not even compile tested)
> should do the work:
Actually - mem_cgroup_throttle_swaprate already checks for a non-NULL
current->throttle_queue above, so we should never even call
blk_put_queue here. Was this found by code inspection, or was there
a real report?
In the latter case we need to figure out what protects ->throttle_queue,
as the way blkcg_schedule_throttle first puts the old reference and only
then assigns a new value to it already looks like it introduces a tiny
race window.
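For reference, the ordering in question looks roughly like this (a
simplified sketch of blkcg_schedule_throttle from memory, not the exact
upstream source):

```c
/* Sketch of the ordering that looks racy, simplified for illustration. */
void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay)
{
	if (unlikely(current->flags & PF_KTHREAD))
		return;

	if (!blk_get_queue(q))
		return;

	if (current->throttle_queue)
		blk_put_queue(current->throttle_queue);	/* old ref dropped here... */
	current->throttle_queue = q;			/* ...before the new value lands */

	if (use_memdelay)
		current->use_memdelay = use_memdelay;
	set_notify_resume(current);
}
```

Between the blk_put_queue and the assignment, ->throttle_queue still
points at a queue whose reference has already been dropped, so anything
that can observe the field concurrently would need some form of
protection.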
Otherwise just open coding the applicable part of blkcg_schedule_throttle
in mem_cgroup_throttle_swaprate seems easiest:
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..e16051ef074c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3761,15 +3761,20 @@ void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
*/
if (current->throttle_queue)
return;
+ if (unlikely(current->flags & PF_KTHREAD))
+ return;
spin_lock(&swap_avail_lock);
plist_for_each_entry_safe(si, next, &swap_avail_heads[node],
avail_lists[node]) {
- if (si->bdev) {
- blkcg_schedule_throttle(bdev_get_queue(si->bdev),
- true);
- break;
+ if (!si->bdev)
+ continue;
+ if (blk_get_queue(bdev_get_queue(si->bdev))) {
+ current->throttle_queue = bdev_get_queue(si->bdev);
+ current->use_memdelay = true;
+ set_notify_resume(current);
}
+ break;
}
spin_unlock(&swap_avail_lock);
}