Re: [PATCH v2] sbitmap: fix io hung due to race on sbitmap_word::cleared

From: YangYang
Date: Fri Jun 07 2024 - 00:23:54 EST


On 2024/6/6 23:48, Ming Lei wrote:
On Thu, Jun 06, 2024 at 04:55:17PM +0800, YangYang wrote:
On 2024/6/4 14:12, Yu Kuai wrote:
Hi,

On 2024/06/04 11:25, Ming Lei wrote:
On Tue, Jun 4, 2024 at 11:12 AM Yang Yang <yang.yang@xxxxxxxx> wrote:

Configuration for sbq:
   depth=64, wake_batch=6, shift=6, map_nr=1

1. There are 64 requests in progress:
   map->word = 0xFFFFFFFFFFFFFFFF
2. After all 64 requests complete and no more requests come:
   map->word = 0xFFFFFFFFFFFFFFFF, map->cleared = 0xFFFFFFFFFFFFFFFF
3. Now two tasks try to allocate requests:
   T1:                                       T2:
   __blk_mq_get_tag                          .
   __sbitmap_queue_get                       .
   sbitmap_get                               .
   sbitmap_find_bit                          .
   sbitmap_find_bit_in_word                  .
   __sbitmap_get_word  -> nr=-1              __blk_mq_get_tag
   sbitmap_deferred_clear                    __sbitmap_queue_get
   /* map->cleared=0xFFFFFFFFFFFFFFFF */     sbitmap_find_bit
     if (!READ_ONCE(map->cleared))           sbitmap_find_bit_in_word
       return false;                         __sbitmap_get_word -> nr=-1
     mask = xchg(&map->cleared, 0)           sbitmap_deferred_clear
     atomic_long_andnot()                    /* map->cleared=0 */
                                               if (!READ_ONCE(map->cleared))
                                                 return false;
                                      /*
                                       * map->cleared was cleared by T1,
                                       * so T2 fails to acquire a tag
                                       */

4. T2 is the sole tag waiter. When T1 puts the tag, T2 cannot be woken
up because wake_batch is set to 6. If no more requests come, T2 will
wait here indefinitely.
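
For reference: completions do not clear ->word directly;
sbitmap_deferred_clear_bit() (include/linux/sbitmap.h) only marks bits in
->cleared, which is why ->word still reads all ones in step 2. Both tasks
in step 3 then race through the pre-patch sbitmap_deferred_clear(),
reconstructed roughly here from the hunks below:

static inline void sbitmap_deferred_clear_bit(struct sbitmap *sb, unsigned int bitnr)
{
	unsigned long *addr = &sb->map[SB_NR_TO_INDEX(sb, bitnr)].cleared;

	set_bit(SB_NR_TO_BIT(sb, bitnr), addr);
}

static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
{
	unsigned long mask;

	/* racy: T2 can observe 0 here right after T1's xchg() below */
	if (!READ_ONCE(map->cleared))
		return false;

	/* first get a stable cleared mask, setting the old mask to 0 */
	mask = xchg(&map->cleared, 0);

	/* now clear the masked bits in our free word */
	atomic_long_andnot(mask, (atomic_long_t *)&map->word);
	BUILD_BUG_ON(sizeof(atomic_long_t) != sizeof(map->word));
	return true;
}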

To fix this issue, simply revert commit 661d4f55a794 ("sbitmap:
remove swap_lock"), which introduced the race.

I'd suggest adding the following words to the commit log:

The check on ->cleared and the updates to both ->cleared and ->word need
to be done atomically, and using a spinlock could be the simplest solution.

Otherwise, the patch looks fine to me.

Maybe I'm a noob, but I'm confused how this can fix the problem; it
looks like the race condition doesn't change.

In sbitmap_find_bit_in_word():

1) __sbitmap_get_word() reads word;
2) sbitmap_deferred_clear() clears cleared;
3) sbitmap_deferred_clear() updates word;

2) and 3) are done atomically, while 1) can still run concurrently with 3):

t1:
sbitmap_find_bit_in_word
 __sbitmap_get_word
 -> read old word, return -1
        t2:
        sbitmap_find_bit_in_word
         __sbitmap_get_word
         -> read old word, return -1
 sbitmap_deferred_clear
 -> clear cleared and update word
        sbitmap_deferred_clear
        -> cleared is cleared, fail
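
For reference, the loop both t1 and t2 are running here is
sbitmap_find_bit_in_word() in lib/sbitmap.c (sketched from the upstream
source; it is also the function the last hunk of the patch below touches):

	do {
		nr = __sbitmap_get_word(&map->word, depth,
					alloc_hint, wrap);
		if (nr != -1)
			break;
		/* once this fails, the stale word read above is never retried */
		if (!sbitmap_deferred_clear(map))
			break;
	} while (1);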

BTW, I still think it's fine to fix this problem by trying
__sbitmap_get_word() at least one more time if it failed.
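
In the loop above, that retry could look something like this (purely
illustrative, not a tested patch):

		if (!sbitmap_deferred_clear(map)) {
			/*
			 * A racing task may have just folded ->cleared into
			 * ->word; scan the word one last time before failing.
			 */
			nr = __sbitmap_get_word(&map->word, depth,
						alloc_hint, wrap);
			break;
		}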

How about this one:
1. Add an extra check in sbitmap_find_bit_in_word(), based on Yu Kuai's
suggestion.
2. Change atomic_long_andnot() to atomic_long_fetch_andnot_release().

---
 include/linux/sbitmap.h |  5 +++++
 lib/sbitmap.c           | 23 ++++++++++++++++++-----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index d662cf136021..ec0b0e73c906 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -36,6 +36,11 @@ struct sbitmap_word {
 	 * @cleared: word holding cleared bits
 	 */
 	unsigned long cleared ____cacheline_aligned_in_smp;
+
+	/**
+	 * @swap_lock: Held while swapping word <-> cleared
+	 */
+	spinlock_t swap_lock;
 } ____cacheline_aligned_in_smp;
 
 /**
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 1e453f825c05..63dadf91e40b 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -63,9 +63,13 @@ static inline void update_alloc_hint_after_get(struct sbitmap *sb,
 static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
 {
 	unsigned long mask;
+	bool ret = false;
+	unsigned long flags;
 
-	if (!READ_ONCE(map->cleared))
-		return false;
+	spin_lock_irqsave(&map->swap_lock, flags);
+
+	if (!map->cleared)
+		goto out_unlock;
 
 	/*
 	 * First get a stable cleared mask, setting the old mask to 0.
@@ -75,9 +79,12 @@ static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
 	/*
 	 * Now clear the masked bits in our free word
 	 */
-	atomic_long_andnot(mask, (atomic_long_t *)&map->word);
+	atomic_long_fetch_andnot_release(mask, (atomic_long_t *)&map->word);
 	BUILD_BUG_ON(sizeof(atomic_long_t) != sizeof(map->word));
-	return true;
+	ret = true;
+out_unlock:
+	spin_unlock_irqrestore(&map->swap_lock, flags);
+	return ret;
 }
 
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
@@ -85,6 +92,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 		      bool alloc_hint)
 {
 	unsigned int bits_per_word;
+	int i;
 
 	if (shift < 0)
 		shift = sbitmap_calculate_shift(depth);
@@ -116,6 +124,9 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 		return -ENOMEM;
 	}
 
+	for (i = 0; i < sb->map_nr; i++)
+		spin_lock_init(&sb->map[i].swap_lock);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(sbitmap_init_node);
@@ -175,11 +186,13 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
 	int nr;
 
 	do {
+		unsigned long cleared = READ_ONCE(map->cleared);
+

->cleared is stored in a standalone cacheline, so the above line adds one
extra L1 load. I still prefer to check ->word & ->cleared in
sbitmap_deferred_clear().
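
For context, struct sbitmap_word (include/linux/sbitmap.h, abridged)
deliberately keeps the two words on separate cachelines:

struct sbitmap_word {
	/* @word: word holding free/allocated bits */
	unsigned long word;

	/* @cleared: word holding cleared bits */
	unsigned long cleared ____cacheline_aligned_in_smp;
} ____cacheline_aligned_in_smp;

so reading ->cleared in the allocation fast path pulls in a cacheline the
hot loop otherwise never touches.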

Yes, that's very reasonable.
I have sent V3, which does the check in sbitmap_deferred_clear(), as you
suggested: when ->cleared is empty, the word itself is now checked under
the lock for a free bit:

+	if (!map->cleared) {
+		ret = find_first_zero_bit(&map->word, depth) >= depth ? false : true;
+		goto out_unlock;
+	}

https://lore.kernel.org/linux-block/20240606135723.2794-1-yang.yang@xxxxxxxx/T/#u

Thanks.