Re: [PATCH v2] block: Fix general protection fault in bio_integrity_map_user()

From: Christoph Hellwig

Date: Tue Mar 31 2026 - 03:17:19 EST


On Mon, Mar 30, 2026 at 07:02:56PM -0400, Sungwoo Kim wrote:
> pin_user_pages_fast() can partially succeed and return the number of
> pages that were actually pinned. However, bio_integrity_map_user()
> does not handle this partial pinning. This leads to a general protection
> fault, since bvec_from_pages() dereferences an unpinned page pointer,
> which is NULL.

Can you share the reproducer, or even better wire it up to blktests?

>
> To fix this, add a check to verify that all requested memory is pinned.
>
> KASAN splat:
>
> Oops: general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN PTI
> KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
> RIP: 0010:_compound_head home/wukong/fuzznvme/linux/./include/linux/page-flags.h:240 [inline]
> RIP: 0010:bvec_from_pages home/wukong/fuzznvme/linux/block/bio-integrity.c:290 [inline]
>
> Fixes: 492c5d455969 ("block: bio-integrity: directly map user buffers")
> Acked-by: Chao Shi <cshi008@xxxxxxx>
> Acked-by: Weidong Zhu <weizhu@xxxxxxx>
> Acked-by: Dave Tian <daveti@xxxxxxxxxx>
> Signed-off-by: Sungwoo Kim <iam@xxxxxxxxxxxx>
> ---
> V1: https://lore.kernel.org/linux-block/20260308001358.1675543-2-iam@xxxxxxxxxxxx/T/#u
> V1->V2:
> - v1 incorrectly assumed pin_user_pages_fast() returns bytes. Fixed.
>
> block/bio-integrity.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/block/bio-integrity.c b/block/bio-integrity.c
> index 20f5d301d32d..992ce39e8ab9 100644
> --- a/block/bio-integrity.c
> +++ b/block/bio-integrity.c
> @@ -338,6 +338,15 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
> extraction_flags, &offset);
> if (unlikely(ret < 0))
> goto free_bvec;
> + if (unlikely(ret != nr_vecs)) {
> + for (int i = 0; i < ret; i++)
> + unpin_user_page(pages[i]);

I guess this works fine even for a negative ret, but it looks
really odd.

> + if (pages != stack_pages)
> + kvfree(pages);
> + ret = -EFAULT;
> + goto free_bvec;

This now loses the original return value if it already was
negative.

I think the better fix here would be to switch to
iov_iter_extract_bvecs, but that might be a bit too big for
a backportable bugfix, so I guess we should merge your patch
first once it is fixed up.