Re: [PATCH bpf-next v2 06/10] xsk: Add check for unaligned descriptors that overrun UMEM

From: Magnus Karlsson
Date: Mon Apr 03 2023 - 08:23:32 EST


On Wed, 29 Mar 2023 at 20:11, Kal Conley <kal.conley@xxxxxxxxxxx> wrote:
>
> Make sure unaligned descriptors that straddle the end of the UMEM are
> considered invalid. This check needs to happen before the page boundary
> and contiguity checks in xp_desc_crosses_non_contig_pg(). Check this in
> xp_unaligned_validate_desc() instead like xp_check_unaligned() already
> does.
>
> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Kal Conley <kal.conley@xxxxxxxxxxx>
> ---
> include/net/xsk_buff_pool.h | 9 ++-------
> net/xdp/xsk_queue.h | 1 +
> 2 files changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> index 3e952e569418..d318c769b445 100644
> --- a/include/net/xsk_buff_pool.h
> +++ b/include/net/xsk_buff_pool.h
> @@ -180,13 +180,8 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
> if (likely(!cross_pg))
> return false;
>
> - if (pool->dma_pages_cnt) {
> - return !(pool->dma_pages[addr >> PAGE_SHIFT] &
> - XSK_NEXT_PG_CONTIG_MASK);
> - }
> -
> - /* skb path */
> - return addr + len > pool->addrs_cnt;
> + return pool->dma_pages_cnt &&
> + !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
> }
>
> static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index bfb2a7e50c26..66c6f57c9c44 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -162,6 +162,7 @@ static inline bool xp_unaligned_validate_desc(struct xsk_buff_pool *pool,
> return false;
>
> if (base_addr >= pool->addrs_cnt || addr >= pool->addrs_cnt ||
> + addr + desc->len > pool->addrs_cnt ||
> xp_desc_crosses_non_contig_pg(pool, addr, desc->len))
> return false;
>

Let me just check that I understand the conditions under which this
occurs. When selecting unaligned mode, there is no check that the size
is divisible by the chunk_size, as there is in aligned mode. So we can
register a umem that is, for example, 15 4K pages plus 100 bytes. In
this case the second-to-last page will be marked as contiguous (with
XSK_NEXT_PG_CONTIG_MASK), and a packet of length 300 starting at
15*4K - 100 will be considered valid even though it extends 100 bytes
outside the umem, which ends at 15*4K + 100. Did I get this correct?
If so, some more color in the commit message would be highly
appreciated.

The best way around this would have been to make sure that the umem
size is always divisible by PAGE_SIZE, but as there are users out
there that might have an unaligned umem of a slightly odd size, we
cannot risk breaking their programs. PAGE_SIZE is also architecture
dependent, and even configurable on some architectures. So I think
your solution here is the right one.

This one should be considered a bug fix too and go to bpf. Good
catch, if I understood the problem correctly above.



> --
> 2.39.2
>