[PATCH net-next 05/12] mm: Make the page_frag_cache allocator handle __GFP_ZERO itself

From: David Howells
Date: Wed May 24 2023 - 11:36:30 EST


Make the page_frag_cache allocator handle __GFP_ZERO itself rather than
passing it through to the page allocator. The cache may serve a mix of
callers, some specifying __GFP_ZERO and some not; and even if every caller
specifies __GFP_ZERO, the allocator may refurbish (reuse) an exhausted page
once all of its fragments have been freed, in which case the memory
returned from it doesn't get cleared.

As it stands, this is a potential bug in the nvme over TCP driver, which
passes __GFP_ZERO when allocating from a page_frag_cache and relies on
getting zeroed memory back.
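
To illustrate the hazard, here is a minimal, hypothetical sketch (the
function name, the 2048-byte fragment size and the loop bound are
illustrative only, not taken from this patch):

	static void frag_zero_demo(void)
	{
		struct page_frag_cache cache = {};
		void *p;
		int i;

		/* Dirty and release enough fragments to use up the page. */
		for (i = 0; i < PAGE_FRAG_CACHE_MAX_SIZE / 2048; i++) {
			p = page_frag_alloc(&cache, 2048,
					    GFP_ATOMIC | __GFP_ZERO);
			if (!p)
				return;
			memset(p, 0xff, 2048);
			page_frag_free(p);
		}

		/*
		 * The page is now exhausted and every reference has been
		 * dropped, so the next allocation refurbishes the old page
		 * rather than taking a fresh one from the page allocator.
		 */
		p = page_frag_alloc(&cache, 2048, GFP_ATOMIC | __GFP_ZERO);
		/* Without this patch, p may still hold the 0xff bytes. */
	}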

Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
cc: Eric Dumazet <edumazet@xxxxxxxxxx>
cc: Jakub Kicinski <kuba@xxxxxxxxxx>
cc: Paolo Abeni <pabeni@xxxxxxxxxx>
cc: Jens Axboe <axboe@xxxxxxxxx>
cc: Jeroen de Borst <jeroendb@xxxxxxxxxx>
cc: Catherine Sullivan <csully@xxxxxxxxxx>
cc: Shailend Chand <shailend@xxxxxxxxxx>
cc: Felix Fietkau <nbd@xxxxxxxx>
cc: John Crispin <john@xxxxxxxxxxx>
cc: Sean Wang <sean.wang@xxxxxxxxxxxx>
cc: Mark Lee <Mark-MC.Lee@xxxxxxxxxxxx>
cc: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
cc: Matthias Brugger <matthias.bgg@xxxxxxxxx>
cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@xxxxxxxxxxxxx>
cc: Keith Busch <kbusch@xxxxxxxxxx>
cc: Jens Axboe <axboe@xxxxxx>
cc: Christoph Hellwig <hch@xxxxxx>
cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
cc: Chaitanya Kulkarni <kch@xxxxxxxxxx>
cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
cc: netdev@xxxxxxxxxxxxxxx
cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
cc: linux-mediatek@xxxxxxxxxxxxxxxxxxx
cc: linux-nvme@xxxxxxxxxxxxxxxxxxx
cc: linux-mm@xxxxxxxxx
---
mm/page_frag_alloc.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index ffd68bfb677d..2b73c7f5d9a9 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -23,7 +23,10 @@ static struct folio *page_frag_cache_refill(struct page_frag_cache *nc,
 					    gfp_t gfp_mask)
 {
 	struct folio *folio = NULL;
-	gfp_t gfp = gfp_mask;
+	gfp_t gfp;
+
+	gfp_mask &= ~__GFP_ZERO;
+	gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask |= __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
@@ -71,6 +74,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 {
 	struct folio *folio = nc->folio;
 	size_t offset;
+	void *p;
 
 	WARN_ON_ONCE(!is_power_of_2(align));
 
@@ -133,7 +137,10 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	offset &= ~(align - 1);
 	nc->offset = offset;
 
-	return folio_address(folio) + offset;
+	p = folio_address(folio) + offset;
+	if (gfp_mask & __GFP_ZERO)
+		return memset(p, 0, fragsz);
+	return p;
 }
 EXPORT_SYMBOL(page_frag_alloc_align);