Re: [PATCH] mm: add stackdepot information on page->private for tracking

From: Zhaoyang Huang
Date: Sat Oct 08 2022 - 22:26:18 EST


On Fri, Oct 7, 2022 at 6:08 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 10/6/22 05:19, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > page->private is vacant for most non-LRU pages unless the owner has
> > explicitly operated on it via set_page_private(). I would like to
> > introduce stackdepot information on page->private as a simplified
> > tracking mechanism that can help find memory leaks in kernel drivers.
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
>
> This duplicates the existing page_owner functionality in a way that
> unconditionally adds overhead to all kernels that have CONFIG_STACKDEPOT
> enabled at build time (and also misses the need to initialize stackdepot properly).
Sure. This patch can be seen as a lightweight complement to page_owner,
which depends on procfs on a live system to show its results. This patch
is mainly helpful for RAM dump analysis, where it is hard to locate the
page_ext data that page_owner uses. I would also like to make this
optional via a defconfig item, as sketched below.
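
A rough sketch of how it could be made optional, assuming a hypothetical
Kconfig symbol CONFIG_PAGE_PRIVATE_STACKDEPOT (default n, depending on
STACKDEPOT); the name is illustrative, not part of this patch:

#ifdef CONFIG_PAGE_PRIVATE_STACKDEPOT
static noinline depot_stack_handle_t set_track_prepare(void)
{
	unsigned long entries[16];
	unsigned int nr_entries;

	/* Skip the innermost frames and record the allocation stack. */
	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
	return stack_depot_save(entries, nr_entries, GFP_NOWAIT);
}
#else
/* Keep the allocation fast path untouched when the option is disabled. */
static inline depot_stack_handle_t set_track_prepare(void)
{
	return 0;
}
#endif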
>
> Also wouldn't be surprised if some existing page->private users were actually
> confused by the field suddenly being non-zero without their own action.
IMO, the existing page->private users will overwrite this field directly
without being disturbed by the handle; see the illustration below.
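
For illustration only (driver_attach_buf()/driver_detach_buf() are made-up
names, not taken from any real driver): an owner that already manages
page->private simply overwrites the handle, so the handle only survives
for pages whose owners never touch the field.

#include <linux/mm.h>

static void driver_attach_buf(struct page *page, void *buf)
{
	/* The depot handle stored by post_alloc_hook() is discarded here. */
	set_page_private(page, (unsigned long)buf);
}

static void driver_detach_buf(struct page *page)
{
	/* Field cleared; the allocation stack can no longer be looked up. */
	set_page_private(page, 0);
}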

>
> > ---
> > mm/page_alloc.c | 28 +++++++++++++++++++++++++++-
> > 1 file changed, 27 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index e5486d4..b79a503 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -75,6 +75,7 @@
> > #include <linux/khugepaged.h>
> > #include <linux/buffer_head.h>
> > #include <linux/delayacct.h>
> > +#include <linux/stackdepot.h>
> > #include <asm/sections.h>
> > #include <asm/tlbflush.h>
> > #include <asm/div64.h>
> > @@ -2464,6 +2465,25 @@ static inline bool should_skip_init(gfp_t flags)
> > return (flags & __GFP_SKIP_ZERO);
> > }
> >
> > +#ifdef CONFIG_STACKDEPOT
> > +static noinline depot_stack_handle_t set_track_prepare(void)
> > +{
> > + depot_stack_handle_t trace_handle;
> > + unsigned long entries[16];
> > + unsigned int nr_entries;
> > +
> > + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
> > + trace_handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
> > +
> > + return trace_handle;
> > +}
> > +#else
> > +static inline depot_stack_handle_t set_track_prepare(void)
> > +{
> > + return 0;
> > +}
> > +#endif
> > +
> > inline void post_alloc_hook(struct page *page, unsigned int order,
> > gfp_t gfp_flags)
> > {
> > @@ -2471,8 +2491,14 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> > !should_skip_init(gfp_flags);
> > bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
> > int i;
> > + depot_stack_handle_t stack_handle = set_track_prepare();
> >
> > - set_page_private(page, 0);
> > + /*
> > + * Users of page->private overwrite this field directly without
> > + * checking it first, and the page can still be traced. This also
> > + * does not affect the expected state when the page is freed.
> > + */
> > + set_page_private(page, stack_handle);
> > set_page_refcounted(page);
> >
> > arch_alloc_page(page, order);
>