Re: [PATCH 0/3] mm/page_owner: add options 'print_handle' and 'print_stack' for 'show_stacks'

From: Mauricio Faria de Oliveira
Date: Thu Sep 25 2025 - 15:40:04 EST


On 2025-09-25 13:08, Michal Hocko wrote:
> On Wed 24-09-25 14:40:20, Mauricio Faria de Oliveira wrote:
>> Problem:
>>
>> The use case of monitoring the memory usage per stack trace (or tracking
>> a particular stack trace) requires uniquely identifying each stack trace.
>>
>> This has to be done for every stack trace on every sample of monitoring,
>> even if tracking only one stack trace (to identify it among all others).
>>
>> Therefore, an approach like, for example, hashing the stack traces from
>> 'show_stacks' for calculating unique identifiers can become expensive.
>>
>> Solution:
>>
>> Fortunately, the kernel can provide a unique identifier for stack traces
>> in page_owner, which is the handle number in stackdepot.
>>
>> Additionally, with that information, the stack traces themselves are not
>> needed until the time when the memory usage should be associated with a
>> stack trace (say, to look at a few top consumers), using handle numbers.
>>
>> This eliminates hashing and reduces filtering related to stack traces in
>> userspace, and reduces text emitted/copied by the kernel.
>
> Let's see if I understand this correctly. You are suggesting trimming
> down the output to effectively a key, value pair and only resolving the
> key once per debugging session, because keys do not change and you do
> not need the full stack traces that map to the key. Correct?

Yes, exactly.

> Could you elaborate some more on why the performance really matters here?

Sure.

One reason is optimizing data processing.

Currently, the step to obtain the key of a stack trace (e.g., hashing)
incurs considerable work (done for all stack traces, on every sample)
that is actually duplicated work (the same result for each stack trace,
on every sample).

That calculation is a significant overhead compared to the operation
it's done for, which is '(calculated) key = memory usage'.

Thus, optimizing that step to just reading the key from the kernel
would save resources (processing) and time (e.g., waiting for results
to be ready, in post-processing; or reducing the time required per
sample, in live monitoring).
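As a concrete sketch of the difference in userspace post-processing
(the trace text and handle values below are purely illustrative, not
real page_owner output):

```python
import hashlib

# Hypothetical page_owner-style records: (stack trace text, nr_pages).
samples = [
    ("alloc_pages\n__get_free_pages\n", 4),
    ("alloc_pages\nkmalloc_order\n", 2),
    ("alloc_pages\n__get_free_pages\n", 3),
]

# Current approach: derive a key by hashing the full stack trace text.
# The hash is recomputed for every record on every sample, even though
# identical traces always produce the same key.
usage_by_hash = {}
for trace, pages in samples:
    key = hashlib.sha1(trace.encode()).hexdigest()
    usage_by_hash[key] = usage_by_hash.get(key, 0) + pages

# Proposed approach: the kernel already has a unique stackdepot handle
# per trace, so the key is simply read from the output, never computed.
samples_with_handle = [(0x1a2b, 4), (0x3c4d, 2), (0x1a2b, 3)]
usage_by_handle = {}
for handle, pages in samples_with_handle:
    usage_by_handle[handle] = usage_by_handle.get(handle, 0) + pages

print(usage_by_handle)  # per-key totals, with no hashing step at all
```

Both loops build the same 'key = memory usage' mapping; only the cost
of obtaining the key differs.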

Another reason is optimizing data collection.

There is some overhead in periodically waking up, reading, and storing
data, and later in filtering it. (Admittedly, much less significant
than the above.)

However, despite being a minor improvement, it actually prevents the
production of data that is discarded at consumption; that helps both
producer and consumer.

The cumulative improvement may be interesting over very long profiling
sessions.

Hope this addresses your question. Happy to provide more context or
details.

Thanks,

--
Mauricio