Re: Re: [PATCH] mm/page_alloc: enable alloc bulk when page owner is on
From: 杨欢
Date: Fri Jul 16 2021 - 22:02:46 EST
>> The last alloc bulk version had a bug: when page_owner is on, the system may
>> crash because alloc bulk invokes prep_new_page in an irq-disabled context,
>> and set_page_owner allocates memory (using the page's gfp flags, which may
>> include GFP_KERNEL) to save the stacktrace.
>> We fixed it with a workaround -- abandoning the alloc bulk path when
>> page_owner is set.
>> I think both alloc_bulk and page_owner are valuable, so it is worth finding
>> a way to enable alloc bulk while page owner is on.
>
>Why do you think it's valuable? The point of alloc_bulk is speed.
>Doing the allocation of a stack for each page removes the speed.
>Where's the value?
Without alloc bulk, we still have to allocate the pages one by one in a loop
calling alloc_page, and each of those calls also invokes set_page_owner to save
the stack. So I think finding a way to make alloc_bulk work here is valuable.
If you don't think so, I'm sorry for the noise.
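For illustration, a caller falling back to per-page allocation might look like
the minimal sketch below (fill_page_array is a hypothetical helper name and the
error handling is simplified); each alloc_page() still goes through
prep_new_page() -> set_page_owner(), so one stacktrace is saved per page either
way:

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Hypothetical caller-side fallback when the bulk API is not used. */
	static unsigned long fill_page_array(gfp_t gfp, struct page **pages,
					     unsigned long nr_pages)
	{
		unsigned long i;

		for (i = 0; i < nr_pages; i++) {
			/* Each call saves one stacktrace via set_page_owner(). */
			pages[i] = alloc_page(gfp);
			if (!pages[i])
				break;
		}
		return i;	/* number of pages actually allocated */
	}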
>If you're really interested in making both these things work together,
>then I observed that all of these pages are going to have the same
>allocation stack. Allocating that stack once per call to alloc_bulk,
>reference counting it and having each page use the shared stack would
>be a good way of combining the two (it'd even save memory!)
Yes, your idea is very good.
But how do we let set_page_owner know that these pages all share the same stack?
Maybe the caller could save the stack once and pass the depot handle down through
prep_new_page? But I think the caller shouldn't have to care about page tracing;
that is the page owner feature's responsibility.
Giving pages that share a stack the same magic number might be a good way, hmm.
But even if they share the same stack, set_page_owner still has to know how to
allocate memory for it correctly (unless the caller passes that in, which brings
us back to the responsibility question).
Up to now we have simply passed the page's gfp flags to it; in the alloc bulk
path that is not valid.
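Just to make the idea concrete, a minimal sketch of the "save once, share the
handle" direction could look like this. It assumes a hypothetical
set_page_owner_with_handle() variant (not an existing API); the bulk caller
captures its stacktrace once, outside the irq-disabled section, and every page
prepared in that bulk call reuses the same depot handle instead of saving the
stack per page:

	#include <linux/stackdepot.h>
	#include <linux/stacktrace.h>

	/* Save the bulk caller's stack once; may allocate, so this must run
	 * before irqs are disabled / the zone lock is taken.
	 */
	static depot_stack_handle_t bulk_save_caller_stack(gfp_t gfp)
	{
		unsigned long entries[16];
		unsigned int nr;

		nr = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
		return stack_depot_save(entries, nr, gfp);
	}

	/* Later, inside the irq-disabled bulk loop, something like
	 *
	 *	set_page_owner_with_handle(page, order, gfp, handle);
	 *
	 * would record the pre-saved handle without any further allocation.
	 */

That would keep the stack-saving cost to one depot lookup per bulk call, at the
price of passing a handle through the prep path.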
>This patch, I think, does not make sense.
Well, this is the simplest way I could find: it does not affect performance in
either the normal case or when PAGE_OWNER is on, yet it lets alloc bulk and
PAGE_OWNER work together.
If you don't think so, I'm sorry about that.