Re: [PATCH mm] mm/page_alloc: Avoid second trylock of zone->lock

From: Vlastimil Babka
Date: Mon Mar 31 2025 - 08:18:02 EST


On 3/31/25 12:52, Michal Hocko wrote:
> On Sun 30-03-25 17:28:09, Alexei Starovoitov wrote:
>> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>>
>> spin_trylock followed by spin_lock will cause an extra write cache
>> access. If the lock is contended it may cause unnecessary cache
>> line bouncing.

Right.

> and will execute redundant irq restore/save pair.

Maybe that part matters less if we're likely to have to spin anyway - at
least it doesn't affect other CPUs, unlike the bouncing.

>> Therefore, check alloc/fpi_flags first and use spin_trylock or
>> spin_lock.

Yeah, this should still be OK for the zone lock, as the fast paths use
pcplists, so we still shouldn't be making the page allocator slower due to
the try_alloc addition.

>> Suggested-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
>> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
>> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Makes sense. Fixes tag is probably overreaching but whatever.

It's fixing 6.15-rc1 code so no possible stable implications anyway.

> Acked-by: Michal Hocko <mhocko@xxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

> Thanks!
>
>> ---
>> mm/page_alloc.c | 15 +++++++++------
>> 1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e3ea5bf5c459..ffbb5678bc2f 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1268,11 +1268,12 @@ static void free_one_page(struct zone *zone, struct page *page,
>>  	struct llist_head *llhead;
>>  	unsigned long flags;
>>
>> -	if (!spin_trylock_irqsave(&zone->lock, flags)) {
>> -		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
>> +	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
>> +		if (!spin_trylock_irqsave(&zone->lock, flags)) {
>>  			add_page_to_zone_llist(zone, page, order);
>>  			return;
>>  		}
>> +	} else {
>>  		spin_lock_irqsave(&zone->lock, flags);
>>  	}
>>
>> @@ -2341,9 +2342,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>>  	unsigned long flags;
>>  	int i;
>>
>> -	if (!spin_trylock_irqsave(&zone->lock, flags)) {
>> -		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
>> +	if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
>> +		if (!spin_trylock_irqsave(&zone->lock, flags))
>>  			return 0;
>> +	} else {
>>  		spin_lock_irqsave(&zone->lock, flags);
>>  	}
>>  	for (i = 0; i < count; ++i) {
>> @@ -2964,9 +2966,10 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>>
>>  	do {
>>  		page = NULL;
>> -		if (!spin_trylock_irqsave(&zone->lock, flags)) {
>> -			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
>> +		if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
>> +			if (!spin_trylock_irqsave(&zone->lock, flags))
>>  				return NULL;
>> +		} else {
>>  			spin_lock_irqsave(&zone->lock, flags);
>>  		}
>>  		if (alloc_flags & ALLOC_HIGHATOMIC)
>> --
>> 2.47.1
>