Re: [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock

From: Mel Gorman
Date: Fri Apr 29 2022 - 05:05:32 EST


On Tue, Apr 26, 2022 at 12:24:56PM -0700, Minchan Kim wrote:
> > @@ -3450,10 +3496,19 @@ void free_unref_page(struct page *page, unsigned int order)
> > void free_unref_page_list(struct list_head *list)
> > {
> > struct page *page, *next;
> > + struct per_cpu_pages *pcp;
> > + struct zone *locked_zone;
> > unsigned long flags;
> > int batch_count = 0;
> > int migratetype;
> >
> > + /*
> > + * An empty list is possible. Check early so that the later
> > + * lru_to_page() does not potentially read garbage.
> > + */
> > + if (list_empty(list))
> > + return;
> > +
> > /* Prepare pages for freeing */
> > list_for_each_entry_safe(page, next, list, lru) {
> > unsigned long pfn = page_to_pfn(page);
> > @@ -3474,8 +3529,26 @@ void free_unref_page_list(struct list_head *list)
> > }
> > }
> >
> > + VM_BUG_ON(in_hardirq());
>
> You need to check the list_empty here again and bail out if it is.
>

You're right, every page could have failed to prepare or have been
isolated and freed directly, leaving the list empty by this point.
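
Something like the following untested sketch on top of this patch is
what I have in mind -- recheck after the prepare loop, before the
VM_BUG_ON and before any PCP lock is taken (comment wording is mine,
not final):

	/* Prepare pages for freeing */
	list_for_each_entry_safe(page, next, list, lru) {
		...
	}

	/*
	 * Preparation may have removed every page from the list,
	 * either because it failed to prepare or because it was
	 * isolated and freed directly. Recheck before going any
	 * further.
	 */
	if (list_empty(list))
		return;

	VM_BUG_ON(in_hardirq());

Placing the check ahead of the VM_BUG_ON keeps the no-op case cheap
and avoids taking the PCP lock for nothing.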

--
Mel Gorman
SUSE Labs