Re: [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock
From: Thomas Gleixner
Date: Wed Mar 31 2021 - 05:56:31 EST
On Mon, Mar 29 2021 at 13:06, Mel Gorman wrote:
> There is a lack of clarity about what exactly local_irq_save/local_irq_restore
> protects in page_alloc.c. It conflates the protection of per-cpu page
> allocation structures with that of per-cpu vmstat deltas.
>
> This patch protects the PCP structure with a local_lock which, for
> most configurations, is identical to IRQ enabling/disabling. The scope
> of the lock is still wider than it should be, but it is narrowed in
> later patches. The per-cpu vmstat deltas are protected by
> preempt_disable/preempt_enable where necessary instead of relying on
> IRQ disable/enable.
Yes, this goes in the right direction, and I really appreciate the
scoped protection for clarity's sake.
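
For reference, the split boils down to something like the sketch below.
This is a minimal illustration only: pcp_lock and pcp_fast_path are
made-up names, not the identifiers used in the patch.

	/* Illustrative sketch of the scoped protection, not patch code. */
	#include <linux/local_lock.h>
	#include <linux/percpu.h>
	#include <linux/preempt.h>

	static DEFINE_PER_CPU(local_lock_t, pcp_lock) = INIT_LOCAL_LOCK(pcp_lock);

	static void pcp_fast_path(void)
	{
		unsigned long flags;

		/* Per-cpu page lists: scoped local_lock (IRQ-off on !PREEMPT_RT) */
		local_lock_irqsave(&pcp_lock, flags);
		/* ... add/remove pages on the per-cpu lists ... */
		local_unlock_irqrestore(&pcp_lock, flags);

		/* Per-cpu vmstat deltas: disabling preemption is sufficient */
		preempt_disable();
		/* ... update the per-cpu vmstat deltas ... */
		preempt_enable();
	}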
> #ifdef CONFIG_MEMORY_HOTREMOVE
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8a8f1a26b231..01b74ff73549 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -887,6 +887,7 @@ void cpu_vm_stats_fold(int cpu)
>
> pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
>
> + preempt_disable();
What's the reason for the preempt_disable() here? A comment would be
appreciated.
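Even a guess spelled out in place would help, e.g. something along these
lines (the wording below is purely illustrative; the actual reason is
exactly what needs documenting):

	/*
	 * Illustrative only: presumably the deltas are updated with
	 * preemption-disabled this_cpu operations elsewhere, so the
	 * fold must not be interleaved with an updater on this CPU?
	 */
	preempt_disable();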
> for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
> if (pzstats->vm_stat_diff[i]) {
> int v;
> @@ -908,6 +909,7 @@ void cpu_vm_stats_fold(int cpu)
> global_numa_diff[i] += v;
> }
> #endif
> + preempt_enable();
> }
>
> for_each_online_pgdat(pgdat) {
> @@ -941,6 +943,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
> {
> int i;
>
> + preempt_disable();
Same here.
> for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
> if (pzstats->vm_stat_diff[i]) {
> int v = pzstats->vm_stat_diff[i];
> @@ -959,6 +962,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
> atomic_long_add(v, &vm_numa_stat[i]);
> }
> #endif
> + preempt_enable();
Thanks,
tglx