Re: [tip:numa/core] sched/numa/mm: Improve migration

From: Mel Gorman
Date: Mon Oct 22 2012 - 04:06:38 EST

On Thu, Oct 18, 2012 at 10:05:39AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: 713f937655c4b15131b5a0eae4610918a4febe17
> Gitweb:
> Author: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> AuthorDate: Fri, 12 Oct 2012 19:30:14 +0200
> Committer: Ingo Molnar <mingo@xxxxxxxxxx>
> CommitDate: Mon, 15 Oct 2012 14:18:40 +0200
> sched/numa/mm: Improve migration
> Add THP migration. Extend task_numa_fault() to absorb THP faults.
> [ Would be nice if the gents on Cc: expressed their opinion about
> this change. A missing detail might be cgroup page accounting,
> plus the fact that some architectures might cache PMD_NONE pmds
> in their TLBs, needing some extra TLB magic beyond what we already
> do here? ]

I'm travelling for a conference at the moment so will not get the chance
to properly review this until I get back. Is there any plan to post the
schednuma patches to linux-mm so the full series can be reviewed? I can
extract the patches from -tip when I get back but it's still less than
ideal from a review standpoint.

Superficially, the patch looks ok, but as I lack context on what the
rest of schednuma looks like I cannot be sure, so I'm not going to ack
it. Basically this is very similar to __unmap_and_move() except that it
doesn't deal with migration PTEs -- presumably because the PTE is
PROT_NONE, so faults that race with the migration queue up behind it.
There is a downside to that. With migration PTEs, faults during
migration wait on the PTE. With this approach, I think multiple racing
faults will each allocate a hugepage, realise the PTEs are no longer the
same and back off. It should still work, but it's potentially more
expensive. Was that considered? Is it deliberate? If so, why?

It also feels like the migration part should have been a helper function
called unmap_and_move_thp() in migrate.c instead of being buried in

Mel Gorman