Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for use with device memory v2
From: David Nellans
Date: Fri Jan 06 2017 - 11:46:28 EST
On 01/06/2017 10:46 AM, Jérôme Glisse wrote:
> This patch adds a new memory migration helper, which migrates memory
> backing a range of virtual addresses of a process to different memory
> (which can be allocated through a special allocator). It differs from
> NUMA migration by working on a range of virtual addresses, and thus by
> doing the migration in chunks that can be large enough to use a DMA
> engine or a special copy-offloading engine.
>
> Expected users are anyone with heterogeneous memory where different
> memories have different characteristics (latency, bandwidth, ...). As
> an example, an IBM platform with a CAPI bus can make use of this
> feature to migrate between regular memory and CAPI device memory. New
> CPU architectures with a pool of high-performance memory not managed
> as a cache but presented as regular memory (while being faster and
> lower latency than DDR) will also be prime users of this patch.
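
(For readers who haven't followed the series: a rough sketch of the kind
of per-range call a driver would make with such a helper. The names below
are purely illustrative, not the actual API from these patches.)

/* Illustrative only -- names are made up, not the API from this series.
 * The idea: the helper walks [start, end) of a vma, collects the backing
 * pages in one batch, and hands the whole chunk to driver callbacks so a
 * DMA or copy-offload engine can move it in one shot instead of page by
 * page.
 */
static const struct example_migrate_ops example_ops = {
        .alloc_and_copy   = example_alloc_and_copy,   /* allocate dst memory, kick off DMA */
        .finalize_and_map = example_finalize_and_map, /* wait for the copy, update page tables */
};

static int example_migrate_range(struct vm_area_struct *vma,
                                 unsigned long start, unsigned long end)
{
        return example_migrate(&example_ops, vma, start, end);
}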
Why should the normal page migration path (where neither src nor dst is
device private) use the hmm_migrate functionality? Patches 11-14
replicate a lot of the normal migration functionality, but with special
casing for HMM requirements. When migrating THPs or a list of pages
(your use case above), normal NUMA migration is going to want to do this
as fast as possible too (see Zi Yan's patches for multi-threading normal
migrations and a prototype using Intel IOAT for the transfers; he sees a
3-5x speedup).
If the intention is to provide a common interface hook for migration to
use DMA acceleration (which is a good idea), it probably shouldn't be
special-cased inside HMM functionality. For example, using Intel IOAT
for migration DMA has nothing to do with HMM whatsoever. We need a
normal migration path interface that allows DMA and isn't tied to HMM,
roughly along the lines of the sketch below.
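
Something like this (names hypothetical, just to make the point
concrete): a copy-offload hook that the core migration path could call
instead of copying pages on the CPU with copy_highpage(), so Intel IOAT,
a CAPI device, or any other engine can accelerate the copy without going
through HMM at all.

/* Hypothetical interface, not an existing kernel API: a way for the core
 * migration path to hand a batch of pages to a DMA engine instead of
 * copying them one at a time on the CPU.
 */
struct migrate_dma_ops {
        /* copy nr pages from src[] to dst[]; return 0 on success */
        int (*copy_pages)(struct page **dst, struct page **src,
                          unsigned int nr);
};

/* a driver (IOAT, CAPI, ...) registers its engine; migrate_pages() would
 * fall back to the CPU copy when no engine is registered or the call
 * fails
 */
int migrate_register_dma_ops(const struct migrate_dma_ops *ops);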