HMM (heterogeneous memory management) v5
From: j.glisse
Date: Mon Nov 03 2014 - 15:46:43 EST
Andrew, I have received no feedback since the last time I sent this patchset,
so I would really like to have it merged for the next kernel. While there is
currently no kernel driver that leverages this code, the hardware is coming,
and we still have a long way to go before we have all the features needed.
Right now all further work is blocked on the merge of this core code.
(Note that patch 5, the dummy driver, is included as a reference and should
not be merged unless you want me to grow it into some testing infrastructure.
I only include it here so people can see how HMM is supposed to be used.)
What is it?
In a nutshell, HMM is a subsystem that provides an easy-to-use API to mirror a
process address space on a device, with minimal hardware requirements (mainly
device page faults and read-only page mapping). It does not rely on the ATS
and PASID PCIe extensions. It intends to supersede those extensions by
allowing system memory to move to device memory in a fashion that is
transparent to core kernel mm code (i.e. a CPU page fault on a page residing
in device memory will trigger migration back to system memory).
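To make the intended usage concrete, here is a minimal sketch of how a device
driver would mirror a process address space. All names below only approximate
the shape of the API in this series, and my_gpu_invalidate_range() is a
made-up driver helper; patch 5 (the dummy driver) is the authoritative
example.

/* Illustrative sketch only: names approximate this series' API. */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/hmm.h>

struct my_gpu {
        struct hmm_device hdev;     /* per-device HMM state */
        struct hmm_mirror mirror;   /* per-(device, mm) mirror */
};

/*
 * HMM invokes a callback like this when the CPU side of a range
 * changes (munmap, mprotect, ...), so the driver can shoot down the
 * matching device page table entries.
 */
static void my_gpu_update(struct hmm_mirror *mirror,
                          unsigned long start, unsigned long end)
{
        struct my_gpu *gpu = container_of(mirror, struct my_gpu, mirror);

        my_gpu_invalidate_range(gpu, start, end); /* made-up helper */
}

Once a mirror is registered against a mm, any device page fault inside that
address space is resolved by HMM against the CPU page tables, which is why a
device page fault and read-only page mapping are the only hard hardware
requirements.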
Why do this?
We want to be able to mirror a process address space so that compute APIs such
as OpenCL can start using the exact same address space on the GPU as on the
CPU. This will greatly simplify the use of those APIs. Moreover, we believe
that we will see more and more specialized function units that will want to
mirror a process address space using their own MMU.
The migration side exists simply because GPU memory bandwidth is far beyond
system memory bandwidth, and there is no sign that this gap is closing (quite
the opposite).
Current status and future features:
None of this core code changes core kernel mm code in any major way. It is
simple groundwork with no impact on existing code paths. The features that
will be implemented on top of it are:
1 - Transparently handle page mapping on behalf of the device driver (DMA).
2 - Improve the DMA API to better match the new usage pattern of HMM.
3 - Migration of anonymous memory to device memory.
4 - Locking memory to remote memory (CPU access triggers SIGBUS).
5 - Access exclusion between CPU and device for atomic operations.
6 - Migration of file-backed memory to device memory.
How future features will be implemented:
1 - Simply use the existing DMA API to map pages on behalf of a device
    (sketched after this list).
2 - Introduce a new DMA API to match the new semantics of HMM. It is no
    longer pages that we map but address ranges, and updating which page
    effectively backs a given address should be easy. I gave a presentation
    about this at this year's LPC (a speculative sketch follows the list).
3 - Requires a change to the CPU page fault code path to handle migration
    back to system memory on CPU access. An implementation of this was
    already sent as part of v1. It will be low impact, only adding handling
    of a new special swap type to the existing fault code (sketched after
    this list).
4 - Requires a new syscall, as I cannot see which current syscall would be
    appropriate for this. My first thought was to use mbind, as it has the
    right semantics (binding a range of addresses to a device), but mbind
    is too NUMA-centric.
    My second was madvise, but the semantics do not match: madvise allows
    the kernel to ignore the hints, while we want to block CPU access for
    as long as the range is bound to a device.
    So I do not think any existing syscall can be extended with new flags,
    but maybe I am wrong.
5 - Allow mapping a page read-only on the CPU while a device performs an
    atomic operation on it (this is mainly to work around system buses that
    do not support atomic memory access, and sadly there is a large base of
    hardware without that feature).
    The easiest implementation would use a page flag, but there are none
    left. So it must be a flag in the vma indicating whether there is a need
    to query HMM for write protection (sketched after this list).
6 - This is the trickiest one to implement, and while I showed a proof of
    concept with v1, I still have a lot of conflicting feelings about how
    to achieve it.
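For point 1 above, the idea is nothing more than calling the existing
streaming DMA API for each system page the device faults on; HMM just does it
on the driver's behalf. A minimal sketch using only the stock API (the helper
name is mine):

#include <linux/dma-mapping.h>

/* Map one faulted-in system page for device access with the existing
 * streaming DMA API; nothing HMM-specific is needed for this step. */
static dma_addr_t my_map_one_page(struct device *dev, struct page *page)
{
        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, addr))
                return 0; /* caller treats 0 as failure in this sketch */
        return addr;
}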
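For point 2 above, a purely speculative sketch of what a range-based API
could look like (every name below is invented for illustration): the range
mapping stays stable while the page backing a given address is swapped
underneath it on migration.

/* Invented names: map an address range once, then update which page
 * backs a given address without remapping the whole range. */
struct dma_range;

struct dma_range *dma_range_map(struct device *dev,
                                unsigned long start, unsigned long size);
int dma_range_update_page(struct dma_range *range,
                          unsigned long addr, struct page *new_page);
void dma_range_unmap(struct dma_range *range);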
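For point 3 above, the shape of the change is small: when the CPU faults on a
PTE holding the new HMM swap type, migrate the page back instead of doing a
normal swap-in. A hypothetical sketch of the check added to the existing
fault path (is_hmm_entry() and hmm_mm_fault() are made-up names):

#include <linux/swapops.h>

/*
 * Hypothetical: called from do_swap_page() before normal swap-in.
 * The PTE is not present; decode the swap entry and, if it carries
 * the new HMM type, ask HMM to migrate the page back to system
 * memory and re-establish a present PTE.
 */
static int handle_hmm_entry(struct vm_area_struct *vma,
                            unsigned long address, pte_t orig_pte)
{
        swp_entry_t entry = pte_to_swp_entry(orig_pte);

        if (!is_hmm_entry(entry))         /* made-up helper */
                return VM_FAULT_FALLBACK; /* let normal swap-in run */

        return hmm_mm_fault(vma->vm_mm, vma, address, entry);
}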
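For point 5 above, with no page flags left the check has to hang off the vma;
a hypothetical sketch (VM_HMM_ATOMIC is a made-up flag, and reusing an arch
bit is only for illustration):

/* Made-up vma flag: set while a device holds the range for atomic
 * access, telling the write-fault path it must consult HMM first. */
#define VM_HMM_ATOMIC VM_ARCH_1 /* placeholder bit for illustration */

static bool need_hmm_write_check(struct vm_area_struct *vma)
{
        return vma->vm_flags & VM_HMM_ATOMIC;
}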
As usual, comments are more than welcome. Thanks in advance to anyone who
takes a look at this code.
Previous patchset postings:
v1 http://lwn.net/Articles/597289/
v2 https://lkml.org/lkml/2014/6/12/559 (cover letter did not make it to the list)
v3 https://lkml.org/lkml/2014/6/13/633
v4 https://lkml.org/lkml/2014/8/29/423
Cheers,
Jérôme
To: "Andrew Morton" <akpm@xxxxxxxxxxxxxxxxxxxx>,
Cc: <linux-kernel@xxxxxxxxxxxxxxx>,
Cc: linux-mm <linux-mm@xxxxxxxxx>,
Cc: <linux-fsdevel@xxxxxxxxxxxxxxx>,
Cc: "Linus Torvalds" <torvalds@xxxxxxxxxxxxxxxxxxxx>,
Cc: "Mel Gorman" <mgorman@xxxxxxx>,
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>,
Cc: "Peter Zijlstra" <peterz@xxxxxxxxxxxxx>,
Cc: "Linda Wang" <lwang@xxxxxxxxxx>,
Cc: "Kevin E Martin" <kem@xxxxxxxxxx>,
Cc: "Jerome Glisse" <jglisse@xxxxxxxxxx>,
Cc: "Andrea Arcangeli" <aarcange@xxxxxxxxxx>,
Cc: "Johannes Weiner" <jweiner@xxxxxxxxxx>,
Cc: "Larry Woodman" <lwoodman@xxxxxxxxxx>,
Cc: "Rik van Riel" <riel@xxxxxxxxxx>,
Cc: "Dave Airlie" <airlied@xxxxxxxxxx>,
Cc: "Jeff Law" <law@xxxxxxxxxx>,
Cc: "Brendan Conoboy" <blc@xxxxxxxxxx>,
Cc: "Joe Donohue" <jdonohue@xxxxxxxxxx>,
Cc: "Duncan Poole" <dpoole@xxxxxxxxxx>,
Cc: "Sherry Cheung" <SCheung@xxxxxxxxxx>,
Cc: "Subhash Gutti" <sgutti@xxxxxxxxxx>,
Cc: "John Hubbard" <jhubbard@xxxxxxxxxx>,
Cc: "Mark Hairgrove" <mhairgrove@xxxxxxxxxx>,
Cc: "Lucien Dunning" <ldunning@xxxxxxxxxx>,
Cc: "Cameron Buschardt" <cabuschardt@xxxxxxxxxx>,
Cc: "Arvind Gopalakrishnan" <arvindg@xxxxxxxxxx>,
Cc: "Haggai Eran" <haggaie@xxxxxxxxxxxx>,
Cc: "Or Gerlitz" <ogerlitz@xxxxxxxxxxxx>,
Cc: "Sagi Grimberg" <sagig@xxxxxxxxxxxx>
Cc: "Shachar Raindel" <raindel@xxxxxxxxxxxx>,
Cc: "Liran Liss" <liranl@xxxxxxxxxxxx>,
Cc: "Roland Dreier" <roland@xxxxxxxxxxxxxxx>,
Cc: "Sander, Ben" <ben.sander@xxxxxxx>,
Cc: "Stoner, Greg" <Greg.Stoner@xxxxxxx>,
Cc: "Bridgman, John" <John.Bridgman@xxxxxxx>,
Cc: "Mantor, Michael" <Michael.Mantor@xxxxxxx>,
Cc: "Blinzer, Paul" <Paul.Blinzer@xxxxxxx>,
Cc: "Morichetti, Laurent" <Laurent.Morichetti@xxxxxxx>,
Cc: "Deucher, Alexander" <Alexander.Deucher@xxxxxxx>,
Cc: "Gabbay, Oded" <Oded.Gabbay@xxxxxxx>,