RE: [HMM 00/15] HMM (Heterogeneous Memory Management) v23
From: Bridgman, John
Date: Fri Jun 16 2017 - 03:22:21 EST
Hi Jerome,
I'm just getting back to this; sorry for the late responses.
Your description of HMM talks about blocking CPU accesses when a page has been migrated to device memory, and you treat that as a "given" in the HMM design. Other than BAR limits, coherency between CPU and device caches, and performance of read-intensive CPU accesses to device memory, are there any other reasons for this?
The reason I'm asking is that we make fairly heavy use of large-BAR support, which allows the CPU to directly access all of the device memory on each of the GPUs, albeit without cache coherency, and there are some cases where it appears that letting the CPU access the page in place in device memory would be more efficient than constantly migrating it back and forth.
Migrating the page back and forth between device and system memory appears at first glance to provide three benefits, albeit at a cost (see the sketch of the fault path after this list):
1. BAR limit - this is kind of a no-brainer, in the sense that if the CPU cannot access the VRAM then you have to migrate the page
2. coherency - having the CPU fault when the page is in device memory (or vice versa) gives you an event that can be used to flush caches on one device before handing ownership (from a cache perspective) to the other - but at first glance you don't actually have to move the page to get that benefit
3. performance - CPU writes to device memory can be pretty fast since the transfers can be "fire and forget", but reads are always going to be slow because of their round-trip nature... still, the tradeoff between access performance and migration overhead is more of a heuristic thing than a black-and-white one
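To make sure I'm reading the design right, here is roughly the fault path I have in mind for the migrate-back case. This is only a sketch; the helpers gpu_flush_caches() and gpu_copy_to_system() are names I made up to stand in for the real driver paths, not anything from your patches:

    #include <linux/mm.h>
    #include <linux/gfp.h>

    /* made-up helpers standing in for the real driver paths */
    void gpu_flush_caches(struct page *page);
    void gpu_copy_to_system(struct page *dst, struct page *src);

    /*
     * Hypothetical driver-side fault handler: the CPU touched a page that
     * currently lives in device memory, so before the access can proceed
     * we flush the device caches that may hold dirty lines for the page
     * (the hand-off from point 2) and copy it back to a fresh system page
     * (the migration cost from points 1 and 3).
     */
    static int dev_page_fault(struct vm_area_struct *vma, unsigned long addr,
                              struct page *device_page)
    {
        struct page *new_page;

        gpu_flush_caches(device_page);  /* hand cache ownership to the CPU */

        new_page = alloc_page(GFP_HIGHUSER_MOVABLE);
        if (!new_page)
            return VM_FAULT_OOM;

        gpu_copy_to_system(new_page, device_page);

        /* map new_page at addr and free device_page (elided) */
        return VM_FAULT_NOPAGE;
    }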
Do you see any in-principle HMM-related problems with optionally leaving a page in device memory while the CPU is accessing it, assuming that only one CPU/device "owns" the page from a cache POV at any given time?
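Concretely, something like the following is what I am imagining for the leave-in-place case, reusing gpu_flush_caches() from the sketch above. Again, all the helpers here (should_migrate(), migrate_back(), set_page_owner_cpu(), device_page_to_bar_pfn()) are invented for illustration, and I'm assuming a mapping type for which vm_insert_pfn() is legal:

    /* invented helpers standing in for driver policy and bookkeeping */
    bool should_migrate(struct page *page);
    int migrate_back(struct vm_area_struct *vma, unsigned long addr,
                     struct page *page);
    void set_page_owner_cpu(struct page *page);
    unsigned long device_page_to_bar_pfn(struct page *page);

    /*
     * Hypothetical fault handler that sometimes leaves the page in device
     * memory: hand cache ownership to the CPU and map the page's BAR alias
     * instead of copying the data back. Only one side "owns" the page from
     * a cache POV at any given time.
     */
    static int dev_page_fault_in_place(struct vm_area_struct *vma,
                                       unsigned long addr,
                                       struct page *device_page)
    {
        if (should_migrate(device_page))   /* heuristic from point 3 */
            return migrate_back(vma, addr, device_page);

        gpu_flush_caches(device_page);     /* same hand-off as point 2 */
        set_page_owner_cpu(device_page);

        /* CPU reads through the BAR are slow, but writes can be posted */
        if (vm_insert_pfn(vma, addr, device_page_to_bar_pfn(device_page)))
            return VM_FAULT_SIGBUS;
        return VM_FAULT_NOPAGE;
    }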
Thanks,
John
(BTW, apologies for what looks like top-posting - I tried inserting the questions in a few different places in your patches, but each time it ended up messy.)
>-----Original Message-----
>From: owner-linux-mm@xxxxxxxxx [mailto:owner-linux-mm@xxxxxxxxx] On
>Behalf Of Jérôme Glisse
>Sent: Wednesday, May 24, 2017 1:20 PM
>To: akpm@xxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; linux-
>mm@xxxxxxxxx
>Cc: Dan Williams; Kirill A. Shutemov; John Hubbard; Jérôme Glisse
>Subject: [HMM 00/15] HMM (Heterogeneous Memory Management) v23
>
>Patchset is on top of git://git.cmpxchg.org/linux-mmotm.git so I test the same
>kernel as the kbuild system; git branch:
>
>https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-v23
>
>The changes since v22 are the use of a static key for the special ZONE_DEVICE
>case in put_page() and a build fix for architectures with no MMU.
>
>Everything else is the same. Below is the long description of what HMM is
>about and why it exists. At the end of this email I briefly describe each
>patch and suggest reviewers for each.