Re: [PATCH 09/10] mm/hmm: allow to mirror vma of a file on a DAX backed filesystem
From: Jerome Glisse
Date: Tue Jan 29 2019 - 16:21:57 EST
On Tue, Jan 29, 2019 at 12:51:25PM -0800, Dan Williams wrote:
> On Tue, Jan 29, 2019 at 11:32 AM Jerome Glisse <jglisse@xxxxxxxxxx> wrote:
> >
> > On Tue, Jan 29, 2019 at 10:41:23AM -0800, Dan Williams wrote:
> > > On Tue, Jan 29, 2019 at 8:54 AM <jglisse@xxxxxxxxxx> wrote:
> > > >
> > > > From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > > >
> > > > This adds support for mirroring a vma which is an mmap of a file
> > > > on a filesystem that uses a DAX block device. There is no reason
> > > > not to support that case.
> > > >
> > >
> > > The reason not to support it would be if it gets in the way of future
> > > DAX development. How does this interact with MAP_SYNC? I'm also
> > > concerned if this complicates DAX reflink support. In general I'd
> > > rather prioritize fixing the places where DAX is broken today before
> > > adding more cross-subsystem entanglements. The unit tests for
> > > filesystems (xfstests) are readily accessible. How would I go about
> > > regression testing DAX + HMM interactions?
> >
> > HMM mirrors the CPU page table, so anything you do to the CPU page
> > table will be reflected to all HMM mirror users. So MAP_SYNC has no
> > bearing here whatsoever, as all HMM mirror users must do cache-coherent
> > access to the ranges they mirror; from the DAX point of view this is
> > _exactly_ the same as CPU access.
> >
> > Note that you cannot migrate DAX memory to GPU memory, and thus for an
> > mmap of a file on a filesystem that uses a DAX block device you cannot
> > migrate to device memory. Also, at this time migration of file-backed
> > pages is only supported for cache-coherent device memory, for instance
> > on OpenCAPI platforms.
>
> Ok, this addresses the primary concern about maintenance burden. Thanks.
>
> However the changelog still amounts to a justification of "change
> this, because we can". At least, that's how it reads to me. Is there
> any positive benefit to merging this patch? Can you spell that out in
> the changelog?
There are three reasons for this:
1) Converting ODP to use HMM underneath, so that we share code between
   Infiniband ODP and GPU drivers. ODP supports DAX today, so I cannot
   convert ODP to HMM without also supporting DAX in HMM; otherwise I
   would regress the ODP features.
2) I expect people will be running GPGPU workloads on computers where
   files live on DAX, and they will want to use HMM there too. In fact,
   from the user-space point of view, whether a file is DAX or not should
   change only one thing: for a DAX file you will never be able to use
   GPU memory.
3) I want to convert as many users of GUP to HMM as possible (I have
   already posted several patchsets to the GPU mailing list for that,
   and I intend to post a v2 of those later on). Using HMM avoids GUP,
   and in particular it avoids the GUP pin: here we abide by mmu
   notifiers, so we do not inhibit any of the filesystem's regular
   operations. Some of those GPU drivers do allow GUP on DAX files, so
   again I cannot regress them.
> > Bottom line: you just have to worry about the CPU page table. Whatever
> > you do there will be reflected properly. It does not add any burden to
> > people working on DAX, unless you want to modify the CPU page table
> > without calling mmu notifiers; but in that case you would break not
> > only HMM mirror users but other things like KVM ...
> >
> >
> > For testing, the issue is: what do you want to test? Do you want to
> > test that a device properly mirrors some mmap of a file backed by DAX,
> > i.e. that device drivers which use HMM mirror keep working after
> > changes made to DAX?
> >
> > Or do you want to run a filesystem test suite using the GPU to access
> > mmap of the file (read or write) instead of the CPU? In that case any
> > such test suite would need to be updated to be able to use something
> > like OpenCL for that. At this time I do not see much need for that,
> > but maybe this is something people would like to see.
>
> In general, as HMM grows intercept points throughout the mm it would
> be helpful to be able to sanity check the implementation.
I usually use a combination of simple OpenCL programs and hand-tailored
direct ioctl hacks to force specific code paths to happen. I should
probably create a repository with a set of OpenCL tests so that others
can use them too. I need to clean those up into something less ugly
first, so that I am not ashamed of them.
Also, at this time the OpenCL bits are not in any distribution; most of
the bits are in Mesa, and Karol and others are doing a great job of
polishing things and getting all the bits in. I expect that in a couple
of months the mainline of all the projects involved (LLVM, Mesa, libdrm,
...) will have all the bits, and that they will then trickle down to your
favorite distribution (assuming it builds Mesa with OpenCL enabled).
Cheers,
Jérôme