Re: [RFC PATCH 1/2] mm,drm/ttm: Block fast GUP to TTM huge pages

From: Thomas Hellström (Intel)
Date: Wed Mar 24 2021 - 16:09:13 EST

On 3/24/21 7:31 PM, Christian König wrote:


Am 24.03.21 um 17:38 schrieb Jason Gunthorpe:
On Wed, Mar 24, 2021 at 04:50:14PM +0100, Thomas Hellström (Intel) wrote:
On 3/24/21 2:48 PM, Jason Gunthorpe wrote:
On Wed, Mar 24, 2021 at 02:35:38PM +0100, Thomas Hellström (Intel) wrote:

In an ideal world the creation/destruction of page table levels would
be dynamic at this point, like THP.
Hmm, but I'm not sure what problem we're trying to solve by changing the
interface in this way?
We are trying to make a sensible driver API to deal with huge pages.
Currently if the core vm requests a huge pud, we give it one, and if we
can't or don't want to (because of dirty-tracking, for example, which is
always done at the 4K page level) we just return VM_FAULT_FALLBACK, and
the fault is retried at a lower level.
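
Concretely, the current pattern is roughly the following sketch
(huge_fault, enum page_entry_size and VM_FAULT_FALLBACK are the real
interface of this era; the drv_* helpers are made up for illustration):

static vm_fault_t drv_huge_fault(struct vm_fault *vmf,
                                 enum page_entry_size pe_size)
{
        /* Dirty tracking is done at 4K granularity, so decline huge
         * entries and let the core vm retry the fault at a lower level.
         */
        if (drv_dirty_tracking_enabled(vmf->vma)) /* made-up helper */
                return VM_FAULT_FALLBACK;

        switch (pe_size) {
        case PE_SIZE_PMD:
                return drv_insert_huge_pmd(vmf); /* made-up helper */
        case PE_SIZE_PUD:
                return drv_insert_huge_pud(vmf); /* made-up helper */
        default:
                return VM_FAULT_FALLBACK;
        }
}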
Well, my thought would be to move the pte related stuff into
vmf_insert_range instead of recursing back via VM_FAULT_FALLBACK.

I don't know if the locking works out, but it feels cleaner that the
driver tells the vmf how big a page it can stuff in, rather than the vm
telling the driver to stuff in a page of a certain size that it might
not want to.

Some devices want to work on an in-between page size like 64k, so they
can't form 2M pages but they can stuff 64k worth of 4K pages in a batch
on every fault.
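
To make that idea concrete: vmf_insert_range() does not exist upstream,
but such an interface could look roughly like this hypothetical sketch,
where the driver hands the core vm the whole range it can serve and the
insert function picks entry sizes while walking:

/* Hypothetical interface, for illustration only. */
vm_fault_t vmf_insert_range(struct vm_fault *vmf, unsigned long pfn,
                            unsigned long size);

/* In a driver fault handler, e.g. for a device using 64k pages: */
return vmf_insert_range(vmf, dev_page_first_pfn, SZ_64K);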
Hmm, yes, but in that case we would anyway be limited to inserting
ranges no larger than the fault size, to avoid extensive and possibly
unnecessary checks for contiguous memory.
Why? The insert function is walking the page tables, it just updates
things as they are. It learns the arrangement for free while doing the
walk.

The device always has to provide consistent data; if it overlaps into
pages that are already populated, that is fine so long as it isn't
changing their addresses.

And then, if we can't support the full fault size, we'd need to either
presume a size and alignment for the next level or search for
contiguous memory in both directions around the fault address, perhaps
unnecessarily as well.
You don't really need to care about levels; the device should be
faulting in the largest memory regions it can within its efficiency
limits.

If it works on 4M pages then it should be faulting 4M pages. The page
size of the underlying CPU doesn't really matter much, other than some
tuning to impact how the device's allocator works.

Yes, but then we'd be adding a lot of complexity to this function that is already provided by the current interface for DAX, for little or no gain, at least in the drm/ttm setting. Consider the following situation: you get a fault, you do an extensive, time-consuming scan of the VRAM buffer object into which the fault goes, and you determine that you can fault 1GB. Now you hand that to vmf_insert_range(), and because the user-space address is misaligned, or already partly populated because of a previous eviction, you can only insert single pages, and you end up faulting a full GB of single pages, perhaps for a one-time small update.
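
The alignment constraint behind that scenario is easy to state in code
(vaddr and paddr are illustrative names; PUD_SIZE is the real level
size, 1GiB on x86-64):

/* A PUD-sized entry is only possible when the user VA and the
 * backing address share the same offset within a PUD_SIZE window.
 */
bool can_use_pud = ((vaddr ^ paddr) & (PUD_SIZE - 1)) == 0;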

On top of this, unless we want to do the walk trying increasingly smaller sizes of vmf_insert_xxx(), we'd have to use apply_to_page_range() and teach it about transhuge page table entries, because pagewalk.c can't be used (it can't populate page tables). That also means apply_to_page_range() would need to be complicated with page table locks, since transhuge pages aren't stable and can be zapped and refaulted under us while we do the walk.
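
For reference, this is the existing helper as of this era; its callback
only ever sees 4K PTEs, which is exactly the missing transhuge support
referred to above:

/* include/linux/mm.h (circa v5.12): a per-pte callback only, no
 * pmd- or pud-level variant.
 */
typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
                        unsigned long size, pte_fn_t fn, void *data);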

On top of this, the user-space address allocator needs to know how huge GPU pages are aligned within buffer objects, to have a reasonable chance of aligning with CPU huge page boundaries, which is a requirement for being able to insert a huge CPU page table entry; so the driver would basically need the drm helper that can do this alignment anyway.
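
What that helper has to achieve is, in essence, the following (a
sketch; hint and bo_offset are illustrative names, while ALIGN() and
HPAGE_PMD_SIZE are the real kernel macros):

/* Place the mapping so that the user VA and the buffer-object offset
 * are congruent modulo the huge page size, i.e. huge-page-aligned
 * parts of the BO land on correspondingly aligned user addresses.
 */
unsigned long align = HPAGE_PMD_SIZE; /* or a PUD-sized alignment */
unsigned long vaddr = ALIGN(hint, align) + (bo_offset & (align - 1));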

All this makes me think we should settle for the current interface for now, and if someone feels like refining it, I'm fine with that. After all, this isn't a strange drm/ttm invention; it's a pre-existing interface that we reuse.


I agree with Jason here.

We get the best efficiency when we look at what the GPU driver provides and make sure that we handle one GPU page at a time, instead of looking too much into what the CPU is doing with its page tables.

On at least AMD GPUs the GPU page size can be anything between 4KiB and 2GiB, and if we fill in a 2GiB chunk at once, this can in theory be handled by just two giant page table entries on the CPU side (2GiB is exactly two 1GiB PUD entries on x86-64).

Yes, but I fail to see why, with the current code, we can't do this (save the refcounting bug)?

/Thomas