On Thu, Mar 25, 2021 at 10:51:35AM +0100, Thomas Hellström (Intel) wrote:
> > Please explain that further. Why do we need the mmap lock to insert PMDs
> > but not when insert PTEs?
>
> We don't. But once you've inserted a PMD directory you can't remove it
> unless you have the mmap lock (and probably also the i_mmap_lock in write
> mode). That for example means that if you have a VRAM region mapped with
> huge PMDs, and then it gets evicted, and you happen to read a byte from it
> when it's evicted and therefore populate the full region with PTEs pointing
> to system pages, you can't go back to huge PMDs again without a munmap() in
> between.

This is all basically magic to me still, but THP does this
transformation and I think what it does could work here too. We
probably wouldn't be able to upgrade while handling fault, but at the
same time, this should be quite rare as it would require the driver to
have supplied a small page for this VMA at some point.
> > Apart from that I still don't fully get why we need this in the first
> > place.
>
> Because virtual huge page address boundaries need to be aligned with
> physical huge page address boundaries, and mmap can happen before bos are
> populated so you have no way of knowing how physical huge page
> address

But this is a mmap-time problem, fault can't fix mmap using the wrong VA.
> > I really don't see that either. When a buffer is accessed by the CPU it
> > is in > 90% of all cases completely accessed. Not faulting in full
> > ranges is just optimizing for a really unlikely case here.
>
> It might be that you're right, but are all drivers wanting to use this like
> drm in this respect? Using the interface to fault in a 1G range in the hope
> it could map it to a huge pud may unexpectedly consume and populate some 16+
> MB of page tables.

If the underlying device block size is so big then sure, why not? The
"unexpectedly" should be quite rare/non-existent anyhow.
Jason