On Mon, Oct 04, 2021 at 08:58:35AM +0200, Christian König wrote:
> I'm not following this discussion too closely, but try to look into it from
> time to time.
>
> Am 01.10.21 um 19:45 schrieb Jason Gunthorpe:
> > On Fri, Oct 01, 2021 at 11:01:49AM -0600, Logan Gunthorpe wrote:
> > > In device-dax, the refcount is only used to prevent the device, and
> > > therefore the pages, from going away on device unbind. Pages cannot be
> > > recycled, as you say, as they are mapped linearly within the device. The
> > > address space invalidation is done only when the device is unbound.
> >
> > By address space invalidation I mean invalidation of the VMA that is
> > pointing to those pages.
> >
> > device-dax may not have an issue with use-after-VMA-invalidation by
> > its very nature since every PFN always points to the same
> > thing. fsdax and this p2p stuff are different though.
> >
> > > Before the invalidation, an active flag is cleared to ensure no new
> > > mappings can be created while the unmap is proceeding.
> > > unmap_mapping_range() should sequence itself with the TLB flush and
> >
> > AFAIK unmap_mapping_range() kicks off the TLB flush and then
> > returns. It doesn't always wait for the flush to fully finish. Ie some
> > cases use RCU to lock the page table against GUP fast and so the
> > put_page() doesn't happen until the call_rcu completes - after a grace
> > period. The unmap_mapping_range() does not wait for grace periods.
>
> Wow, wait a second. That is quite a bummer. At least in all GEM/TTM based
> graphics drivers that could potentially cause a lot of trouble.
>
> I've just double checked and we certainly have the assumption that when
> unmap_mapping_range() returns the pte is gone and the TLB flush completed in
> quite a number of places.
>
> Do you have more information when and why that can happen?

There are two things to keep in mind, flushing the PTEs from the HW
and serializing against gup_fast.
If you start at unmap_mapping_range() the page is eventually
discovered in zap_pte_range() and the PTE cleared. It is then passed
into __tlb_remove_page(), which puts it on the batch->pages list.
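As a rough, non-literal sketch of that step (the _sketch name is mine
and plenty of detail is elided, but the shape is this):

static void zap_pte_range_sketch(struct mmu_gather *tlb, pte_t *ptep,
                                 unsigned long addr)
{
        /* Clear the PTE. From here the HW can't create new
         * translations, but stale TLB entries may still exist. */
        pte_t ptent = ptep_get_and_clear_full(tlb->mm, addr, ptep,
                                              tlb->fullmm);
        struct page *page = pte_page(ptent);

        /* Remember that this address still needs a HW TLB flush */
        tlb_remove_tlb_entry(tlb, ptep, addr);

        page_remove_rmap(page, false);

        /* The put_page() is deferred: the page goes onto the
         * batch->pages list and is only freed at flush time.
         * (The real code also flushes when the batch fills up.) */
        __tlb_remove_page(tlb, page);
}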
The page free happens in tlb_batch_pages_flush() via
free_pages_and_swap_cache().
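Ie something like this (simplified from mm/mmu_gather.c, the _sketch
name is mine):

static void tlb_batch_pages_flush_sketch(struct mmu_gather *tlb)
{
        struct mmu_gather_batch *batch;

        /* This is where the deferred put_page()s finally happen */
        for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
                free_pages_and_swap_cache(batch->pages, batch->nr);
                batch->nr = 0;
        }
        tlb->active = &tlb->local;
}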
The tlb_batch_pages_flush() happens via zap_page_range() ->
tlb_finish_mmu(), presumably after the HW has wiped the TLBs on all
CPUs. On x86 this is done with an IPI and also serializes gup fast, so
OK
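The reason the IPI also serializes gup fast is that the lockless walk
runs with IRQs off, so the flush IPI can't be acknowledged while a CPU
is inside the walk. Sketch, simplified from lockless_pages_from_mm()
in mm/gup.c (the _sketch name is mine):

static int gup_fast_sketch(unsigned long start, unsigned long end,
                           unsigned int gup_flags, struct page **pages)
{
        unsigned long flags;
        int nr_pinned = 0;

        /* IRQs off: an IPI-based TLB flush cannot complete while we
         * are in here, so on IPI-flushing architectures the page
         * tables and pages we walk cannot be freed under us. */
        local_irq_save(flags);
        gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
        local_irq_restore(flags);

        return nr_pinned;
}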
The interesting case is CONFIG_MMU_GATHER_RCU_TABLE_FREE which doesn't
rely on IPIs anymore to synchronize with gup-fast.
In this configuration, when unmap_mapping_range() returns the TLB
will have been flushed, but no serialization with GUP fast has been
done.
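The RCU side looks roughly like this (simplified from
tlb_table_flush()/tlb_remove_table_rcu() in mm/mmu_gather.c, _sketch
names mine; note it only covers page *table* pages):

static void tlb_remove_table_rcu_sketch(struct rcu_head *head)
{
        struct mmu_table_batch *batch =
                container_of(head, struct mmu_table_batch, rcu);
        unsigned int i;

        /* Runs after a grace period, ie after any IRQs-off gup-fast
         * walk that might still be dereferencing these tables. */
        for (i = 0; i < batch->nr; i++)
                __tlb_remove_table(batch->tables[i]);
        free_page((unsigned long)batch);
}

static void tlb_table_flush_sketch(struct mmu_gather *tlb)
{
        struct mmu_table_batch **batch = &tlb->batch;

        if (*batch) {
                /* No IPI: freeing is simply deferred past the RCU
                 * grace period. */
                call_rcu(&(*batch)->rcu, tlb_remove_table_rcu_sketch);
                *batch = NULL;
        }
}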
This is OK if the GUP fast cannot return the page at all. I assume
this generally describes the DRM cases?
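(For background, the usual reason gup fast cannot return such pages is
that VM_PFNMAP/pte_special() mappings are skipped by the lockless
walk. Sketch, simplified from the checks in gup_pte_range() in
mm/gup.c, _sketch name mine:)

static int gup_fast_can_return_sketch(pte_t pte)
{
        /* ZONE_DEVICE (fsdax/p2p) PTEs are pte_devmap() and *can* be
         * returned by gup-fast, subject to get_dev_pagemap() still
         * succeeding at walk time. */
        if (pte_devmap(pte))
                return 1;

        /* DRM/TTM style VM_PFNMAP mappings insert special PTEs; the
         * lockless walk bails on these and falls back to the slow
         * path, which holds mmap_lock and so is serialized against
         * the unmap. */
        if (pte_special(pte))
                return 0;

        return 1;
}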
However, if GUP fast can return the page then something, somewhere,
needs to serialize the page free against the RCU grace period, since
GUP fast can still be observing the old PTE (read before it was
zapped) until that grace period expires.
Relying on the page ref being !0 to protect GUP fast is not safe
because the page ref can be incr'd immediately upon page re-use.
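Concretely, the interleaving I'm worried about is something like this
(hypothetical timeline for illustration, assuming a configuration
where the flush does not IPI):

  CPU0: gup fast                     CPU1: caller of unmap_mapping_range()

  local_irq_save()
  read PTE, still points at page P
                                     zap_pte_range() clears the PTE
                                     TLB flushed (no IPI involved)
                                     unmap_mapping_range() returns
                                     caller frees P
                                     P is immediately reallocated, its
                                     refcount becomes non-zero again
  take a page ref on P  <- succeeds,
                           but P now
                           belongs to
                           someone else
  recheck the PTE, see it changed,
  drop the ref          <- too late,
                           the new owner
                           could observe
                           the stray ref
  local_irq_restore()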
Interestingly I looked around for this on PPC and I only found RCU
delayed freeing of the page table level, not RCU delayed freeing of
pages themselves... I wonder if it was missed?
There is a path on PPC (tlb_remove_table_sync_one) that triggers an
IPI but it looks like an exception, and we wouldn't need the RCU at
all if we used IPI to serialize GUP fast...
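That path is tiny; roughly (simplified from mm/mmu_gather.c, _sketch
name mine):

static void tlb_remove_table_smp_sync(void *arg)
{
        /* Empty on purpose: delivering the interrupt is the whole
         * point, it cannot happen while gup-fast has IRQs disabled. */
}

static void tlb_remove_table_sync_one_sketch(void)
{
        /* Wait for every CPU to take the (no-op) IPI */
        smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}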
It would make logical sense if, on CONFIG_MMU_GATHER_RCU_TABLE_FREE,
the RCU also freed the pages, so that anything returnable by GUP fast
is refcounted and freed by tlb_batch_pages_flush(), not by the caller
of unmap_mapping_range().
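Purely as a thought experiment (hypothetical code, both function names
and the rcu_head in the batch are invented, and I'm hand-waving over
the batch lifetime since the local batch lives inside the mmu_gather
itself), that would look something like:

static void tlb_batch_free_rcu(struct rcu_head *head)
{
        /* Assumes a hypothetical rcu_head member in mmu_gather_batch,
         * which does not exist today. */
        struct mmu_gather_batch *batch =
                container_of(head, struct mmu_gather_batch, rcu);

        /* The deferred put_page()s now happen only after the grace
         * period, ie after any concurrent gup-fast walker is done. */
        free_pages_and_swap_cache(batch->pages, batch->nr);
}

static void tlb_batch_pages_flush_rcu(struct mmu_gather *tlb)
{
        struct mmu_gather_batch *batch;

        for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
                call_rcu(&batch->rcu, tlb_batch_free_rcu);
}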
If we expect to allow the caller of unmap_mapping_range() to do the
free, then CONFIG_MMU_GATHER_RCU_TABLE_FREE can't really exist; we
would always need to trigger a serializing IPI during
tlb_batch_pages_flush().
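Or, equally hypothetically, keep the synchronous free but pay for an
IPI first:

static void tlb_batch_pages_flush_ipi(struct mmu_gather *tlb)
{
        struct mmu_gather_batch *batch;

        /* Hypothetical: force the empty-function IPI so any IRQs-off
         * gup-fast walk has finished before we free anything. */
        tlb_remove_table_sync_one();

        for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
                free_pages_and_swap_cache(batch->pages, batch->nr);
                batch->nr = 0;
        }
}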
AFAICT, at least
Jason