"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:
On 2023/11/24 12:26, Huang, Ying wrote:I think that you can start with the will-it-scale test case you used
"Huang, Ying" <ying.huang@xxxxxxxxx> writes:Yes, I will.
"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:And I think that you need to test ramdisk cases too to verify whether
On 2023/11/23 13:26, Yin Fengwei wrote:Whether is it improvement or reduction?
On 11/23/23 12:12, zhangpeng (AS) wrote:If we add a delay between ptep_modify_prot_start() and ptep_modify_prot_commit(),
On 2023/11/23 9:09, Yin Fengwei wrote:Is this verified by testing or just in theory?
Hi Peng,Thank you for your reply.
On 11/22/23 22:00, Peng Zhang wrote:
From: ZhangPeng <zhangpeng362@xxxxxxxxxx>I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
in an application, leading to an unexpected performance issue [1]. It
is caused by the pte being temporarily cleared during a
read/modify/write update of the pte, e.g., in do_numa_page() or
change_pte_range().

For the data segment of a user-mode program, the global variable area
is a private mapping. After the page cache is loaded, a private
anonymous page is generated once COW is triggered. mlockall() can lock
the COW pages (anonymous pages), but the original file pages cannot be
locked and may be reclaimed. If a global variable (private anon page)
is accessed while vmf->pte has been zeroed by a NUMA fault, a file
page fault is triggered. At that point the original private file page
may already have been reclaimed, and if the page cache is not
available, a major fault is triggered and the file is read from
storage, causing additional overhead.

Fix this by rechecking the pte with the ptl held in filemap_fault()
before triggering a major fault.
[1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@xxxxxxxxxx/
Signed-off-by: ZhangPeng <zhangpeng362@xxxxxxxxxx>
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
mm/filemap.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/mm/filemap.c b/mm/filemap.c
index 71f00539ac00..bb5e6a2790dc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 			mapping_locked = true;
 		}
 	} else {
+		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+						  vmf->address, &vmf->ptl);
+		if (ptep) {
+			/*
+			 * Recheck pte with ptl locked as the pte can be cleared
+			 * temporarily during a read/modify/write update.
+			 */
+			if (unlikely(!pte_none(ptep_get(ptep))))
+				ret = VM_FAULT_NOPAGE;
+			pte_unmap_unlock(ptep, vmf->ptl);
+			if (unlikely(ret))
+				return ret;
+		}
+
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
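[Editor's note: to make the transient-clear window concrete, here is a
minimal userspace analogue. This is an illustrative sketch, not kernel
code and not part of the patch: a pthread mutex stands in for the PTL,
an atomic long stands in for the pte, and the updater thread models the
ptep_modify_prot_start()/ptep_modify_prot_commit() pair in
do_numa_page(). The lockless check can occasionally observe the cleared
entry; the recheck under the lock never can, which is the property the
patch relies on.]

/* race_demo.c (hypothetical name): userspace analogue of the
 * transient-clear window. Build: gcc -O2 -pthread race_demo.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* "PTL" */
static atomic_long slot = 42;	/* "pte": nonzero means present */
static atomic_int stop;

/* Models do_numa_page(): clear the entry, then write it back,
 * all while holding the lock. */
static void *updater(void *arg)
{
	while (!atomic_load(&stop)) {
		pthread_mutex_lock(&lock);
		long old = atomic_exchange(&slot, 0); /* "prot_start()" */
		/* window where the entry is transiently none */
		atomic_store(&slot, old);             /* "prot_commit()" */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	long unlocked_misses = 0, locked_misses = 0;

	pthread_create(&t, NULL, updater, NULL);
	for (int i = 0; i < 10000000; i++) {
		/* lockless check, like testing the pte without the PTL */
		if (atomic_load(&slot) == 0)
			unlocked_misses++;
		/* recheck under the lock, as the patch does with the ptl */
		pthread_mutex_lock(&lock);
		if (atomic_load(&slot) == 0)
			locked_misses++;
		pthread_mutex_unlock(&lock);
	}
	atomic_store(&stop, 1);
	pthread_join(t, NULL);
	printf("lockless check saw a cleared entry %ld times\n",
	       unlocked_misses);
	printf("locked recheck saw a cleared entry %ld times\n",
	       locked_misses);
	return 0;
}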
On 2023/11/23 9:09, Yin Fengwei wrote:

Hi Peng,

I am curious. Did you try not to take the PTL here and just check
whether the PTE is not none? If we don't take the PTL, the current use
case won't trigger this issue either.

There are very limited operations between ptep_modify_prot_start() and
ptep_modify_prot_commit(), while the code path from the page fault to
this check is long. My understanding is that it is very likely the PTE
is not none when the PTE check is done here without holding the PTL
(this is my theory. :)).

On the other side, acquiring/releasing the PTL may bring a performance
impact. It may not be a big deal because of the IO operations in this
code path, but it's better to collect some performance data IMHO.

Regards,
Yin, Fengwei

On 11/23/23 12:12, zhangpeng (AS) wrote:

Thank you for your reply.

Yes, there is a high probability that this issue won't occur without
taking the PTL. In most cases, if we don't take the PTL, this issue
won't be triggered. However, there is still a possibility of
triggering it. The corner case is that task 2 triggers a page fault
while task 1 is between ptep_modify_prot_start() and
ptep_modify_prot_commit() in do_numa_page(). Furthermore, task 2 can
pass the check of whether the PTE is not none before task 1 updates
the PTE in ptep_modify_prot_commit(), because the check is done
without taking the PTL.

On 2023/11/23 13:26, Yin Fengwei wrote:

Is this verified by testing or just in theory?

Regards,
Yin, Fengwei

"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:

If we add a delay between ptep_modify_prot_start() and
ptep_modify_prot_commit(), this issue will also trigger. Without the
delay, we haven't reproduced this problem so far.

We tested the performance of the file private mapping page fault
(page_fault2.c of will-it-scale [1]) and the file shared mapping page
fault (page_fault3.c of will-it-scale). The difference in performance
(in operations per second) before and after the patch is applied is
about 0.7% on an x86 physical machine.

[1] https://github.com/antonblanchard/will-it-scale/tree/master
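[Editor's note: for readers who want to reproduce this kind of
measurement, here is a minimal fault loop in the spirit of
will-it-scale's page_fault2.c, i.e. write-faulting every page of a
private file mapping and unmapping again. This is a hedged sketch, not
the actual benchmark; see the will-it-scale repository above for the
real test, which also runs in parallel across CPUs.]

/* fault_loop.c (hypothetical name): private-file-mapping fault loop.
 * Build: gcc -O2 fault_loop.c -o fault_loop
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define MEMSIZE (128UL * 1024 * 1024)

int main(void)
{
	char template[] = "/tmp/fault_loop.XXXXXX";
	int fd = mkstemp(template);

	if (fd < 0 || ftruncate(fd, MEMSIZE) < 0) {
		perror("setup");
		return 1;
	}
	unlink(template);

	long page = sysconf(_SC_PAGESIZE);
	time_t end = time(NULL) + 5;
	unsigned long ops = 0;

	while (time(NULL) < end) {
		/* MAP_PRIVATE: the first write to each page COWs it,
		 * the "file private mapping page fault" case above. */
		char *p = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		for (unsigned long off = 0; off < MEMSIZE; off += page) {
			p[off] = 1;	/* one write fault per page */
			ops++;
		}
		munmap(p, MEMSIZE);
	}
	printf("page faults taken: %lu in ~5s\n", ops);
	close(fd);
	return 0;
}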
On 2023/11/24 12:26, Huang, Ying wrote:

Is that an improvement or a reduction? And I think that you need to
test ramdisk cases too, to verify whether this will cause a
performance regression and how much.

Best Regards,
Huang, Ying

"zhangpeng (AS)" <zhangpeng362@xxxxxxxxxx> writes:

Yes, I will. In addition, are there any ramdisk test cases
recommended? 😁

"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

I think that you can start with the will-it-scale test case you used
before. And you can try some workload with a large number of major
faults, like file reads with mmap.

Best Regards,
Huang, Ying
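[Editor's note: as a concrete starting point for the "file reads with
mmap" workload suggested above, here is a small sketch. The names are
hypothetical, and using posix_fadvise(POSIX_FADV_DONTNEED) to evict the
file's clean pages between passes is an assumption made so that each
pass takes real major faults, which getrusage() then reports in
ru_majflt.]

/* majfault_loop.c (hypothetical name): generate a stream of major
 * faults by repeatedly dropping the file's page cache and re-reading
 * it through mmap. Build: gcc -O2 majfault_loop.c -o majfault_loop
 * Usage: ./majfault_loop <file>
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	struct stat st;

	if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
		fprintf(stderr, "need a readable, non-empty file\n");
		return 1;
	}

	long page = sysconf(_SC_PAGESIZE);
	volatile unsigned long sum = 0;

	for (int iter = 0; iter < 10; iter++) {
		/* Drop the (clean) page cache for the file so the next
		 * faults must go to storage, i.e. major faults. */
		posix_fadvise(fd, 0, st.st_size, POSIX_FADV_DONTNEED);

		char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED,
			       fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		for (off_t off = 0; off < st.st_size; off += page)
			sum += p[off];	/* read fault on every page */
		munmap(p, st.st_size);
	}

	struct rusage ru;
	getrusage(RUSAGE_SELF, &ru);
	printf("major faults: %ld, minor faults: %ld\n",
	       ru.ru_majflt, ru.ru_minflt);
	close(fd);
	return 0;
}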