Re: [PATCH v19 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs
From: Muhammad Usama Anjum
Date: Tue Jun 20 2023 - 07:20:11 EST
On 6/19/23 11:06 AM, Muhammad Usama Anjum wrote:
> On 6/17/23 11:39 AM, Andrei Vagin wrote:
>> On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
>>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>>> + unsigned long end, struct mm_walk *walk)
>>> +{
>>> + bool is_written, flush = false, is_interesting = true;
>>> + struct pagemap_scan_private *p = walk->private;
>>> + struct vm_area_struct *vma = walk->vma;
>>> + unsigned long bitmap, addr = end;
>>> + pte_t *pte, *orig_pte, ptent;
>>> + spinlock_t *ptl;
>>> + int ret = 0;
>>> +
>>> + arch_enter_lazy_mmu_mode();
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + ptl = pmd_trans_huge_lock(pmd, vma);
>>> + if (ptl) {
>>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>>> +
>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>> + n_pages = p->max_pages - p->found_pages;
>>> +
>>> + is_written = !is_pmd_uffd_wp(*pmd);
>>> +
>>> + /*
>>> + * Break the huge page into small pages if the WP operation
>>> + * needs to be performed on only a portion of the huge page.
>>> + */
>>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>> + spin_unlock(ptl);
>>> +
>>> + split_huge_pmd(vma, pmd, start);
>>> + goto process_smaller_pages;
>>> + }
>>> +
>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>>> +
>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>> + if (is_interesting)
>>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>> + }
>>> +
>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>> + ret >= 0) {
>>> + make_uffd_wp_pmd(vma, start, pmd);
>>> + flush_tlb_range(vma, start, end);
>>> + }
>>> +
>>> + spin_unlock(ptl);
>>> +
>>> + arch_leave_lazy_mmu_mode();
>>> + return ret;
>>> + }
>>> +
>>> +process_smaller_pages:
>>> +#endif
>>> +
>>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>>> + if (!pte) {
>>
>> Do we need to unlock ptl here?
>>
>> spin_unlock(ptl);
> No, please look at these recently merged patches:
> https://lore.kernel.org/all/c1c9a74a-bc5b-15ea-e5d2-8ec34bc921d@xxxxxxxxxx
>
>>
>>> + walk->action = ACTION_AGAIN;
>>> + return 0;
>>> + }
>>> +
>>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>>> + ptent = ptep_get(pte);
>>> + is_written = !is_pte_uffd_wp(ptent);
>>> +
>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>> + pte_present(ptent), is_swap_pte(ptent));
>>
>> The vma->vm_file check isn't correct in this case. You can look at
>> when pte_to_pagemap_entry() sets PM_FILE. This flag is used to
>> distinguish pages that have a file backing store from anonymous pages.
> I'll update.
>
>>
>> I was trying to integrate this new interface into CRIU, and I found
>> one more thing that is required. We need to detect zero pages.
Can we skip adding this zero page flag for now, as we are already at v20?
If you have time to review and test the patches, then something can be done.
> Should we name it ZERO_PFN_PRESENT_PAGE to be exact, or something else?
>
>>
>> It should look something like this:
>>
>> #define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
>> ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
>>
>>
>> bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
>> pte_present(ptent), is_swap_pte(ptent),
>> pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
> Okay. Can you please confirm my assumptions:
> - A THP cannot be file-backed. (PM_FILE isn't being set for the THP case)
> - A hole is also not file-backed.
>
> A hole isn't present in memory, so its pfn would be zero. But as it isn't
> present, it shouldn't be reported as a zero page, right? For a hole:
>
> PM_SCAN_FLAGS(false, false, false, false, false)
Please let me know the results of the testing you have been doing.
>
>
>>
>> Thanks,
>> Andrei
>
--
BR,
Muhammad Usama Anjum