Re: [PATCH v3 18/34] parisc: Implement the new page table range API

From: John David Anglin
Date: Thu Mar 02 2023 - 11:46:37 EST


On 2023-02-28 4:37 p.m., Matthew Wilcox (Oracle) wrote:
> Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio()
> and flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag
> from being per-page to per-folio.
I have tested this change on rp3440 at mainline commit e492250d5252635b6c97d52eddf2792ec26f1ec1
and c8000 at mainline commit ee3f96b164688dae21e2466a57f2e806b64e8a37.

So far, I haven't seen any issues on c8000.  On rp3440, I saw the following:

_swap_info_get: Unused swap offset entry 00000320
BUG: Bad page map in process buildd  pte:00032100 pmd:003606c3
addr:0000000000482000 vm_flags:00100077 anon_vma:0000000066f61340 mapping:0000000000000000 index:482
file:(null) fault:0x0 mmap:0x0 read_folio:0x0
CPU: 0 PID: 6813 Comm: buildd Not tainted 6.2.0+ #1
Hardware name: 9000/800/rp3440
Backtrace:
 [<000000004020af50>] show_stack+0x70/0x90
 [<0000000040b7d408>] dump_stack_lvl+0xd8/0x128
 [<0000000040b7d48c>] dump_stack+0x34/0x48
 [<00000000404513a4>] print_bad_pte+0x24c/0x318
 [<00000000404560dc>] zap_pte_range+0x8d4/0x958
 [<0000000040456398>] unmap_page_range+0x1d8/0x490
 [<000000004045681c>] unmap_vmas+0x10c/0x1a8
 [<0000000040466330>] exit_mmap+0x198/0x4a0
 [<0000000040235cbc>] mmput+0x114/0x2a8
 [<0000000040244e90>] do_exit+0x4e0/0xc68
 [<0000000040245938>] do_group_exit+0x68/0x128
 [<000000004025967c>] get_signal+0xae4/0xb60
 [<000000004021a570>] do_signal+0x50/0x228
 [<000000004021ab38>] do_notify_resume+0x68/0x150
 [<00000000402030b4>] intr_check_sig+0x38/0x3c

Disabling lock debugging due to kernel taint
_swap_info_get: Unused swap offset entry 000003a9
BUG: Bad page map in process buildd  pte:0003a940 pmd:003606c3
addr:0000000000523000 vm_flags:00100077 anon_vma:0000000066f61340 mapping:0000000000000000 index:523
file:(null) fault:0x0 mmap:0x0 read_folio:0x0
CPU: 2 PID: 6813 Comm: buildd Tainted: G    B              6.2.0+ #1
Hardware name: 9000/800/rp3440
Backtrace:
 [<000000004020af50>] show_stack+0x70/0x90
 [<0000000040b7d408>] dump_stack_lvl+0xd8/0x128
 [<0000000040b7d48c>] dump_stack+0x34/0x48
 [<00000000404513a4>] print_bad_pte+0x24c/0x318
 [<00000000404560dc>] zap_pte_range+0x8d4/0x958
 [<0000000040456398>] unmap_page_range+0x1d8/0x490
 [<000000004045681c>] unmap_vmas+0x10c/0x1a8
 [<0000000040466330>] exit_mmap+0x198/0x4a0
 [<0000000040235cbc>] mmput+0x114/0x2a8
 [<0000000040244e90>] do_exit+0x4e0/0xc68
 [<0000000040245938>] do_group_exit+0x68/0x128
 [<000000004025967c>] get_signal+0xae4/0xb60
 [<000000004021a570>] do_signal+0x50/0x228
 [<000000004021ab38>] do_notify_resume+0x68/0x150
 [<00000000402030b4>] intr_check_sig+0x38/0x3c
[...]
pagefault_out_of_memory: 1158973 callbacks suppressed
Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF

Rebooted rp3440.  Since then, I haven't seen any more problems.

Dave

--
John David Anglin dave.anglin@xxxxxxxx