Re: [PATCH v2 1/3] mm: fixup pfnmap memory failure handling to use pgoff

From: Miaohe Lin

Date: Tue Dec 16 2025 - 22:10:25 EST


On 2025/12/13 12:47, ankita@xxxxxxxxxx wrote:
> From: Ankit Agrawal <ankita@xxxxxxxxxx>
>
> The memory failure handling implementation for the PFNMAP memory with no
> struct pages is faulty. The VA of the mapping is determined based on
> the PFN. It should instead be based on the file mapping offset.
>
> At the occurrence of poison, the memory_failure_pfn is triggered on the
> poisoned PFN. Introduce a callback function that allows mm to translate
> the PFN to the corresponding file page offset. The kernel module using
> the registration API must implement the callback function and provide the
> translation. The translated value is then used to determine the VA
> information and to send the SIGBUS to the usermode process mapped to
> the poisoned PFN.
>
> The callback is also useful for the driver to be notified of the poisoned
> PFN, which may then track it.
>
> Fixes: 2ec41967189c ("mm: handle poisoning of pfn without struct pages")
>
> Suggested-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Signed-off-by: Ankit Agrawal <ankita@xxxxxxxxxx>

Thanks for your patch.

> ---
> include/linux/memory-failure.h | 2 ++
> mm/memory-failure.c | 29 ++++++++++++++++++-----------
> 2 files changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/memory-failure.h b/include/linux/memory-failure.h
> index bc326503d2d2..7b5e11cf905f 100644
> --- a/include/linux/memory-failure.h
> +++ b/include/linux/memory-failure.h
> @@ -9,6 +9,8 @@ struct pfn_address_space;
> struct pfn_address_space {
> struct interval_tree_node node;
> struct address_space *mapping;
> + int (*pfn_to_vma_pgoff)(struct vm_area_struct *vma,
> + unsigned long pfn, pgoff_t *pgoff);
> };
>
> int register_pfn_address_space(struct pfn_address_space *pfn_space);
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index fbc5a01260c8..c80c2907da33 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2161,6 +2161,9 @@ int register_pfn_address_space(struct pfn_address_space *pfn_space)
> {
> guard(mutex)(&pfn_space_lock);
>
> + if (!pfn_space->pfn_to_vma_pgoff)
> + return -EINVAL;
> +
> if (interval_tree_iter_first(&pfn_space_itree,
> pfn_space->node.start,
> pfn_space->node.last))
> @@ -2183,10 +2186,10 @@ void unregister_pfn_address_space(struct pfn_address_space *pfn_space)
> }
> EXPORT_SYMBOL_GPL(unregister_pfn_address_space);
>
> -static void add_to_kill_pfn(struct task_struct *tsk,
> - struct vm_area_struct *vma,
> - struct list_head *to_kill,
> - unsigned long pfn)
> +static void add_to_kill_pgoff(struct task_struct *tsk,
> + struct vm_area_struct *vma,
> + struct list_head *to_kill,
> + pgoff_t pgoff)
> {
> struct to_kill *tk;
>
> @@ -2197,12 +2200,12 @@ static void add_to_kill_pfn(struct task_struct *tsk,
> }
>
> /* Check for pgoff not backed by struct page */
> - tk->addr = vma_address(vma, pfn, 1);
> + tk->addr = vma_address(vma, pgoff, 1);
> tk->size_shift = PAGE_SHIFT;
>
> if (tk->addr == -EFAULT)
> pr_info("Unable to find address %lx in %s\n",
> - pfn, tsk->comm);
> + pgoff, tsk->comm);
>
> get_task_struct(tsk);
> tk->tsk = tsk;
> @@ -2212,11 +2215,12 @@ static void add_to_kill_pfn(struct task_struct *tsk,
> /*
> * Collect processes when the error hit a PFN not backed by struct page.
> */
> -static void collect_procs_pfn(struct address_space *mapping,
> +static void collect_procs_pfn(struct pfn_address_space *pfn_space,
> unsigned long pfn, struct list_head *to_kill)
> {
> struct vm_area_struct *vma;
> struct task_struct *tsk;
> + struct address_space *mapping = pfn_space->mapping;
>
> i_mmap_lock_read(mapping);
> rcu_read_lock();
> @@ -2226,9 +2230,12 @@ static void collect_procs_pfn(struct address_space *mapping,
> t = task_early_kill(tsk, true);
> if (!t)
> continue;
> - vma_interval_tree_foreach(vma, &mapping->i_mmap, pfn, pfn) {
> - if (vma->vm_mm == t->mm)
> - add_to_kill_pfn(t, vma, to_kill, pfn);
> + vma_interval_tree_foreach(vma, &mapping->i_mmap, 0, ULONG_MAX) {
> + pgoff_t pgoff;

IIUC, all VMAs of the mapping will now be traversed over the full
[0, ULONG_MAX] range to find the final pgoff. This might not be a good
idea, because the RCU read lock is held and the traversal might take a
really long time. Or am I missing something?

Thanks.