Re: [PATCH v1 2/7] mm: make zap_pte_range() handle full within-PMD range
From: Jann Horn
Date: Thu Oct 17 2024 - 14:07:45 EST
On Thu, Oct 17, 2024 at 11:48 AM Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx> wrote:
> In preparation for reclaiming empty PTE pages, this commit first makes
> zap_pte_range() handle the full within-PMD range, so that we can more
> easily detect and free PTE pages in this function in subsequent commits.
I think your patch causes some unintended difference in behavior:
> Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> ---
> mm/memory.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index caa6ed0a7fe5b..fd57c0f49fce2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1602,6 +1602,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> swp_entry_t entry;
> int nr;
>
> +retry:
This "retry" label is below the line "bool force_flush = false,
force_break = false;", so I think after force_break is set once and
you go through the retry path, every subsequent present PTE will again
bail out and retry. I think that doesn't lead to anything bad, but it
seems unintended.
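
To make the control flow concrete, here is a minimal userspace sketch
(not the actual zap_pte_range() code; the loop body and the "every 4th
PTE" trigger are made up) of the pattern I mean: because the flag is
initialized above the label, "goto retry" never clears it, so once it is
set every later iteration breaks out again:

#include <stdbool.h>
#include <stdio.h>

/*
 * Minimal sketch, not the real zap_pte_range(): force_break is
 * initialized above the label, so "goto retry" does not reset it.
 */
static void zap_sketch(int nr_ptes)
{
	bool force_break = false;	/* initialized once, above the label */
	int pte = 0;

retry:
	for (; pte < nr_ptes; pte++) {
		/* stand-in: pretend every 4th PTE requests a break */
		if (pte % 4 == 3)
			force_break = true;

		if (force_break) {
			/*
			 * force_break is still true on later passes, so
			 * every subsequent "present PTE" breaks out and
			 * retries as well.
			 */
			printf("break+retry at pte %d\n", pte);
			pte++;
			goto retry;
		}
	}
}

int main(void)
{
	zap_sketch(16);
	return 0;
}

Run as-is, this prints a "break+retry" line for every PTE after the
first one that set the flag, which is the degradation I'm describing.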