Re: [PATCH v8 03/12] x86/mm: consolidate full flush threshold decision

From: Peter Zijlstra
Date: Wed Feb 05 2025 - 07:21:06 EST


On Tue, Feb 04, 2025 at 08:39:52PM -0500, Rik van Riel wrote:

> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6cf881a942bb..02e1f5c5bca3 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
>  	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
>  #endif
>
> -	info->start = start;
> -	info->end = end;
> +	/*
> +	 * Round the start and end addresses to the page size specified
> +	 * by the stride shift. This ensures partial pages at the end of
> +	 * a range get fully invalidated.
> +	 */
> +	info->start = round_down(start, 1 << stride_shift);
> +	info->end = round_up(end, 1 << stride_shift);
>  	info->mm = mm;
>  	info->stride_shift = stride_shift;
>  	info->freed_tables = freed_tables;

Rather than doing this, should we not fix whatever dodgy users are
feeding us non-page-aligned addresses for invalidation?