Re: [PATCH v1] mm/contpte: Optimize loop to reduce redundant operations
From: Lance Yang
Date: Sat Apr 12 2025 - 01:06:01 EST
On Sat, Apr 12, 2025 at 1:30 AM Dev Jain <dev.jain@xxxxxxx> wrote:
>
> +others
>
> On 11/04/25 2:55 am, Barry Song wrote:
> > On Mon, Apr 7, 2025 at 9:23 PM Xavier <xavier_qy@xxxxxxx> wrote:
> >>
> >> This commit optimizes the contpte_ptep_get function by adding early
> >> termination logic. It checks if the dirty and young bits of orig_pte
> >> are already set and skips redundant bit-setting operations during
> >> the loop. This reduces unnecessary iterations and improves performance.
> >>
> >> Signed-off-by: Xavier <xavier_qy@xxxxxxx>
> >> ---
> >> arch/arm64/mm/contpte.c | 13 +++++++++++--
> >> 1 file changed, 11 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> >> index bcac4f55f9c1..ca15d8f52d14 100644
> >> --- a/arch/arm64/mm/contpte.c
> >> +++ b/arch/arm64/mm/contpte.c
> >> @@ -163,17 +163,26 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
> >>
> >> pte_t pte;
> >> int i;
> >> + bool dirty = false;
> >> + bool young = false;
> >>
> >> ptep = contpte_align_down(ptep);
> >>
> >> for (i = 0; i < CONT_PTES; i++, ptep++) {
> >> pte = __ptep_get(ptep);
> >>
> >> - if (pte_dirty(pte))
> >> + if (!dirty && pte_dirty(pte)) {
> >> + dirty = true;
> >> orig_pte = pte_mkdirty(orig_pte);
> >> + }
> >>
> >> - if (pte_young(pte))
> >> + if (!young && pte_young(pte)) {
> >> + young = true;
> >> orig_pte = pte_mkyoung(orig_pte);
> >> + }
> >> +
> >> + if (dirty && young)
> >> + break;
> >
> > This kind of optimization is always tricky. Dev previously tried a similar
> > approach to reduce the loop count, but it ended up causing performance
> > degradation:
> > https://lore.kernel.org/linux-mm/20240913091902.1160520-1-dev.jain@xxxxxxx/
> >
> > So we may need actual data to validate this idea.
>
> The original v2 patch does not work, so I changed it to the following:
>
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index bcac4f55f9c1..db0ad38601db 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -152,6 +152,16 @@ void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(__contpte_try_unfold);
>
> +#define CHECK_CONTPTE_FLAG(start, ptep, orig_pte, flag) \
> + int _start; \
> + pte_t *_ptep = ptep; \
> + for (_start = start; _start < CONT_PTES; _start++, _ptep++) { \
> + if (pte_##flag(__ptep_get(_ptep))) { \
> + orig_pte = pte_mk##flag(orig_pte); \
> + break; \
> + } \
> + }
> +
> pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
> {
> /*
> @@ -169,11 +179,17 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
> for (i = 0; i < CONT_PTES; i++, ptep++) {
> pte = __ptep_get(ptep);
>
> - if (pte_dirty(pte))
> + if (pte_dirty(pte)) {
> orig_pte = pte_mkdirty(orig_pte);
> + CHECK_CONTPTE_FLAG(i, ptep, orig_pte, young);
> + break;
> + }
>
> - if (pte_young(pte))
> + if (pte_young(pte)) {
> orig_pte = pte_mkyoung(orig_pte);
> + CHECK_CONTPTE_FLAG(i, ptep, orig_pte, dirty);
> + break;
> + }
> }
>
> return orig_pte;
>
> Some rudimentary testing with micromm reveals that this may be
> *slightly* faster. I cannot say for sure yet.
Yep, this change works as expected, IIUC.
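As an aside, if the early-exit scan sticks around, a small static helper
might read more naturally than the macro. Just a rough, untested sketch
(the helper name and signature are made up):

    /*
     * Hypothetical replacement for CHECK_CONTPTE_FLAG: scan the rest of
     * the contig block for the one flag we haven't accumulated yet.
     * scan_dirty selects which flag to look for.
     */
    static pte_t contpte_scan_flag(pte_t *ptep, int start, pte_t orig_pte,
                                   bool scan_dirty)
    {
            int i;

            for (i = start; i < CONT_PTES; i++, ptep++) {
                    pte_t pte = __ptep_get(ptep);

                    if (scan_dirty && pte_dirty(pte))
                            return pte_mkdirty(orig_pte);
                    if (!scan_dirty && pte_young(pte))
                            return pte_mkyoung(orig_pte);
            }

            return orig_pte;
    }

A helper would also sidestep the usual macro pitfalls (scoping of
_start/_ptep, multiple evaluation of arguments).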
However, I'm still wondering whether the added complexity is worth it
for such a slight/negligible performance gain. That said, if we had
solid numbers/data to back it up, all doubts would disappear ;) A rough
sketch of one way to collect them is below.
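Something like this, untested, and assuming an arm64 box with 4K pages
and the 64K mTHP size enabled
(/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled) so the
region actually ends up contpte-mapped. Reading /proc/self/smaps walks
the PTEs via ptep_get(), which should funnel into contpte_ptep_get():

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    #define SZ (512UL << 20) /* 512M of anonymous memory */

    int main(void)
    {
            char buf[1 << 16];
            struct timespec t0, t1;
            char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            memset(p, 1, SZ); /* fault everything in */

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < 100; i++) {
                    /* each pass forces a full PTE walk of the region */
                    int fd = open("/proc/self/smaps", O_RDONLY);

                    while (read(fd, buf, sizeof(buf)) > 0)
                            ;
                    close(fd);
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            printf("%.3fs\n", (t1.tv_sec - t0.tv_sec) +
                              (t1.tv_nsec - t0.tv_nsec) / 1e9);
            return 0;
    }

Comparing before/after timings of that loop, plus perf to confirm the
time really is spent in contpte_ptep_get(), would make the numbers much
more convincing.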
Thanks,
Lance
>
> >
> >> }
> >>
> >> return orig_pte;
> >> --
> >> 2.34.1
> >>
> >
> > Thanks
> > Barry
> >
>