Re: [PATCH] mm: optimize memblock_add_range() for improved performance

From: Stephen Eta Zhou
Date: Fri Feb 07 2025 - 11:31:12 EST


Apologies for the multiple submissions

Hi Mike,

I apologize for the duplicate copies of my previous email; a formatting issue caused the message to be sent repeatedly. I am sorry for any inconvenience this caused.

Please consider this email as the main one. If you have already seen the earlier submissions, kindly disregard them.

Thank you for your understanding, and I appreciate your patience.

Best regards,
Stephen

-----Original Message-----
From: Stephen Eta Zhou <stephen.eta.zhou@xxxxxxxxxxx>
Sent: February 8, 2025 0:18
To: 'Mike Rapoport' <rppt@xxxxxxxxxx>
Cc: akpm@xxxxxxxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: [PATCH] mm: optimize memblock_add_range() for improved performance

Hi Mike,

Thank you for your feedback and insights. I fully understand your concerns regarding the fragility of the code in `memblock_add_range()` and the challenges in handling edge cases. I also acknowledge your point that while the CPU cycle reductions are measurable, they may not fully capture the most important factor — the boot time speedup.

Regarding the performance metrics, I want to clarify that the main goal of my optimization was to improve the boot time during the early stages of kernel initialization. While reducing the CPU cycles for `memblock_add_range()` is a positive outcome, the real benefit is in reducing kernel initialization time, particularly before the buddy system takes over. I understand that the CPU cycle reduction alone doesn't address the critical question of boot time speedup, and I will work on measuring this more directly.

To address the concern about real hardware and corner cases, I plan to conduct further testing on actual hardware with different memory configurations to ensure the robustness of the patch. This will help ensure the stability and performance benefits are consistent across various setups.

I also plan to increase the testing coverage for edge cases and include more robust fallback mechanisms to address the potential fragility mentioned. I want to make sure the changes handle all possible scenarios gracefully.

In addition, I will focus on measuring boot time more directly using tools like `bootchart` and share the results, comparing the boot times before and after the optimization to demonstrate the actual impact on startup performance.

Given the concerns raised, I would greatly appreciate your guidance on whether you think it's worthwhile for me to continue with this approach. Should I proceed with further refinements and testing, or would you recommend a different direction for optimization? Your input will be invaluable in ensuring this patch meets both performance and stability goals.

Thank you again for your careful review, and I look forward to your thoughts.

Best regards,
Stephen

-----Original Message-----
From: Mike Rapoport <rppt@xxxxxxxxxx>
Sent: February 7, 2025 22:59
To: Stephen Eta Zhou <stephen.eta.zhou@xxxxxxxxxxx>
Cc: akpm@xxxxxxxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: [PATCH] mm: optimize memblock_add_range() for improved performance

Hi Stephen,

On Wed, Feb 05, 2025 at 05:55:50AM +0000, Stephen Eta Zhou wrote:
> Hi Mike Rapoport, Andrew Morton,

> I have recently been researching the mm subsystem of the Linux kernel,
> and I came across the memblock_add_range function, which piqued my
> interest. I found the implementation approach quite interesting, so I
> analyzed it and identified some areas for optimization. Starting with
> this part of the code:
>
> if (type->cnt * 2 + 1 <= type->max)
>       insert = true;
> The idea here is good, but it has a flaw. The condition is rather
> restrictive, and it cannot be taken initially. Moreover, it only holds
> when the array still has room for 2 * cnt + 1 regions; if there is
> enough free space for the new range but 2 * cnt + 1 is not satisfied,
> the insertion still has to be performed in two passes.
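
For reference, the structure under discussion is the count-then-insert
flow of memblock_add_range(). Below is a heavily abridged sketch, not
the exact upstream code: the walk over existing regions, overlap
splitting, NUMA node ids, flags, and the empty-array special case are
all elided, leaving only the two-pass shape and the condition quoted
above.

/*
 * Abridged sketch of the two-pass flow in memblock_add_range()
 * (mm/memblock.c).  Only the count-then-insert structure is shown;
 * the region walk itself is elided.
 */
static int memblock_add_range_sketch(struct memblock_type *type,
				     phys_addr_t base, phys_addr_t size)
{
	bool insert = false;
	int nr_new;

	/*
	 * Worst case: the new range overlaps every existing region and can
	 * need up to cnt + 1 extra entries.  Only when the array already
	 * has room for 2 * cnt + 1 regions is it safe to insert on the
	 * first pass; otherwise the first pass merely counts.
	 */
	if (type->cnt * 2 + 1 <= type->max)
		insert = true;

repeat:
	nr_new = 0;

	/*
	 * Walk the existing regions; depending on @insert, either count
	 * how many entries the new range needs or actually insert them.
	 * (Walk elided in this sketch.)
	 */

	if (!insert) {
		/* First pass only counted: grow the array, then repeat
		 * the walk with insertion enabled. */
		while (type->cnt + nr_new > type->max)
			if (memblock_double_array(type, base, size) < 0)
				return -ENOMEM;
		insert = true;
		goto repeat;
	}

	/* Insertions done; adjacent compatible regions get merged here. */
	return 0;
}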

The code in memblock_add_range() is very fragile, and many attempts to remove the second pass that looked correct at first glance failed for some corner case.

Unfortunately, it's impossible to capture all possible memory configurations and reservations in the memblock test suite, so even if it passes, there is a chance the kernel will fail to boot on actual hardware.

> - Before the patch:
>   - Average: 1.22%
>   - Max: 1.63%, Min: 0.93%
>
> - After the patch:
>   - Average: 0.69%
>   - Max: 0.94%, Min: 0.50%
>

These numbers do not represent what's actually interesting: the boot time speedup.

--
Sincerely yours,
Mike.