Re: [PATCH v4 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc
From: Muchun Song
Date: Mon Jan 22 2024 - 04:53:07 EST
> On Jan 22, 2024, at 17:14, Gang Li <gang.li@xxxxxxxxx> wrote:
>
> On 2024/1/22 14:16, Muchun Song wrote:
>> On 2024/1/18 20:39, Gang Li wrote:
>>> static struct folio *alloc_pool_huge_folio(struct hstate *h,
>>> nodemask_t *nodes_allowed,
>>> - nodemask_t *node_alloc_noretry)
>>> + nodemask_t *node_alloc_noretry,
>>> + int *next_node)
>>> {
>>> gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
>>> int nr_nodes, node;
>>> - for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
>>> + for_each_node_mask_to_alloc(next_node, nr_nodes, node, nodes_allowed) {
>> A small question here: why not pass h->next_nid_to_alloc to
>> for_each_node_mask_to_alloc()? What's the purpose of the new
>> *next_node parameter of alloc_pool_huge_folio()?
>> Thanks.
>
> In the hugetlb_alloc_node->alloc_pool_huge_folio path, hugetlb is
> initialized in parallel at boot time, so each thread needs its own
> next_nid and cannot use the global h->next_nid_to_alloc. That is why
> the extra parameter is added.
Yes. When I read your patch 6, I realized this.
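
Just to spell out my understanding of the boot-time path (this is only
a sketch based on your description; hugetlb_alloc_node comes from your
patch 6 and the exact signature there may differ): each worker keeps
the cursor on its own stack and hands it down to alloc_pool_huge_folio(),
so parallel workers never touch the shared h->next_nid_to_alloc:

	/*
	 * Per-thread boot-time allocation: each worker round-robins over
	 * the allowed nodes with its own cursor instead of the shared
	 * h->next_nid_to_alloc, so parallel workers do not race on it.
	 */
	static void __init hugetlb_alloc_node(unsigned long start,
					      unsigned long end, void *arg)
	{
		struct hstate *h = arg;
		nodemask_t node_alloc_noretry;
		int next_node = first_online_node;	/* thread-local cursor */
		unsigned long i;

		nodes_clear(node_alloc_noretry);

		for (i = start; i < end; i++) {
			struct folio *folio;

			folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
						      &node_alloc_noretry,
						      &next_node);
			if (!folio)
				break;
			/* ... add the folio to the pool ... */
		}
	}
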
>
> And h->next_nid_to_alloc in the set_max_huge_pages->alloc_pool_huge_folio
> path cannot be removed: if the user calls set_max_huge_pages frequently
> and only adds one page at a time, using a local variable would result in
> every request being made on the same node.
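
Right. So the sysfs/proc resize path keeps passing the persistent
per-hstate field, and the round-robin position survives across
successive set_max_huge_pages() calls. Roughly (my paraphrase of the
call site in this patch, not a verbatim quote):

	/*
	 * In set_max_huge_pages(): keep using the per-hstate cursor so
	 * that repeated small resize requests (e.g. adding one page at
	 * a time) keep rotating across nodes instead of always hitting
	 * the first node in the mask.
	 */
	folio = alloc_pool_huge_folio(h, nodes_allowed, node_alloc_noretry,
				      &h->next_nid_to_alloc);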