Re: [PATCH] driver core: ensure a device has valid node id in device_add()

From: Yunsheng Lin
Date: Tue Sep 10 2019 - 06:41:33 EST

On 2019/9/10 15:24, Michal Hocko wrote:
> Our emails crossed, sorry about that.
> On Tue 10-09-19 15:08:20, Yunsheng Lin wrote:
>> On 2019/9/10 2:50, Michal Hocko wrote:
>>> On Mon 09-09-19 14:04:23, Yunsheng Lin wrote:
> [...]
>>>> Even if a device's numa node is not specified, the device really
>>>> does belong to a node.
>>> What does this mean?
>> It means someone needs to guess the node id if the node is not
>> specified.
> I have asked about this in other email so let's not cross the
> communication even more.

Let me just answer your question here.

Besides the page allocator, the CPU allocator or scheduler may need
to know the node id to figure out which CPU is the best one to run
on, as in workqueue_select_cpu_near().

>>>> This patch sets the device node to node 0 in device_add() if the
>>>> device's node id is not specified and it either has no parent
>>>> device, or the parent device also does not have a valid node id.
>>> Why is node 0 special? I have seen platforms with node 0 missing or
>>> being memory less. The changelog also lacks an actual problem
>> by node 0 missing, how do we know if node 0 is missing?
>> by node_online(0)?
> No, this is a dynamic situation. Node might get offline via hotremove.
> In most cases it wouldn't because there will likely be some kernel
> memory on node0 but you cannot really make any assumptions here. Besides
> that nothing should really care.

From the node checking:
'(unsigned)node_id >= nr_node_ids'

If nr_node_ids > 0, then node 0 is a valid node according to the
above check. Is there a function to check if a node is missing?

Also, I am not sure I understand "nothing should really care".
Does it mean a device can still be assigned a numa node that is
missing, just with some performance degradation?

>>> descripton. Why do we even care about NUMA_NO_NODE? E.g. the page
>>> allocator interprets NUMA_NO_NODE as the closest node with a memory.
>>> And by closest it really means to the CPU which is performing the
>>> allocation.
>> Yes, I should have mentioned that in the commit log.
>> I mentioned the below in the RFC, but somehow deleted when sending
>> V1:
>> "There may be explicit handling out there relying on NUMA_NO_NODE,
>> like in nvme_probe()."
> This code, and other doing similar things, is very likely bogus. Just
> look at what the code does. It takes the node affinity from the dev and
> uses it for an allocation. So far so good. But it tries to be clever
> and special cases NUMA_NO_NODE to be first_node. So now the allocator
> has used a proper fallback to the nearest node with memory for the
> current CPU that is executing the code while dev will point to a
> first_node which might be a completely different one. See the
> discrepancy?

Do you mean letting kzalloc_node() handle the NUMA_NO_NODE case, i.e.
if the node id is NUMA_NO_NODE, kzalloc_node() treats it as
numa_mem_id()?

If yes, the above makes more sense.