Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed
From: Gu Zheng
Date: Mon Mar 30 2015 - 06:08:16 EST
Hi Kame-san,
On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote:
> On 2015/03/27 0:18, Tejun Heo wrote:
>> Hello,
>>
>> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote:
>>> wq generates the numa affinity (pool->node) for all the possible cpu's
>>> per cpu workqueue at init stage, that means the affinity of currently un-present
>>> ones' may be incorrect, so we need to update the pool->node for the new added cpu
>>> to the correct node when preparing online, otherwise it will try to create worker
>>> on invalid node if node hotplug occurred.
>>
>> If the mapping is gonna be static once the cpus show up, any chance we
>> can initialize that for all possible cpus during boot?
>>
>
> I think the kernel can define all possible
>
> cpuid <-> lapicid <-> pxm <-> nodeid
>
> mapping at boot with using firmware table information.
Could you explain more?
>
> One concern is current x86 logic for memory-less node v.s. memory hotplug.
> (as I explained before)
>
> My idea is
> step1. build all possible mapping at boot cpuid <-> apicid <-> pxm <-> node id at boot.
>
> But this may be overwritten by x86's memory less node logic. So,
> step2. check node is online or not before calling kmalloc. If offline, use -1.
> rather than updating workqueue's attribute.
>
> Thanks,
> -Kame
>
> .
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/