Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
From: Huang, Ying
Date: Thu Jan 13 2022 - 07:07:03 EST
Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
> On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
>> Hi, Peter,
>>
>> Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
>>
>> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>> >> for use like normal RAM"), PMEM can be used as cost-effective
>> >> volatile memory in separate NUMA nodes. In a typical memory tiering
>> >> system, there are CPUs, DRAM, and PMEM in each physical NUMA node.
>> >> The CPUs and the DRAM will be put in one logical node, while the
>> >> PMEM will be put in another (faked) logical node.
>> >
>> > So what does a system like that actually look like, SLIT table wise, and
>> > how does that affect init_numa_topology_type()?
>>
>> The SLIT table is as follows,
>>
>> [000h 0000 4] Signature : "SLIT" [System Locality Information Table]
>> [004h 0004 4] Table Length : 0000042C
>> [008h 0008 1] Revision : 01
>> [009h 0009 1] Checksum : 59
>> [00Ah 0010 6] Oem ID : "INTEL "
>> [010h 0016 8] Oem Table ID : "S2600WF "
>> [018h 0024 4] Oem Revision : 00000001
>> [01Ch 0028 4] Asl Compiler ID : "INTL"
>> [020h 0032 4] Asl Compiler Revision : 20091013
>>
>> [024h 0036 8] Localities : 0000000000000004
>> [02Ch 0044 4] Locality 0 : 0A 15 11 1C
>> [030h 0048 4] Locality 1 : 15 0A 1C 11
>> [034h 0052 4] Locality 2 : 11 1C 0A 1C
>> [038h 0056 4] Locality 3 : 1C 11 1C 0A
>>
>> The `numactl -H` output is as follows,
>>
>> available: 4 nodes (0-3)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
>> node 0 size: 64136 MB
>> node 0 free: 5981 MB
>> node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
>> node 1 size: 64466 MB
>> node 1 free: 10415 MB
>> node 2 cpus:
>> node 2 size: 253952 MB
>> node 2 free: 253920 MB
>> node 3 cpus:
>> node 3 size: 253952 MB
>> node 3 free: 253951 MB
>> node distances:
>> node 0 1 2 3
>> 0: 10 21 17 28
>> 1: 21 10 28 17
>> 2: 17 28 10 28
>> 3: 28 17 28 10
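(For reference: the SLIT locality bytes in the ACPI dump above are
hexadecimal, and they decode to exactly this distance matrix: 0x0A = 10,
0x11 = 17, 0x15 = 21, 0x1C = 28. A minimal user-space C sketch of the
decoding, with the byte values hard-coded from the dump above:)

#include <stdio.h>

/* SLIT locality bytes copied from the ACPI dump above (hexadecimal). */
static const unsigned char slit[4][4] = {
	{ 0x0A, 0x15, 0x11, 0x1C },	/* node 0: 10 21 17 28 */
	{ 0x15, 0x0A, 0x1C, 0x11 },	/* node 1: 21 10 28 17 */
	{ 0x11, 0x1C, 0x0A, 0x1C },	/* node 2: 17 28 10 28 */
	{ 0x1C, 0x11, 0x1C, 0x0A },	/* node 3: 28 17 28 10 */
};

int main(void)
{
	/* Print the matrix in decimal; it matches `numactl -H` exactly. */
	for (int i = 0; i < 4; i++) {
		for (int j = 0; j < 4; j++)
			printf(" %2d", slit[i][j]);
		printf("\n");
	}
	return 0;
}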
>>
>> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
>>
>> Node 0 and node 1 are onlined during boot, while the PMEM nodes,
>> that is, node 2 and node 3, are onlined later, as shown in the
>> following dmesg snippet.
>
> But how? sched_init_numa() scans the *whole* SLIT table to determine
> nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
> should find 4 distinct distance values and end up not selecting
> NUMA_DIRECT.
>
> Similarly for the other types it uses for_each_online_node(), which
> would include the pmem nodes once they've been onlined, but I'm thinking
> we explicitly want to skip CPU-less nodes in that iteration.
I used the debug patch below and got the following log in dmesg:
[ 5.394577][ T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28
I found that I had forgotten another caller of init_numa_topology_type()
that runs during hotplug. I will add another printk() to show it. Sorry
about that.
Best Regards,
Huang, Ying
-------------------------------8<------------------------------------
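(A minimal sketch of the debug hunk that would produce the log line
above; the exact placement in kernel/sched/topology.c is assumed:)

--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ ... @@ void sched_init_numa(void)
 	init_numa_topology_type();
+	/* Debug: report the chosen type, the number of distance levels,
+	 * and the maximum node distance. */
+	printk("sched_numa_topology_type: %d, levels: %d, max_distance: %d\n",
+	       sched_numa_topology_type, sched_domains_numa_levels,
+	       sched_max_numa_distance);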