Re: [PATCH 6/6] sched/numa: Complete scanning of inactive VMAs when there is no alternative

From: Raghavendra K T
Date: Fri Feb 23 2024 - 23:50:51 EST


On 10/10/2023 5:10 PM, Raghavendra K T wrote:
On 10/10/2023 2:53 PM, Ingo Molnar wrote:

* Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:

[...]
Both the number of PTE updates and hint faults are dramatically
increased. While this is superficially unfortunate, it represents
ranges that were simply skipped without the patch. As a result
of the scanning and hinting faults, many more pages were also
migrated but as the time to completion is reduced, the overhead
is offset by the gain.

Nice! I've applied your series to tip:sched/core with a few non-functional
edits to comment/changelog formatting/clarity.

Btw., was any previous analysis done on the size of the pids_active[] hash
and the hash collision rate?


Hello Ingo,

I did complete the hash collision experiment you asked for a while back, but did not report the results sooner (year-end time). Sorry for that.

Here is the summary:

Context:
========
Currently, in VMA scanning, we hash the PID value into a 6-bit value so that
we can use an 8-byte long variable to record which PIDs have accessed a VMA,
to optimize scanning (the HASH32 method suggested by PeterZ).

Function used: hash_32(PID, ilog2(BITS_PER_LONG))


The alternative was to directly use the last 6 bits of the PID (the PID_6BIT method).

The current experiment evaluates what the distribution and collision rate look like for each method.
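
To make the two reductions concrete, here is a small userspace sketch (this is
not the kernel code; GOLDEN_RATIO_32 and the shift mirror hash_32() from
include/linux/hash.h, and the PID values are purely illustrative):

/* Userspace sketch of the two 6-bit reductions being compared. */
#include <stdio.h>
#include <stdint.h>

#define GOLDEN_RATIO_32 0x61C88647u
#define HASH_BITS       6                 /* ilog2(BITS_PER_LONG) on 64-bit */

static uint32_t hash32_method(uint32_t pid)
{
        return (pid * GOLDEN_RATIO_32) >> (32 - HASH_BITS);   /* HASH32 */
}

static uint32_t pid_6bit_method(uint32_t pid)
{
        return pid & ((1u << HASH_BITS) - 1);   /* last 6 bits of the PID */
}

int main(void)
{
        uint64_t pids_active = 0;    /* the 8-byte per-VMA access bitmap */
        uint32_t pid;

        for (pid = 12340; pid <= 12348; pid++) {   /* e.g. 8 workers + main */
                printf("pid %u: PID_6BIT = %2u  HASH32 = %2u\n",
                       (unsigned)pid,
                       (unsigned)pid_6bit_method(pid),
                       (unsigned)hash32_method(pid));
                pids_active |= 1ULL << hash32_method(pid);
        }
        printf("pids_active bitmap: 0x%016llx\n",
               (unsigned long long)pids_active);
        return 0;
}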

Experiment:
===========
Main thread:
- Allocates a large memory buffer.
- Creates n threads that sweep the allocated memory for a fixed number of
  iterations (n = 8, 16, 32, 64).

(All of these threads are created without any delay in between.)

Note down the hash_32 values for the threads, extracted from the trace.

Note down the PIDs of the threads and extract their last 6 bits.

Generate a hash value-frequency list for each method (a simplified sketch of
the harness follows the notes below).

Notes:
1) The 8-thread experiment will have 8 thread hash values + 1 for the main
   thread, and so on.

2) With a large number of threads, sometimes not all threads get a chance to
   scan the VMA, so the total count may be lower.
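
For reference, a simplified sketch of the harness (the buffer size, iteration
count and the memset() sweep are illustrative only, not the exact parameters
used in the runs reported below):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define MEM_SIZE   (1UL << 30)      /* "large memory": 1 GiB, for example */
#define ITERATIONS 16               /* fixed number of sweeps per thread */

static char *buf;

static void *sweep(void *arg)
{
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++)
                memset(buf, i, MEM_SIZE);     /* touch every page repeatedly */
        return NULL;
}

int main(int argc, char **argv)
{
        int nthreads = (argc > 1) ? atoi(argv[1]) : 8;  /* n = 8, 16, 32, 64 */
        pthread_t *tids = calloc(nthreads, sizeof(*tids));

        buf = malloc(MEM_SIZE);               /* main thread allocates memory */

        /* All worker threads are created back to back, without any delay. */
        for (int i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, sweep, NULL);
        for (int i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);

        /* hash_32 values come from the NUMA scanning trace; the last 6 bits
         * of each worker's PID are noted separately to build the PID_6BIT
         * frequency list. */
        free(tids);
        free(buf);
        return 0;
}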

Observations:
===============
1) PID_6BIT generates hash values that are crowded together.

2) HASH32 generates very well distributed hash values (as expected).

3) There are no collisions when the total number of threads created is less
than 64, in both cases.

4) When the number of threads created is 64, we see more collisions with the
HASH32 method. A collision only means a task is treated as if it had accessed
a VMA it did not (a false positive), and false positives did not seem to be an
issue in the experiments so far. The number of collisions is not that high
either.

I think we got lucky in the PID_6BIT case.

Overall, hash_32 serves the intended purpose.

(Ingo, I have experimented with extending the total PID info stored from 64
to 128 on larger systems; I will post that separately with the patch.)

==================
iter0/8-thread
==================

PID_6BIT HASH32
(value-freq) (value-freq)

0 - 1 5 - 1
1 - 1 14 - 1
2 - 1 20 - 1
3 - 1 29 - 1
4 - 1 35 - 1
5 - 1 44 - 1
56 - 1 52 - 1
62 - 1 54 - 1
63 - 1 59 - 1

==================
iter0/16-thread
==================
0 - 1 3 - 1
1 - 1 9 - 1
2 - 1 12 - 1
3 - 1 14 - 1
4 - 1 18 - 1
5 - 1 24 - 1
6 - 1 27 - 1
7 - 1 33 - 1
8 - 1 37 - 1
9 - 1 39 - 1
10 - 1 42 - 1
11 - 1 48 - 1
59 - 1 52 - 1
60 - 1 54 - 1
61 - 1 58 - 1
62 - 1 61 - 1
63 - 1 63 - 1

==================
iter0/32-thread
==================
0 - 1 0 - 1
1 - 1 2 - 1
2 - 1 4 - 1
3 - 1 5 - 1
4 - 1 8 - 1
5 - 1 11 - 1
6 - 1 13 - 1
7 - 1 15 - 1
8 - 1 17 - 1
9 - 1 19 - 1
10 - 1 20 - 1
11 - 1 23 - 1
12 - 1 24 - 1
13 - 1 26 - 1
14 - 1 28 - 1
15 - 1 30 - 1
16 - 1 32 - 1
48 - 1 34 - 1
49 - 1 36 - 1
50 - 1 38 - 1
51 - 1 39 - 1
52 - 1 41 - 1
53 - 1 44 - 1
54 - 1 45 - 1
55 - 1 47 - 1
56 - 1 48 - 1
57 - 1 51 - 1
58 - 1 53 - 1
59 - 1 54 - 1
60 - 1 56 - 1
61 - 1 59 - 1
62 - 1 60 - 1
63 - 1 62 - 1

==================
iter0/64-thread
==================
0 - 1 0 - 1
1 - 1 1 - 1
2 - 1 2 - 1
3 - 1 4 - 1
4 - 1 6 - 1
5 - 1 7 - 1
6 - 1 8 - 1
7 - 1 9 - 1
8 - 1 10 - 1
9 - 1 12 - 1
10 - 1 13 - 1
11 - 1 15 - 1
12 - 1 16 - 1
15 - 1 17 - 2
16 - 1 19 - 1
18 - 1 20 - 1
20 - 1 21 - 1
22 - 1 22 - 1
23 - 1 23 - 1
24 - 1 25 - 2
25 - 2 27 - 1
26 - 1 29 - 1
27 - 1 30 - 1
28 - 1 31 - 1
29 - 1 32 - 1
30 - 1 33 - 1
31 - 1 34 - 1
32 - 1 35 - 1
33 - 1 36 - 1
34 - 1 37 - 1
35 - 1 40 - 2
36 - 1 41 - 1
37 - 1 42 - 1
38 - 1 43 - 1
39 - 1 44 - 1
40 - 1 45 - 1
41 - 1 46 - 1
42 - 1 47 - 1
43 - 1 48 - 1
44 - 1 49 - 1
45 - 1 50 - 1
48 - 1 53 - 2
49 - 1 55 - 1
50 - 1 56 - 2
51 - 1 57 - 1
52 - 1 58 - 1
53 - 1 59 - 1
54 - 1 61 - 2
55 - 1 62 - 1
56 - 1 63 - 1
57 - 1
60 - 1
61 - 1
62 - 1
63 - 1

==================
iter1/8-thread
==================
53 - 1 8 - 1
55 - 1 17 - 1
56 - 1 23 - 1
57 - 1 33 - 1
58 - 1 38 - 1
59 - 1 48 - 1
60 - 1 53 - 1
61 - 1 57 - 1
62 - 1 63 - 1

==================
iter1/16-thread
==================
4 - 1 0 - 1
5 - 1 6 - 1
7 - 1 9 - 1
8 - 1 15 - 1
9 - 1 25 - 1
10 - 1 30 - 1
11 - 1 36 - 1
12 - 1 40 - 1
13 - 1 43 - 1
14 - 1 45 - 1
15 - 1 49 - 1
16 - 1 55 - 1
18 - 1 58 - 1
20 - 1 61 - 1

==================
iter1/32-thread
==================
27 - 1 1 - 1
28 - 1 3 - 1
29 - 1 5 - 1
30 - 1 7 - 1
31 - 1 9 - 1
32 - 1 11 - 1
33 - 1 12 - 1
34 - 1 14 - 1
35 - 1 17 - 1
36 - 1 18 - 1
37 - 1 20 - 1
38 - 1 22 - 1
39 - 1 24 - 1
40 - 1 26 - 1
41 - 1 27 - 1
42 - 1 29 - 1
43 - 1 32 - 1
44 - 1 33 - 1
45 - 1 35 - 1
46 - 1 37 - 1
47 - 1 39 - 1
48 - 1 41 - 1
49 - 1 42 - 1
50 - 1 45 - 1
51 - 1 47 - 1
52 - 1 48 - 1
53 - 1 50 - 1
54 - 1 52 - 1
55 - 1 54 - 1
56 - 1 56 - 1
57 - 1 58 - 1
58 - 1 60 - 1
59 - 1 63 - 1

==================
iter1/64-thread
==================
0 - 1 0 - 1
1 - 1 1 - 1
2 - 1 2 - 1
3 - 1 3 - 1
4 - 1 4 - 2
5 - 1 6 - 1
6 - 1 7 - 2
7 - 1 8 - 1
10 - 1 9 - 1
12 - 1 10 - 1
14 - 1 12 - 1
15 - 1 13 - 1
16 - 2 14 - 1
18 - 1 15 - 1
19 - 1 16 - 1
20 - 1 17 - 1
21 - 1 19 - 1
22 - 1 20 - 1
23 - 1 22 - 1
24 - 1 23 - 1
25 - 1 24 - 1
26 - 1 25 - 1
27 - 1 26 - 1
28 - 1 27 - 1
31 - 1 30 - 1
33 - 1 31 - 1
34 - 1 32 - 2
35 - 1 34 - 1
36 - 1 35 - 1
37 - 1 36 - 1
38 - 1 37 - 1
39 - 1 40 - 1
40 - 1 41 - 1
41 - 1 42 - 1
42 - 1 44 - 1
43 - 1 45 - 1
44 - 1 46 - 1
45 - 1 47 - 1
46 - 1 48 - 1
48 - 1 49 - 1
49 - 1 50 - 1
50 - 1 51 - 1
51 - 1 52 - 1
52 - 1 55 - 1
54 - 1 57 - 1
55 - 1 58 - 1
56 - 1 59 - 1
57 - 1 60 - 1
58 - 1 62 - 1
59 - 1 63 - 1
60 - 1
62 - 1

Thanks and Regards
- Raghu