On Tue 04-10-16 15:24:53, Vlastimil Babka wrote:
> On 09/30/2016 11:41 PM, Michal Hocko wrote:
[...]
> > Fix this by always printing the nodemask. It is either the mempolicy
> > mask (and non-null) or the one defined by the cpusets.
>
> I wonder if it's helpful to print the cpuset one when that's printed
> separately, and seeing both pieces of information (nodemask and cpuset)
> unmodified might tell us more. Is it to make it easier to deal with NULL
> nodemask? Or to make sure the info gets through pr_warn() and not pr_info()?
I am not sure I understand the question. I wanted to print the nodemask
separately, in the same line with all the other allocation request
parameters like the order and the gfp mask, because that is what the page
allocator got (via policy_nodemask). cpusets build on top - i.e. they apply
__cpuset_zone_allowed on top of the nodemask. So IMHO it makes sense to
look at the cpuset as an allocation domain and at the mempolicy as a
restriction within this domain.
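To illustrate, here is a minimal sketch of that layering, loosely
modeled on the zone iteration in get_page_from_freelist() (variable
declarations and the rest of the fast path omitted; not the verbatim
kernel code):

	/* the nodemask comes from the task's mempolicy and may be NULL */
	nodemask_t *nodemask = policy_nodemask(gfp_mask, pol);

	for_each_zone_zonelist_nodemask(zone, z, zonelist,
					high_zoneidx, nodemask) {
		/* cpusets restrict the domain on top of the nodemask */
		if (cpusets_enabled() &&
		    !__cpuset_zone_allowed(zone, gfp_mask))
			continue;
		/* ... try to allocate from this zone ... */
	}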
Does that answer your question?
> > The new output for the above oom report would be
> >
> > PoolThread invoked oom-killer: gfp_mask=0x280da(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_adj=0, oom_score_adj=0
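For reference, such a line can be produced with the kernel's %pGg
(symbolic gfp flags) and %*pbl (bitmap list) printk extensions. A
sketch along these lines, where oc is the oom_control passed to the
oom report path (illustrative, not necessarily the exact patch hunk):

	/* fall back to the cpuset mask when there is no mempolicy mask */
	nodemask_t *nm = oc->nodemask ? oc->nodemask
				      : &cpuset_current_mems_allowed;

	pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), nodemask=%*pbl, order=%d, oom_score_adj=%hd\n",
		current->comm, oc->gfp_mask, &oc->gfp_mask,
		nodemask_pr_args(nm), oc->order,
		current->signal->oom_score_adj);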
> > This patch doesn't touch show_mem and the node filtering based on the
> > cpuset node mask because mempolicy is always a subset of cpusets and
> > seeing the full cpuset oom context might be helpful for tuning more
> > specific mempolicies inside cpusets (e.g. when they turn out to be too
> > restrictive). To prevent ugly ifdefs the mask is printed even
> > for !NUMA configurations, but this should be OK (a single node will be
> > printed).
> >
> > Reported-by: Sellami Abdelkader <abdelkader.sellami@xxxxxxx>
> > Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
>
> Other than that,
> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Thanks!