Re: [PATCH] oom: handle overflow in mem_cgroup_out_of_memory()
From: Johannes Weiner
Date: Wed Jan 26 2011 - 13:30:42 EST
On Wed, Jan 26, 2011 at 09:33:09AM -0800, Greg Thelen wrote:
> Johannes Weiner <hannes@xxxxxxxxxxx> writes:
> > On Wed, Jan 26, 2011 at 12:29:15AM -0800, Greg Thelen wrote:
> >> mem_cgroup_get_limit() returns a byte limit as an unsigned 64 bit value,
> >> which is converted to a page count by mem_cgroup_out_of_memory(). Prior
> >> to this patch the conversion could overflow on 32 bit platforms
> >> yielding a limit of zero.
> > Balbir: It can truncate, because the conversion shrinks the required
> > bits of this 64-bit number by only PAGE_SHIFT (12). Trying to store
> > the resulting up to 52 significant bits in a 32-bit integer will cut
> > up to 20 significant bits off.
> >> Signed-off-by: Greg Thelen <gthelen@xxxxxxxxxx>
> >> ---
> >> mm/oom_kill.c | 2 +-
> >> 1 files changed, 1 insertions(+), 1 deletions(-)
> >> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >> index 7dcca55..3fcac51 100644
> >> --- a/mm/oom_kill.c
> >> +++ b/mm/oom_kill.c
> >> @@ -538,7 +538,7 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
> >> struct task_struct *p;
> >> check_panic_on_oom(CONSTRAINT_MEMCG, gfp_mask, 0, NULL);
> >> - limit = mem_cgroup_get_limit(mem) >> PAGE_SHIFT;
> >> + limit = min(mem_cgroup_get_limit(mem) >> PAGE_SHIFT, (u64)ULONG_MAX);
> > I would much prefer using min_t(u64, ...). To make it really, really
> > explicit that this is 64-bit arithmetic. But that is just me, no
> > correctness issue.
> > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> I agree that min_t() is clearer. Does the following look better?
Sweet, thank you Greg!
> Author: Greg Thelen <gthelen@xxxxxxxxxx>
> Date: Wed Jan 26 00:05:59 2011 -0800
> oom: handle truncation in mem_cgroup_out_of_memory()
> mem_cgroup_get_limit() returns a byte limit as an unsigned 64 bit value.
> mem_cgroup_out_of_memory() converts this byte limit to an unsigned long
> page count. Prior to this patch, the 32 bit version of
> mem_cgroup_out_of_memory() would silently truncate the most significant
> 20 bits from byte limit when constructing the limit as a page count.
> For byte limits with the lowest 44 bits set to zero, this truncation
> would compute a page limit of zero.
> This patch checks for such large byte limits that cannot be converted to
> page counts without losing information. In such situations, where a 32
> bit page counter is too small to represent the corresponding byte count,
> select a maximal page count.
> Signed-off-by: Greg Thelen <gthelen@xxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
That being said, does this have any practical impact at all? I mean,
this code runs when the cgroup limit is breached. But if the number
of allowed pages (not bytes!) cannot fit into 32 bits, it means you
have a group of processes using more than 16T. On a 32-bit machine.