Re: oom-killer

From: Michal Hocko
Date: Tue Aug 06 2019 - 11:07:39 EST


On Tue 06-08-19 20:24:03, Pankaj Suryawanshi wrote:
> On Tue, 6 Aug, 2019, 1:46 AM Michal Hocko, <mhocko@xxxxxxxxxx> wrote:
> >
> > On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> > > On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > > >
> > > > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > > > >> [ 727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P O 4.14.65 #606
> > > > > > [...]
> > > > > >> [ 728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > > > >> [ 728.037569] r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c
> > > > > >> [ 728.045392] r4:d5737080
> > > > > >> [ 728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>] (__alloc_pages_nodemask+0x1178/0x124c)
> > > > > >> [ 728.056798] r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > > > >> [ 728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > > > >> [ 728.071764] r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > > > >> [ 728.079587] r4:d1063c00
> > > > > >> [ 728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > > > >> [ 728.090034] r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > > > >> [ 728.097857] r4:00808111
> > > > > >
> > > > > > The call trace tells that this is a fork (of a usermodehelper, but that
> > > > > > is not all that important).
> > > > > > [...]
> > > > > >> [ 728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > > > >> [ 728.287402] lowmem_reserve[]: 0 0 579 579
> > > > > >
> > > > > > So this is the only usable zone and you are close to the min watermark,
> > > > > > which means that your system is under serious memory pressure but not
> > > > > > yet OOM for order-0 requests. The situation is not great, though.
> > > > >
> > > > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > > > /proc/zoneinfo say?
> > >
> > >
> > > What is lowmem_reserve[]: 0 0 579 579 ?
> >
> > This controls how much memory from a lower zone an allocation
> > request targeting a higher zone may consume. E.g. __GFP_HIGHMEM is
> > allowed to use both lowmem and highmem zones. It is preferable to use
> > the highmem zone because other requests are not allowed to use it.
> >
> > Please see __zone_watermark_ok for more details.
> >
> >
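To illustrate the reserve, here is a toy userspace model of that check
(loosely paraphrasing __zone_watermark_ok(); the real function also
discounts reserved and unusable pages, so treat this as a sketch only).
The numbers mirror the DMA zone report above:

#include <stdbool.h>
#include <stdio.h>

/* Toy model only - zone indexes as in the report's 4-entry array. */
enum { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM, ZONE_MOVABLE, NR_ZONES };

struct zone {
	long free_pages;		/* NR_FREE_PAGES, in pages */
	long min_wmark;			/* min watermark, in pages */
	long lowmem_reserve[NR_ZONES];	/* per-classzone reserve */
};

/* A request whose preferred (classzone) index is classzone_idx may only
 * dip into this lower zone if it leaves lowmem_reserve[classzone_idx]
 * pages free on top of the watermark. */
static bool zone_watermark_ok(struct zone *z, int classzone_idx)
{
	return z->free_pages > z->min_wmark + z->lowmem_reserve[classzone_idx];
}

int main(void)
{
	/* DMA zone above: free 17960kB, min 16384kB, 4kB pages */
	struct zone dma = {
		.free_pages = 17960 / 4,
		.min_wmark = 16384 / 4,
		.lowmem_reserve = { 0, 0, 579, 579 },
	};

	printf("GFP_KERNEL-like (classzone NORMAL):   %s\n",
	       zone_watermark_ok(&dma, ZONE_NORMAL) ? "ok" : "below mark");
	printf("GFP_HIGHMEM-like (classzone HIGHMEM): %s\n",
	       zone_watermark_ok(&dma, ZONE_HIGHMEM) ? "ok" : "below mark");
	return 0;
}

This prints "ok" for the lowmem request (reserve 0) but "below mark"
for a highmem-capable one, because 4490 <= 4096 + 579.
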
> > > > This is essentially a GFP_KERNEL request, so there shouldn't be any
> > > > lowmem reserve applied here, no?
> > >
> > >
> > > Why is only the low 1G accessible by the kernel on a 32-bit system?
>
>
> Is the 1G virtual or physical memory (I have 2GB of RAM)?

virtual

> > https://www.kernel.org/doc/gorman/, https://lwn.net/Articles/75174/
> > and many more articles. In very short, the 32b virtual address space
> > is quite small and it has to cover both the user space and the
> > kernel. That is why we split it into 3G reserved for userspace and 1G
> > for the kernel. The kernel can only access its 1G portion directly;
> > everything else has to be mapped explicitly (e.g. while data is
> > copied).
>
> Thanks Michal.
>
>
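To make the "mapped explicitly" part concrete: a highmem page has no
permanent kernel mapping, so the kernel has to create a temporary one
before touching its contents. A minimal kernel-side sketch
(zero_any_page is a hypothetical helper, not code from the tree):

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical helper: zero a page that may live in highmem. */
static void zero_any_page(struct page *page)
{
	/*
	 * Lowmem pages are always mapped; a highmem page is only
	 * reachable through a temporary kernel mapping.
	 */
	void *vaddr = kmap(page);	/* map into the kernel's 1G window */

	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);			/* tear the mapping down again */
}
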
> >
> > > My system configuration is :-
> > > 3G/1G - vmsplit
> > > vmalloc = 480M (I think the vmalloc size will set your highmem?)
> >
> > No, vmalloc is part of the 1GB kernel address space.
>
> I read in one article that vmalloc end is fixed; if you increase the
> vmalloc size, does it decrease highmem?
> Total = lowmem + (vmalloc + highmem)

As the kernel uses the vmalloc area _directly_, it has to be part of
the kernel address space - thus reducing the lowmem.
--
Michal Hocko
SUSE Labs