Re: [bug, 4.8] /proc/meminfo: counter values are very wrong

From: Mel Gorman
Date: Fri Aug 05 2016 - 07:03:45 EST


On Fri, Aug 05, 2016 at 09:11:10AM +1000, Dave Chinner wrote:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index fb975cec3518..baa97da3687d 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4064,7 +4064,7 @@ long si_mem_available(void)
> > int lru;
> >
> > for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
> > - pages[lru] = global_page_state(NR_LRU_BASE + lru);
> > + pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
> >
> > for_each_zone(zone)
> > wmark_low += zone->watermark[WMARK_LOW];
>
> OK, that makes the /proc accounting match the /sys per-node
> accounting, but the output still looks wrong. I remove files with
> cached pages from the filesystem (i.e. invalidate and free them),
> yet they are apparently still accounted as being on the
> active/inactive LRU.
>
> Reboot, then run dbench for a minute:
>
> $ sudo mkfs.xfs -f /dev/pmem1
> meta-data=/dev/pmem1 isize=512 agcount=4, agsize=524288 blks
> = sectsz=4096 attr=2, projid32bit=1
> = crc=1 finobt=1, sparse=0
> data = bsize=4096 blocks=2097152, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0 ftype=1
> log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=4096 sunit=1 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
> $ sudo mount /dev/pmem1 /mnt/scratch
> $ sudo dbench -t 60 -D /mnt/scratch/ 16
> dbench version 4.00 - Copyright Andrew Tridgell 1999-2004
>

Is there any chance this is related to pmem1?
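
For anyone following along, the hunk quoted above sits in si_mem_available(),
which feeds MemAvailable. Abridged, and with the node-based counters, the
estimate looks roughly like the sketch below; only the two loops are verbatim
from the patch, the rest (free pages, page cache and slab capping against the
low watermark) is paraphrased from memory of the 4.8-rc code and may not match
the tree exactly:

long si_mem_available(void)
{
	unsigned long pagecache, wmark_low = 0;
	unsigned long pages[NR_LRU_LISTS];
	struct zone *zone;
	long available;
	int lru;

	/* LRU sizes are now tracked per node, not per zone */
	for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
		pages[lru] = global_node_page_state(NR_LRU_BASE + lru);

	for_each_zone(zone)
		wmark_low += zone->watermark[WMARK_LOW];

	/* Free pages above the reserves... */
	available = global_page_state(NR_FREE_PAGES) - totalreserve_pages;

	/*
	 * ...plus the file LRU pages that could be dropped without
	 * swapping, keeping at least half the cache or the low watermark.
	 */
	pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
	pagecache -= min(pagecache / 2, wmark_low);
	available += pagecache;

	/* ...plus a similarly capped share of reclaimable slab. */
	available += global_page_state(NR_SLAB_RECLAIMABLE) -
		     min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);

	return available < 0 ? 0 : available;
}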

I tried reproducing this with the patch applied, and free -m over time
looks like this:

# while [ 1 ]; do free -m | grep Mem; sleep 5; done
         total used free shared buffers cached
Mem: 15878 259 15618 1 12 123
Mem: 15878 274 15603 1 16 131
Mem: 15878 612 15266 1 17 463
Mem: 15878 617 15261 1 18 470
Mem: 15878 613 15265 1 19 463
Mem: 15878 614 15264 1 19 464
Mem: 15878 647 15231 1 20 498
Mem: 15878 616 15262 1 21 465
Mem: 15878 642 15236 1 22 491
Mem: 15878 618 15260 1 23 465
Mem: 15878 619 15259 1 24 466
Mem: 15878 620 15258 1 25 464
Mem: 15878 620 15257 1 26 466
Mem: 15878 622 15256 1 27 464
Mem: 15878 622 15255 1 27 466
Mem: 15878 285 15592 1 28 132
Mem: 15878 285 15592 1 28 132

Used memory before and after the dbench run was roughly the same.
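
If it helps to compare runs, a throwaway helper along these lines (illustrative
only, not something from this thread; the file name and build command are made
up) can sample just the file-LRU and MemAvailable lines from /proc/meminfo, i.e.
the counters that should drop once the cached pages are invalidated:

/* meminfo-lru.c: print the file LRU and MemAvailable lines from /proc/meminfo.
 * Build with "gcc -o meminfo-lru meminfo-lru.c" and run it in a loop, e.g.
 * while true; do ./meminfo-lru; sleep 5; done
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Active(file):", 13) ||
		    !strncmp(line, "Inactive(file):", 15) ||
		    !strncmp(line, "MemAvailable:", 13))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}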

--
Mel Gorman
SUSE Labs