Re: [PATCH] hugetlbpage.txt: correct overcommit caveat [Was Re: [BUG]: 2.6.25-rc7 memory leak with hugepages.]

From: Nishanth Aravamudan
Date: Fri Apr 04 2008 - 14:04:58 EST


On 04.04.2008 [10:31:25 -0700], Nishanth Aravamudan wrote:
> On 04.04.2008 [18:16:38 +0100], Andy Whitcroft wrote:
> > On Thu, Apr 03, 2008 at 08:40:41PM -0700, Nish Aravamudan wrote:
> >
> > > Hrm, fio is using SHM_HUGETLB. Does ipcs indicate maybe fio is not
> > > cleaning up the shared memory segment? FWIW, it seems like each run is
> > > using 400 hugepages in the SHM_HUGETLB segment, and then when you try
> > > to force the pool to shrink, it converts those 800 (since you ran fio
> > > twice) hugepages from static pool pages to dynamic (or overcommit)
> > > pages.
> > >
> > > On another note, it is odd that we're using the dynamic pool, when it
> > > is initially disabled...I'll have to think about that.
> > >
> > > I'll try and look at this later this evening or early tomorrow.
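
(As an aside, a stale segment like that is easy to spot and clean up by
hand. The commands below are an untested sketch; the shmid is whatever
ipcs actually reports, not a real value:

    # list System V shared memory segments; a leaked SHM_HUGETLB
    # segment would still appear here with nattch 0 after fio exits
    $ ipcs -m

    # remove the stale segment by its id to release the hugepages
    $ ipcrm -m <shmid>
)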
> >
> > Yes, that is an expected result. We have no way to force the pool to
> > shrink when pages are in-use. When a request is made to reduce the pool
> > below the number of in-use pages, we move the pages to surplus. While
> > this does temporarily violate the overcommit cap, it does provide the
> > most utility, as those pages will be returned to the buddy allocator at
> > the earliest opportunity.
> >
> > I suspect the documentation could do with a little clarification.
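
(To illustrate with a made-up transcript, assuming a run like Gurudas's
with 400 in-use hugepages; the numbers here are hypothetical:

    # (as root) try to shrink the static pool to zero
    $ echo 0 > /proc/sys/vm/nr_hugepages
    $ grep HugePages /proc/meminfo
    HugePages_Total:     400
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:      400

The pool cannot actually shrink while the pages are in use, so all 400
pages are reaccounted as surplus, even though nr_overcommit_hugepages
is 0, and each one is freed back to the buddy allocator as soon as it
is released.)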
>
>
> As shown by Gurudas Pai recently, we can put hugepages into the surplus
> state (by echo 0 > /proc/sys/vm/nr_hugepages), even when
> /proc/sys/vm/nr_overcommit_hugepages is 0. This is actually correct, to
> allow the original goal (shrink the static pool to 0) to succeed when it
> is possible for it two (we are converting hugepages to surplus because

s/when it is possible for it two//

Will fix in my copy.

Thanks,
Nish

--
Nishanth Aravamudan <nacc@xxxxxxxxxx>
IBM Linux Technology Center