Re: [PATCH] [0/6] HUGETLB memory commitment

From: Keith Owens
Date: Thu Mar 25 2004 - 20:11:02 EST


On Thu, 25 Mar 2004 23:59:21 +0000,
Andy Whitcroft <apw@xxxxxxxxxxxx> wrote:
>--On 25 March 2004 15:51 -0800 Andrew Morton <akpm@xxxxxxxx> wrote:
>
>> I think it's simply:
>>
>> - Make normal overcommit logic skip hugepages completely
>>
>> - Teach the overcommit_memory=2 logic that hugepages are basically
>> "pinned", so subtract them from the arithmetic.
>>
>> And that's it. The hugepages are semantically quite different from normal
>> memory (prefaulted, preallocated, unswappable) and we've deliberately
>> avoided pretending otherwise.
>
>True currently. Though the thread that prompted this was in response to
>the time taken for this prefault and for the wish to fault them.
>
>I'll have a poke about at it and see how small I can make it.

FWIW, lkcd (crash dump) treats hugetlb pages as normal kernel pages and
dumps them, which is pointless and wastes a lot of time. To avoid
dumping these pages in lkcd, I had to add a PG_hugetlb page flag. lkcd
works at the struct page level, not at the mm or vma level, so VM_hugetlb
was not available to it. In set_hugetlb_mem_size() I set the new flag
alongside PG_reserved:

	for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
		/* mark every base page backing the huge page */
		SetPageReserved(map);
		SetPageHugetlb(map);
		map++;
	}
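
For reference, the flag itself is just the usual include/linux/page-flags.h
pattern; the bit number below is only an example and has to be whatever bit
is still free in the kernel being patched:

	/* include/linux/page-flags.h -- bit number illustrative only */
	#define PG_hugetlb		19	/* page backs a hugetlb mapping */

	#define PageHugetlb(page)	test_bit(PG_hugetlb, &(page)->flags)
	#define SetPageHugetlb(page)	set_bit(PG_hugetlb, &(page)->flags)
	#define ClearPageHugetlb(page)	clear_bit(PG_hugetlb, &(page)->flags)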

In dump_base.c, I changed kernel_page(), referenced_page() and
unreferenced_page() to test for PageHugetlb() before PageReserved().
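
The shape of the change in each of those predicates is roughly the
following (illustrative only, not the exact lkcd code, and the return
convention is assumed); the point is that PG_hugetlb is tested before
PG_reserved, otherwise hugetlb pages look like reserved kernel pages
and get dumped:

	/* hugetlb pages are marked Reserved, so exclude them first */
	if (PageHugetlb(page))
		return 0;		/* prefaulted user data, skip */
	if (PageReserved(page))
		return 1;		/* reserved kernel page, dump it */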

Since you are looking at identifying hugetlb pages, could any other
code benefit from a PG_hugetlb flag?
