Re: [RFC 0/6] the big khugepaged redesign

From: Vlastimil Babka
Date: Thu Mar 05 2015 - 11:30:26 EST


On 02/24/2015 11:32 AM, Vlastimil Babka wrote:
> On 02/23/2015 11:56 PM, Andrew Morton wrote:
>> On Mon, 23 Feb 2015 14:46:43 -0800 Davidlohr Bueso <dave@xxxxxxxxxxxx> wrote:
>>
>>> On Mon, 2015-02-23 at 13:58 +0100, Vlastimil Babka wrote:
>>>> Recently, there was concern expressed (e.g. [1]) whether the quite aggressive
>>>> THP allocation attempts on page faults are a good performance trade-off.
>>>>
>>>> - THP allocations add to page fault latency, as high-order allocations are
>>>> notoriously expensive. Page allocation slowpath now does extra checks for
>>>> GFP_TRANSHUGE && !PF_KTHREAD to avoid the more expensive synchronous
>>>> compaction for user page faults. But even async compaction can be expensive.
>>>> - During the first page fault in a 2MB range we cannot predict how much of the
>>>> range will actually be accessed - we can theoretically waste as many as 511
>>>> pages' worth of memory [2]. Or, the pages in the range might be accessed from CPUs
>>>> from different NUMA nodes and while base pages could be all local, THP could
>>>> be remote to all but one CPU. The cost of remote accesses due to this false
>>>> sharing would be higher than any savings on the TLB.
>>>> - The interaction with memcg is also problematic [1].
>>>>
>>>> Now I don't have any hard data to show how big these problems are, and I
>>>> expect we will discuss this on LSF/MM (and hope somebody has such data [3]).
>>>> But it's certain that e.g. SAP recommends disabling THPs [4] for their apps
>>>> for performance reasons.
>>>
>>> There are plenty of examples of this, ie for Oracle:
>>>
>>> https://blogs.oracle.com/linux/entry/performance_issues_with_transparent_huge
>>
>> hm, five months ago and I don't recall seeing any followup to this.
>
> Actually it's a year plus five months, but nevertheless...
>
>> Does anyone know what's happening?

So I think that post was actually about THP support enabled in .config slowing
down hugetlbfs; I found a followup post here:
https://blogs.oracle.com/linuxkernel/entry/performance_impact_of_transparent_huge
and that issue was after all solved in 3.12. Sasha also mentioned that the split
PTL patchset helped as well, that the degradation in IOPS with THP enabled is now
limited to 5%, and that the refcounting redesign could possibly help further.

That however means the workload is based on hugetlbfs and shouldn't trigger the
THP page fault activity that this patchset targets. Some more googling made me
recall that at last year's LSF/MM, the postgresql people mentioned THP issues and
pointed at compaction; see http://lwn.net/Articles/591723/ That's exactly where
this patchset should help, but I obviously won't be able to measure this before
LSF/MM...

I'm CCing the psql guys from last year's LSF/MM - do you have any insight into
psql performance with THPs enabled/disabled on recent kernels, where e.g.
compaction is no longer synchronous for THP page faults?
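
(As an aside, for anyone experimenting with the enabled/disabled comparison:
rather than toggling THP globally, an application can opt individual mappings
in with madvise(MADV_HUGEPAGE). A minimal, illustrative sketch - not from this
thread, using Python 3.8+'s mmap.madvise wrapper instead of the raw syscall,
and advise_thp is just a made-up helper name:

```python
# Hypothetical sketch: opt an anonymous 2MB mapping into THP via
# madvise(MADV_HUGEPAGE), instead of relying on the fault-time policy.
# Linux-only; mmap.madvise() requires Python 3.8+.
import mmap

def advise_thp(buf):
    """Ask the kernel to back this mapping with THPs.

    Returns True on success, False if THP is unavailable (e.g. not
    compiled in, or madvise constants missing on this platform)."""
    try:
        buf.madvise(mmap.MADV_HUGEPAGE)
        return True
    except (AttributeError, OSError):
        return False

if __name__ == "__main__":
    region = mmap.mmap(-1, 2 * 1024 * 1024)  # 2MB anonymous mapping
    print("THP advised:", advise_thp(region))
    region.close()
```

With /sys/kernel/mm/transparent_hugepage/enabled set to "madvise", only regions
advised this way are eligible for THP at fault time, which sidesteps the global
on/off tradeoff discussed above.)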

Thanks,
Vlastimil
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/