Re: [PATCH 0/3] timer: Improve itimers scalability

From: Hideaki Kimura
Date: Wed Aug 26 2015 - 19:55:16 EST




On 08/26/2015 04:13 PM, Frederic Weisbecker wrote:
> On Wed, Aug 26, 2015 at 03:53:26PM -0700, Hideaki Kimura wrote:
> > Sure, let me elaborate.

> > Executive summary:
> > Yes, enabling a process-wide timer on such a large machine is not wise,
> > but sometimes users/applications cannot avoid it.


> > The issue was actually observed not in the database itself but in a
> > common library it links to: gperftools.

> > The database itself is optimized for many cores/sockets, so it certainly
> > avoids process-wide timers and other unscalable constructs. It merely
> > links to libprofiler for an optional feature that profiles performance
> > bottlenecks when the user turns it on. We of course leave the feature
> > off except while we debug/tune the database.

> > However, libprofiler sets the timer even when the client program doesn't
> > invoke any of its functions: libprofiler does it when the shared library
> > is loaded. We asked the developer of libprofiler to change this
> > behavior, but it seems there is a reason to keep it:
> > https://code.google.com/p/gperftools/issues/detail?id=133
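
For illustration, the load-time pattern boils down to roughly this: an ELF
constructor in the shared library arms a process-wide ITIMER_PROF before the
application has run a single line of its own. A minimal hypothetical sketch,
not gperftools' actual code:

/*
 * Hypothetical sketch of the load-time pattern described above: a shared
 * library arming a process-wide profiling timer from an ELF constructor,
 * before the application has called any of its functions. This only
 * illustrates the mechanism; it is not gperftools' actual code.
 */
#include <signal.h>
#include <string.h>
#include <sys/time.h>

static void prof_tick(int sig)
{
	(void)sig;	/* a real profiler would take a sample here */
}

__attribute__((constructor))
static void install_profiling_timer(void)
{
	struct sigaction sa;
	struct itimerval it;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = prof_tick;
	sigaction(SIGPROF, &sa, NULL);

	/*
	 * ITIMER_PROF counts CPU time consumed by *all* threads of the
	 * process, so arming it drags every thread through the shared
	 * process-wide timer state.
	 */
	memset(&it, 0, sizeof(it));
	it.it_interval.tv_usec = 10000;	/* ~100 Hz */
	it.it_value.tv_usec = 10000;
	setitimer(ITIMER_PROF, &it, NULL);
}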

> > Based on this, I think there are two reasons why we should ameliorate
> > this issue in the kernel layer.


> > 1. In this particular case, it's hard to prevent or even detect the
> > issue in user space.

> > We (a team of low-level database and kernel experts) in fact spent a
> > huge amount of time just figuring out what the bottleneck was, because
> > nothing measurable happens in user space. I pulled out countless hairs.

> > Also, the user would have to unlink the library from the application to
> > prevent the itimer installation. Imagine a case where the software is
> > proprietary; it won't fly.


> > 2. This is just one example. There could be many other binaries/libraries
> > that do similar things somewhere in a complex software stack.

> > Today we haven't heard of many such cases, but people will start hitting
> > this issue once hundreds to thousands of cores become common.


> > After applying this patchset, we observed that the performance hit
> > almost completely went away, at least for 240 cores. So, it's quite
> > beneficial in the real world.

> I can easily imagine that a lot of code incidentally uses posix cpu timers
> when it's not strictly required. But it doesn't look right to fix the
> kernel for that, for this simple reason: posix cpu timers, even after your
> fix, should introduce noticeable overhead. All threads of a process with a
> timer enqueued accumulate the elapsed cputime in a shared atomic variable.
> Add to that the overhead of enqueuing the timer and firing it. There are a
> bunch of scalability issues there.

I totally agree that this is not a perfect solution. If there are 10x more cores and sockets, just the atomic fetch_add might be too expensive.
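
To make the contention concrete, here is a rough user-space analogue of what
Frederic describes: every thread doing a fetch_add on one shared accumulator,
so all cores bounce the same cache line. The names are illustrative only, not
the kernel's actual code:

/*
 * Illustrative sketch of the shared-accumulator pattern: every thread
 * does an atomic add into one shared variable, so all cores contend on
 * the same cache line and the per-tick cost grows with the core count.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS		8
#define TICKS_PER_THREAD	1000000UL

static atomic_ulong shared_cputime;	/* the contended accumulator */

static void *worker(void *arg)
{
	unsigned long i;

	(void)arg;
	for (i = 0; i < TICKS_PER_THREAD; i++)
		atomic_fetch_add(&shared_cputime, 1);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	printf("accumulated ticks: %lu\n", atomic_load(&shared_cputime));
	return 0;
}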

However, realistically it's the best thing we can do without introducing drawbacks of its own. We can't magically force all library developers to always write the most scalable code.

My point is: this is a safety net, and a very effective one.

--
Hideaki Kimura