Re: [PATCH 00/36] cputime: Convert core use of cputime_t to nsecs
From: Martin Schwidefsky
Date: Fri Nov 18 2016 - 07:09:12 EST
On Thu, 17 Nov 2016 19:08:07 +0100
Frederic Weisbecker <fweisbec@xxxxxxxxx> wrote:
> I'm sorry for the patchbomb, especially as I usually complain about
> these myself but I don't see any way to split this patchset into
> standalone pieces, none of which would make any sense... All I can do
> is to isolate about 3 cleanup patches.
On first glance the patches look ok-ish, but I am not happy about the
direction this takes.
I can understand the wish to consolidate the common code to a single
format which is nano-seconds. It will have repercussions though.
First the obvious problem, it does not compile for s390:
arch/s390/kernel/vtime.c: In function 'do_account_vtime':
arch/s390/kernel/vtime.c:140:25: error: implicit declaration of function
'cputime_to_nsecs' [-Werror=implicit-function-declaration]
account_user_time(tsk, cputime_to_nsecs(user));
^~~~~~~~~~~~~~~~
arch/s390/kernel/idle.c: In function 'enabled_wait':
arch/s390/kernel/idle.c:46:20: error: implicit declaration of function
'cputime_to_nsecs' [-Werror=implicit-function-declaration]
account_idle_time(cputime_to_nsecs(idle_time));
^~~~~~~~~~~~~~~~
arch/s390/kernel/idle.c: In function 'arch_cpu_idle_time':
arch/s390/kernel/idle.c:100:9: error: implicit declaration of function
'cputime_to_nsec' [-Werror=implicit-function-declaration]
return cputime_to_nsec(idle_enter ? ((idle_exit ?: now) - idle_enter) : 0);
^~~~~~~~~~~~~~~
The error at idle.c:100 is a typo: cputime_to_nsec vs. cputime_to_nsecs.
The other two could probably be solved with an additional include, but the
default cputime_to_nsecs in include/linux/cputime.h is this:
#ifndef cputime_to_nsecs
# define cputime_to_nsecs(__ct) \
(cputime_to_usecs(__ct) * NSEC_PER_USEC)
#endif
which downgrades the accuracy for s390 from better than nano-seconds
to micro-seconds. Not good. For the s390 cputime format you would have
to do
static inline unsigned long long cputime_to_nsecs(const cputime_t cputime)
{
return ((__force unsigned long long) cputime * 1000) >> 12;
}
But this *example* function has an overflow problem.
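One hedged way around that overflow, assuming the s390 TOD format counts
1/4096th of a microsecond per unit, is to split the value before scaling.
This is an illustrative sketch only, not the actual s390 fix:

```c
/* Illustrative sketch: split the TOD-format cputime into a coarse
 * part (multiples of 4096 units, i.e. whole microseconds) and a fine
 * remainder, so the multiply by 1000 cannot overflow long before the
 * value itself nears the 64-bit limit. */
static inline unsigned long long tod_to_nsecs(unsigned long long ct)
{
	/* nsecs = ct * 1000 / 4096, without the early overflow */
	return (ct >> 12) * 1000 + (((ct & 0xfff) * 1000) >> 12);
}
```

For values below roughly 2^54 this yields exactly the same result as the
direct (ct * 1000) >> 12 expression, which already overflows once ct
exceeds about 2^64 / 1000.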
> So currently, cputime_t serves the purpose, for s390 and
> powerpc (on CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y), to avoid converting
> arch clock counters to nanosecs or jiffies while accounting cputime.
The cputime_t has several purposes:
1) Allow for different units in the calculations for virtual cpu time.
There are currently three models: jiffies, nano-seconds and the native
TOD clock format for s390 which is a bit better than nano-seconds.
2) Act as a marker in the common code where a virtual cpu time is used.
This is more important than you might think, unfortunately it is very
easy to confuse a wall-clock delta with cpu time.
3) Avoid expensive operations on the fast path to convert the native cpu
time to something else. Instead move the expensive calculation to the
read-out code, e.g. fs/proc.
Your patches break all three of these purposes. My main gripe is with 3).
> But this comes at the cost of a lot of complexity and uglification
> in the core code to deal with such an opaque type that relies on lots of
> mutators and accessors in order to deal with a random granularity time
> unit that also involve lots of workarounds and likely some performance
> penalties.
Having an opaque type with a set of helper functions is the whole point, no?
And I would not call the generic implementations for jiffies or nano-seconds
complex, these are easy enough to understand. And what are the performance
penalties you are talking about?
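To make the point concrete, the generic jiffies-based helpers amount to
something like the following. This is a simplified sketch, not verbatim
kernel code, with HZ assumed to be 100:

```c
/* Simplified sketch (not verbatim kernel code) of the opaque-type
 * pattern: common code only ever touches cputime_t through helpers,
 * so the unit behind it -- here jiffies at an assumed HZ of 100 --
 * can differ per architecture without the callers noticing. */
#define HZ 100

typedef unsigned long cputime_t;

static inline unsigned long long cputime_to_usecs(cputime_t ct)
{
	return (unsigned long long)ct * (1000000 / HZ); /* 1 jiffy = 10000 us */
}

static inline cputime_t usecs_to_cputime(unsigned long long usecs)
{
	return usecs / (1000000 / HZ);
}

/* The generic fallback shown above then builds on this: */
#define cputime_to_nsecs(ct) (cputime_to_usecs(ct) * 1000ULL) /* NSEC_PER_USEC */
```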
> So this patchset proposes to convert most of the cputime_t uses to nsecs.
> In the end it's only used by s390 and powerpc. This all comes at the
> expense of those two archs which then need to perform a cputime_to_nsec()
> conversion everytime they update the cputime to the core. Now I expect
> we can leverage this performance loss with flushing the cputime only on
> ticks so that we accumulate time as cputime_t in between and make the
> conversions more rare.
It is not just one cputime_to_nsec that we would have to add but several.
Three in do_account_vtime and one in vtime_account_irq_enter.
The do_account_vtime function is called once per jiffy and once per task
switch. HZ is usually set to 100 for s390, the conversion once per jiffy
would not be so bad, but the call on the scheduling path *will* hurt.
What is even worse is the vtime_account_irq_enter path, which is called
several times for each *interrupt*: at least two times for an interrupt
without additional processing and four times if a softirq is triggered.
Now it has been proposed to implement lazy accounting: accumulate deltas
and do the expensive conversions only infrequently. This is pretty
straightforward for account_user_time, but doing the same for the
account_system_time function is more complicated. It has to differentiate between
guest/hardirq/softirq and pure system time. We would need to keep sums for
each bucket and provide a separate function to add to each bucket. Like
account_guest_time(), account_hardirq_time(), account_softirq_time() and
account_system_time(). Then it is up to the arch code to sort out the details
and call the accounting code once per jiffy for each of the buckets.
We still have to do the whole thing on each task switch though.
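Roughly, the per-bucket scheme could look like the sketch below. The
names (vtime_acc, vtime_add, vtime_flush, the bucket enum) are
illustrative assumptions, not an existing kernel interface:

```c
/* Illustrative sketch of lazy accounting: the fast path only adds raw
 * cputime deltas to per-bucket sums; the expensive conversion to
 * nanoseconds happens once per jiffy (and on task switch) in the
 * flush. Names and layout are assumptions, not kernel API. */
enum vtime_bucket { VT_GUEST, VT_HARDIRQ, VT_SOFTIRQ, VT_SYSTEM, VT_NR };

struct vtime_acc {
	unsigned long long sum[VT_NR];	/* raw cputime_t units */
	unsigned long long nsecs[VT_NR];	/* accounted nanoseconds */
};

/* Fast path: called from the irq/softirq entry hooks, no conversion. */
static void vtime_add(struct vtime_acc *acc, enum vtime_bucket b,
		      unsigned long long delta)
{
	acc->sum[b] += delta;
}

/* Slow path: one conversion per non-empty bucket per jiffy. The
 * *1000 >> 12 scaling assumes the s390 TOD-like format discussed
 * earlier. */
static void vtime_flush(struct vtime_acc *acc)
{
	int b;

	for (b = 0; b < VT_NR; b++) {
		if (!acc->sum[b])
			continue;
		acc->nsecs[b] += (acc->sum[b] * 1000) >> 12;
		acc->sum[b] = 0;
	}
}
```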
But I am still not happy about the approach. What is the compelling reason
for this change except for the "but it looks ugly"?
--
blue skies,
Martin.
"Reality continues to ruin my life." - Calvin.