Re: [Xen-devel] [PATCH v3 1/1] xen/time: do not decrease steal time after live migration on xen

From: Juergen Gross
Date: Fri Oct 27 2017 - 03:31:18 EST


On 27/10/17 09:16, Dongli Zhang wrote:
> Hi Boris,
>
> On 10/25/2017 11:12 PM, Boris Ostrovsky wrote:
>> On 10/25/2017 02:45 AM, Dongli Zhang wrote:
>>> After guest live migration on xen, the steal time in /proc/stat
>>> (cpustat[CPUTIME_STEAL]) might decrease because the steal value returned by
>>> xen_steal_clock() might be less than this_rq()->prev_steal_time, which is
>>> derived from the previous return value of xen_steal_clock().
>>>
>>> For instance, the steal time of each vcpu is 335 before live migration.
>>>
>>> cpu 198 0 368 200064 1962 0 0 1340 0 0
>>> cpu0 38 0 81 50063 492 0 0 335 0 0
>>> cpu1 65 0 97 49763 634 0 0 335 0 0
>>> cpu2 38 0 81 50098 462 0 0 335 0 0
>>> cpu3 56 0 107 50138 374 0 0 335 0 0
>>>
>>> After live migration, the steal time has dropped to 312.
>>>
>>> cpu 200 0 370 200330 1971 0 0 1248 0 0
>>> cpu0 38 0 82 50123 500 0 0 312 0 0
>>> cpu1 65 0 97 49832 634 0 0 312 0 0
>>> cpu2 39 0 82 50167 462 0 0 312 0 0
>>> cpu3 56 0 107 50207 374 0 0 312 0 0
>>>
>>> Since the runstate times are cumulative and are cleared by the xen hypervisor
>>> during live migration, the idea of this patch is to accumulate the runstate
>>> times into global percpu variables before the migration suspend. Once the
>>> guest VM is resumed, xen_get_runstate_snapshot_cpu() always returns the sum
>>> of the new runstate times and the previously accumulated times stored in the
>>> global percpu variables.
>>>
>>> A similar but more severe issue affects Linux 4.8-4.10, as discussed by
>>> Michael Las at
>>> https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest:
>>> there the steal time overflows and leads to 100% st usage in the top command.
>>> A backport of this patch would fix that issue as well.
>>>
>>> References: https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest
>>> Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
>>>
>>> ---
>>> Changed since v1:
>>> * relocate modification to xen_get_runstate_snapshot_cpu
>>>
>>> Changed since v2:
>>> * accumulate runstate times before live migration
>>>
>>> ---
>>> drivers/xen/manage.c | 1 +
>>> drivers/xen/time.c | 19 +++++++++++++++++++
>>> include/xen/xen-ops.h | 1 +
>>> 3 files changed, 21 insertions(+)
>>>
>>> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
>>> index c425d03..9aa2955 100644
>>> --- a/drivers/xen/manage.c
>>> +++ b/drivers/xen/manage.c
>>> @@ -72,6 +72,7 @@ static int xen_suspend(void *data)
>>> }
>>>
>>> gnttab_suspend();
>>> + xen_accumulate_runstate_time();
>>> xen_arch_pre_suspend();
>>>
>>> /*
>>> diff --git a/drivers/xen/time.c b/drivers/xen/time.c
>>> index ac5f23f..6df3f82 100644
>>> --- a/drivers/xen/time.c
>>> +++ b/drivers/xen/time.c
>>> @@ -19,6 +19,8 @@
>>> /* runstate info updated by Xen */
>>> static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
>>>
>>> +static DEFINE_PER_CPU(u64[4], old_runstate_time);
>>> +
>>> /* return an consistent snapshot of 64-bit time/counter value */
>>> static u64 get64(const u64 *p)
>>> {
>>> @@ -52,6 +54,7 @@ static void xen_get_runstate_snapshot_cpu(struct vcpu_runstate_info *res,
>>> {
>>> u64 state_time;
>>> struct vcpu_runstate_info *state;
>>> + int i;
>>>
>>> BUG_ON(preemptible());
>>>
>>> @@ -64,6 +67,22 @@ static void xen_get_runstate_snapshot_cpu(struct vcpu_runstate_info *res,
>>> rmb(); /* Hypervisor might update data. */
>>> } while (get64(&state->state_entry_time) != state_time ||
>>> (state_time & XEN_RUNSTATE_UPDATE));
>>> +
>>> + for (i = 0; i < 4; i++)
>>> + res->time[i] += per_cpu(old_runstate_time, cpu)[i];
>>> +}
>>> +
>>> +void xen_accumulate_runstate_time(void)
>>> +{
>>> + struct vcpu_runstate_info state;
>>> + int cpu;
>>> +
>>> + for_each_possible_cpu(cpu) {
>>> + xen_get_runstate_snapshot_cpu(&state, cpu);
>>> + memcpy(per_cpu(old_runstate_time, cpu),
>>> + state.time,
>>> + 4 * sizeof(u64));
>>
>> sizeof(old_runstate_time). (I think this should work for per_cpu variables)
>>
>>> + }
>>
>> Hmm.. This may not perform as intended if we are merely checkpointing
>> (or pausing) the guest (i.e. if HYPERVISOR_suspend() returns 1). We will
>> double-account for the last interval that the guest has run.
>>
>> I'd rather not have yet another per-cpu variable but I can't think of
>> anything else. Perhaps you or others can come up with something better.
>
> I have 3 options so far.
>
> The 1st option is to add yet another per-cpu variable, which you would rather
> not have.
>
> The 2nd option is to borrow from what do_stolen_accounting() used to do: compute
> the delta between the current and the previous time and do nothing if the delta
> is less than 0. The drawback of this option is that the guest might have to wait
> for the new time to catch up with the previous time.
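
If I understand it correctly, at the xen_steal_clock() level that would boil
down to something like the following clamp (only a rough sketch; the per-cpu
last_steal variable is made up):

static DEFINE_PER_CPU(u64, last_steal);

u64 xen_steal_clock(int cpu)
{
        struct vcpu_runstate_info state;
        u64 steal;

        xen_get_runstate_snapshot_cpu(&state, cpu);
        steal = state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];

        /*
         * After a migration the runstate counters restart from 0.  Never
         * report less than the last value handed out, so the delta against
         * this_rq()->prev_steal_time can never be negative; the price is
         * that steal time stalls until the new counters have caught up.
         */
        if (steal < per_cpu(last_steal, cpu))
                steal = per_cpu(last_steal, cpu);
        else
                per_cpu(last_steal, cpu) = steal;

        return steal;
}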

This could be a rather long time. I don't think this is the way to go.

> The 3rd option is to check the return value of HYPERVISOR_suspend() to tell
> whether this is a migration or a checkpoint. As we would double-account the
> runstate time for checkpointing, why not just divide it by 2? The drawback
> of this option is that the result is not accurate, as we would divide the
> increment (the time between before and after the checkpoint) by 2.

And it is wrong if you do multiple migrations.
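
Telling the two cases apart is fine as such, since xen_suspend() already
records the result of the hypercall. Roughly (a sketch from memory of
drivers/xen/manage.c, error handling and the syscore calls left out; the two
*_runstate_time() hooks are made up and would take the place of the
xen_accumulate_runstate_time() call added by this patch):

static int xen_suspend(void *data)
{
        struct suspend_info *si = data;

        gnttab_suspend();
        xen_save_runstate_time();       /* made-up hook: snapshot the times */
        xen_arch_pre_suspend();

        /*
         * Returns 1 if the suspend was cancelled or the domain was merely
         * checkpointed, 0 if it is resuming in a new domain, i.e. only in
         * the latter case have the runstate counters been reset by Xen.
         */
        si->cancelled = HYPERVISOR_suspend(xen_pv_domain()
                                           ? virt_to_gfn(xen_start_info)
                                           : 0);

        xen_arch_post_suspend(si->cancelled);
        xen_restore_runstate_time(si->cancelled); /* made-up hook: fix up */
        gnttab_resume();

        if (!si->cancelled) {
                xen_irq_resume();
                xen_timer_resume();
        }

        return 0;
}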

> Would you please let me know which option we prefer?

Perhaps option 4:

Allocate a buffer at suspend time to hold the times to be added up, do the
correction after suspend, and free the buffer again.
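
Something like the following pair of helpers, called right before the suspend
hypercall and right after it with the "cancelled" result (completely untested
sketch; the names are made up, and it relies on xen_get_runstate_snapshot_cpu()
as modified by this patch, i.e. the snapshot already contains old_runstate_time):

static u64 (*runstate_snapshot)[4];

void xen_save_runstate_time(void)
{
        struct vcpu_runstate_info state;
        int cpu;

        /* xen_suspend() runs with interrupts off, hence GFP_ATOMIC. */
        runstate_snapshot = kmalloc_array(num_possible_cpus(),
                                          sizeof(*runstate_snapshot),
                                          GFP_ATOMIC);
        if (!runstate_snapshot)
                return;

        for_each_possible_cpu(cpu) {
                /* The snapshot already includes old_runstate_time. */
                xen_get_runstate_snapshot_cpu(&state, cpu);
                memcpy(runstate_snapshot[cpu], state.time,
                       sizeof(runstate_snapshot[cpu]));
        }
}

void xen_restore_runstate_time(int cancelled)
{
        int cpu;

        if (!runstate_snapshot)
                return;

        /*
         * Only a real migration resets the runstate counters in Xen, so
         * only then carry the old totals over.  After a cancelled suspend
         * (checkpoint) the counters simply keep running and nothing may be
         * added, which avoids the double-accounting mentioned above.
         */
        if (!cancelled)
                for_each_possible_cpu(cpu)
                        memcpy(per_cpu(old_runstate_time, cpu),
                               runstate_snapshot[cpu],
                               sizeof(runstate_snapshot[cpu]));

        kfree(runstate_snapshot);
        runstate_snapshot = NULL;
}

That way a checkpoint doesn't account anything twice, and repeated migrations
keep adding up correctly.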


Juergen