Re: [PATCH v8 1/7] sched/fair: Provide u64 read for 32-bits arch helper
From: Dietmar Eggemann
Date: Thu May 05 2022 - 14:33:30 EST
On 29/04/2022 16:11, Vincent Donnefort wrote:
> Introduce macro helpers u64_u32_{store,load}() to factorize lockless
> accesses to u64 variables on 32-bit architectures.
>
> Users are for now cfs_rq.min_vruntime and sched_avg.last_update_time. To
> accommodate the latter, where the copy lies outside of the structure
> (cfs_rq.last_update_time_copy instead of sched_avg.last_update_time_copy),
> use the _copy() version of those helpers.
>
> Those new helpers encapsulate smp_rmb() and smp_wmb() synchronization and
> therefore have a small penalty in set_task_rq_fair() and init_cfs_rq().
... but obviously only on 32-bit machines. And for set_task_rq_fair() we
now do one smp_rmb() per cfs_rq (prev and next), just as we do one
smp_wmb() per cfs_rq in update_cfs_rq_load_avg().
[...]
> @@ -3786,8 +3770,9 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> decayed |= __update_load_avg_cfs_rq(now, cfs_rq);
>
> #ifndef CONFIG_64BIT
Can we not get rid of this last CONFIG_64BIT here?
> - smp_wmb();
> - cfs_rq->load_last_update_time_copy = sa->last_update_time;
> + u64_u32_store_copy(sa->last_update_time,
> + cfs_rq->last_update_time_copy,
> + sa->last_update_time);
(sa->last_update_time = sa->last_update_time); should dissolve on 64-bit.
> #endif
[...]
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>