On Thu, Aug 25, 2022 at 09:01:18PM -0400, Waiman Long wrote:
> @@ -2722,6 +2734,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>  			complete = true;
>  		}
> 
> +		swap_user_cpus_ptr(p, puser_mask);
>  		task_rq_unlock(rq, p, rf);
> 
>  		if (push_task) {
> @@ -2793,6 +2806,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>  		if (flags & SCA_MIGRATE_ENABLE)
>  			p->migration_flags &= ~MDF_PUSH;
> 
> +		swap_user_cpus_ptr(p, puser_mask);
>  		task_rq_unlock(rq, p, rf);
> 
>  		if (!stop_pending) {
> @@ -2813,6 +2827,8 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>  				complete = true;
>  			}
>  		}
> +
> +		swap_user_cpus_ptr(p, puser_mask);
>  		task_rq_unlock(rq, p, rf);
> 
>  		if (complete)
I'm not at all sure about those.

Would it not be much simpler to keep the update of cpus_mask and
cpus_user_mask together, always ensuring that cpus_user_mask is a strict
superset of cpus_mask? That is, set_cpus_allowed_common() seems like
the right place to me.
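
A rough, untested sketch of the sort of thing I mean -- set_cpus_allowed_common()
is real, but the extra user_mask argument and the way it gets plumbed through
are made up here, and allocation of p->user_cpus_ptr is left to the caller:

static void
set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask,
			const struct cpumask *user_mask, u32 flags)
{
	if (flags & (SCA_MIGRATE_ENABLE | SCA_MIGRATE_DISABLE)) {
		p->cpus_ptr = new_mask;
		return;
	}

	cpumask_copy(&p->cpus_mask, new_mask);
	p->nr_cpus_allowed = cpumask_weight(new_mask);

	/*
	 * If the change came from sched_setaffinity(), record the user
	 * requested mask right here, together with cpus_mask. new_mask is
	 * the user mask restricted by cpuset/arch limits, so user_cpus_ptr
	 * stays a superset of cpus_mask by construction.
	 */
	if (user_mask && p->user_cpus_ptr)
		cpumask_copy(p->user_cpus_ptr, user_mask);
}

That would mean the sched_class::set_cpus_allowed hook and
__do_set_cpus_allowed() grow the extra argument too, but then the two masks
are only ever written in one place and cannot get out of sync.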
I'm thinking this also means blowing away user_mask when we do a full
reset of the cpus_mask on an affinity break.
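
Something like the below, perhaps; the helper name and call site are invented,
it assumes the same locking as the other user_cpus_ptr updates, and the caller
does the actual kfree() after dropping the locks, much like
release_user_cpus_ptr() does today:

/* Called with p->pi_lock (and the rq lock) held, like the other mask updates. */
static struct cpumask *affinity_break_user_mask(struct task_struct *p,
						const struct cpumask *new_mask)
{
	struct cpumask *user_mask = p->user_cpus_ptr;

	/* The user constraint is meaningless after a forced reset. */
	p->user_cpus_ptr = NULL;
	do_set_cpus_allowed(p, new_mask);

	/* Caller kfree()s this once the locks are dropped. */
	return user_mask;
}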