Re: [PATCH net-next 5/7] net/mlx5: E-Switch, block representors during reconfiguration

From: Mark Bloch

Date: Tue Apr 14 2026 - 03:32:26 EST




On 14/04/2026 1:22, Jakub Kicinski wrote:
> On Thu, 9 Apr 2026 14:55:48 +0300 Tariq Toukan wrote:
>> A spinlock is out because the protected work can sleep (RDMA ops,
>> devcom, netdev callbacks). A mutex won't work either: esw_mode_change()
>> has to drop the guard mid-flight so mlx5_rescan_drivers_locked() can
>> reload mlx5_ib, which calls back into mlx5_eswitch_register_vport_reps()
>> on the same thread. Beyond that, any real lock would create an ABBA
>> cycle: the LAG side holds the LAG lock when it calls reps_block(), and
>> the mlx5_ib side holds RDMA locks when it calls register_vport_reps(),
>> and those two subsystems talk to each other. The atomic CAS loop avoids
>> all of this - no lock ordering, no sleep restrictions, and the owner
>> can drop the guard and let a nested caller win the next transition
>> before reclaiming it.
>
> You gotta explain to me how a busy loop waiting for a bit to go
> to "UNBLOCKED" state is anything else than a homegrown lock :S

It is indeed lock-like in the sense that it serializes progress, but the
main reason for using atomics here is that I need a "wait until the
state changes" mechanism. I could have implemented it with a spinlock,
for example:

+static void mlx5_esw_mark_reps(struct mlx5_eswitch *esw,
+			       enum mlx5_esw_offloads_rep_type_state old,
+			       enum mlx5_esw_offloads_rep_type_state new)
+{
+again:
+	spin_lock(&esw->offloads.reps_conf_lock);
+
+	if (esw->offloads.reps_conf_state == old) {
+		esw->offloads.reps_conf_state = new;
+	} else {
+		spin_unlock(&esw->offloads.reps_conf_lock);
+		goto again;
+	}
+
+	spin_unlock(&esw->offloads.reps_conf_lock);
+}

but this effectively turns the spinlock into a busy-wait loop, which
felt a bit odd to me. That said, if you think the spinlock-based
approach is preferable here, I can switch to that.

>
> Also what purpose does the atomic_cond_read_relaxed() serve?
> I haven't seen it being used before.

I decided to use it for a few reasons:
- It uses READ_ONCE(), and I don’t need acquire semantics at that
point since the actual state transition is done with
atomic_cmpxchg().

- The common implementation includes cpu_relax(), so it avoids a tight
spin loop.

- On some architectures (e.g., arm64) it may map to more efficient
  wait-for-change instructions. In practice I didn't test on arm64,
  but looking at the kernel code it has the logic for that (see
  __cmpwait_case_##sz in arch/arm64/include/asm/cmpxchg.h).

Mark