Hi Antonio,
On Wed, Mar 26, 2025 at 5:41 PM Antonio Quartulli <antonio@xxxxxxxxxxx> wrote:
+/* Get the next packet ID for xmit */
+static inline int ovpn_pktid_xmit_next(struct ovpn_pktid_xmit *pid, u32 *pktid)
+{
+	const s64 seq_num = atomic64_fetch_add_unless(&pid->seq_num, 1,
+						      0x100000000LL);
+	/* when the 32bit space is over, we return an error because the packet
+	 * ID is used to create the cipher IV and we do not want to reuse the
+	 * same value more than once
+	 */
+	if (unlikely(seq_num == 0x100000000LL))
+		return -ERANGE;
You may use a 32-bit atomic_t: instead of checking if it equals
0x1_00000000, check if it wraps around to zero.
Additionally, you don't need full memory ordering as you just want an
incrementing value:
int seq_num = atomic_fetch_inc_relaxed(&pid->seq_num);
if (unlikely(!seq_num))
...
But then if we have concurrent invocations of ovpn_pktid_xmit_next(),
only the first one will error out on wrap-around, while the others will
return no error (seq_num becomes > 0) and will allow the packets to go
through.
This is not what we want.
Got it. With a plain wrapping increment, once the counter passes zero the
later callers read nonzero values again and would hand out duplicate
packet IDs. You could replace it with
atomic_fetch_add_unless(&pid->seq_num, 1, 0) and check whether the value
it returns is zero: once the counter has wrapped to 0, the add is never
performed again, so every subsequent caller also reads 0 and errors out.
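Roughly something like this (a minimal sketch on my side, assuming
pid->seq_num is switched to an atomic_t and ovpn_pktid_xmit_init() still
seeds it with 1):

static inline int ovpn_pktid_xmit_next(struct ovpn_pktid_xmit *pid, u32 *pktid)
{
	/* add_unless() never increments a counter that already reads 0, so
	 * once the 32-bit ID space is exhausted the counter sticks at 0 and
	 * every caller from then on fails
	 */
	const int seq_num = atomic_fetch_add_unless(&pid->seq_num, 1, 0);

	if (unlikely(!seq_num))
		return -ERANGE;

	*pktid = (u32)seq_num;

	return 0;
}

That keeps the behaviour of the 64-bit version (IDs 1..0xffffffff, then a
persistent error) with a single 32-bit atomic op.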
However, what about the opposite scenario? If multiple threads
concurrently invoke ovpn_pktid_xmit_next() and all detect the
wraparound condition, could this lead to simultaneous calls to
ovpn_crypto_kill_key() and ovpn_nl_key_swap_notify()?
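If so, I guess the error path would need a one-shot guard so that only
one of the racing threads actually kills the key and notifies userspace.
Something along these lines, purely as a hypothetical sketch (the
'pktid_exhausted' flag and the ks/pid_xmit field names are illustrative,
not part of the current patch):

	if (ovpn_pktid_xmit_next(&ks->pid_xmit, &pktid) < 0) {
		/* 'pktid_exhausted' would be an unsigned long flag in the
		 * key slot; test_and_set_bit() lets only the first thread
		 * past the wrap point run the teardown/notification
		 */
		if (!test_and_set_bit(0, &ks->pktid_exhausted)) {
			/* single call site for ovpn_crypto_kill_key() and
			 * ovpn_nl_key_swap_notify()
			 */
		}
		return -ERANGE;
	}

That would make the kill/notify step idempotent even if several TX
threads hit -ERANGE at the same time.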