Re: [PATCH net-next 8/8] net: macb: add Tx zero-copy AF_XDP support
From: Maxime Chevallier
Date: Fri Mar 06 2026 - 12:54:41 EST
Hi,
On 06/03/2026 18:18, Théo Lebrun wrote:
> Hello!
>
> On Fri Mar 6, 2026 at 1:48 PM CET, Maxime Chevallier wrote:
>> On 04/03/2026 19:24, Théo Lebrun wrote:
>>> Add a new buffer type (to `enum macb_tx_buff_type`). Near the end of
>>> macb_tx_complete(), we go and read the XSK buffers using
>>> xsk_tx_peek_release_desc_batch() and append those buffers to our Tx
>>> ring.
>>>
>>> Additionally, in macb_tx_complete(), we signal to the XSK subsystem
>>> the number of bytes completed and conditionally mark the need_wakeup
>>> flag.
>>>
>>> Lastly, we implement XSK wakeup by writing the TCOMP bit in the
>>> per-queue IMR register, to ensure NAPI scheduling will take place.
>>>
>>> Signed-off-by: Théo Lebrun <theo.lebrun@xxxxxxxxxxx>
>>> ---
>>
>> [...]
>>
>>> +static void macb_xdp_xmit_zc(struct macb *bp, unsigned int queue_index, int budget)
>>> +{
>>> + struct macb_queue *queue = &bp->queues[queue_index];
>>> + struct xsk_buff_pool *xsk = queue->xsk_pool;
>>> + dma_addr_t mapping;
>>> + u32 slot_available;
>>> + size_t bytes = 0;
>>> + u32 batch;
>>> +
>>> + guard(spinlock_irqsave)(&queue->tx_ptr_lock);
>>> +
>>> + /* This is a hard error, log it. */
>>> + slot_available = CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size);
>>> + if (slot_available < 1) {
>>> + netif_stop_subqueue(bp->dev, queue_index);
>>> + netdev_dbg(bp->dev, "tx_head = %u, tx_tail = %u\n",
>>> + queue->tx_head, queue->tx_tail);
>>> + return;
>>> + }
>>> +
>>> + batch = min_t(u32, slot_available, budget);
>>> + batch = xsk_tx_peek_release_desc_batch(xsk, batch);
>>> + if (!batch)
>>> + return;
>>> +
>>> + for (u32 i = 0; i < batch; i++) {
>>> + struct xdp_desc *desc = &xsk->tx_descs[i];
>>> +
>>> + mapping = xsk_buff_raw_get_dma(xsk, desc->addr);
>>> + xsk_buff_raw_dma_sync_for_device(xsk, mapping, desc->len);
>>> +
>>> + macb_xdp_submit_buff(bp, queue_index, (struct macb_tx_buff){
>>> + .ptr = NULL,
>>> + .mapping = mapping,
>>> + .size = desc->len,
>>> + .mapped_as_page = false,
>>> + .type = MACB_TYPE_XSK,
>>> + });
>>> +
>>> + bytes += desc->len;
>>> + }
>>> +
>>> + /* Make newly initialized descriptor visible to hardware */
>>> + wmb();
>>> + spin_lock(&bp->lock);
>>> + macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
>>> + spin_unlock(&bp->lock);
>>
>> This lock is also taken in interrupt context, so this should probably
>> use an irqsave/restore variant. Now, there are a few other parts of
>> this driver that use a plain spin_lock() call, and except for the
>> paths that actually run in interrupt context, they don't seem correct
>> to me :(
>
> I almost sent a reply agreeing with you, but actually here is the
> exhaustive `spin_lock(&bp->lock)` list:
>
> # Function Context
> ------------------------------------------
> 1 gem_wol_interrupt() irq
> 2 macb_interrupt() irq
> 3 macb_wol_interrupt() irq
> 4 macb_tx_error_task() workqueue/user
> 5 macb_tx_restart() napi/softirq
> 6 macb_xdp_xmit_zc() napi/softirq
> 7 macb_start_xmit() user
> 8 macb_xdp_submit_frame() user
>
> And all contexts are safe, because the non-IRQ contexts (#4-8) always
> use this sequence:
>
> spin_lock_irqsave(&queue->tx_ptr_lock, flags);
> spin_lock(&bp->lock);
> spin_unlock(&bp->lock);
> spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
Is it because of the guard statement?
guard(spinlock_irqsave)(&queue->tx_ptr_lock);
It really doesn't make it obvious that this is how it plays out :(
>
> So queue->tx_ptr_lock always wraps bp->lock and does the local CPU IRQ
> disabling.
>
> (I also checked we don't risk an ABBA deadlock, and we don't: all code
> acquires queue->tx_ptr_lock THEN bp->lock.)
>
> However, there is still a bug in the code you quoted: setting
> MACB_BIT(TSTART) is done twice by macb_xdp_xmit_zc():
> - once in the helper function macb_xdp_submit_buff(), and
> - once in its own body (the code you quoted).
> This is fixed for V2!
great :)
Maxime