Re: [net-next v36] mctp pcc: Implement MCTP over PCC Transport
From: Adam Young
Date: Sat Apr 04 2026 - 01:11:47 EST
On 4/2/26 21:30, Jeremy Kerr wrote:
> Hi Adam,
>
>>> On the latter, could you expand on what happens on close? Does the PCC
>>> channel end up calling tx_done() on each pending TX during the channel
>>> free? I'm not familiar with the PCC paths, but it doesn't look like it
>>> (or the mailbox core) has a case to deal with this on free.
>>>
>>> Maybe I am missing something, but could this leak skbs?
>>
>> Yes, it could, but since they are held by another subsystem, and there
>> is no way to access them, it is safer to leak than to free. The ring
>> buffer in the mailbox layer is not accessible from the MCTP client.
>> Additionally, there is no way to force a flush of the ring buffer.
>
> Sure, but it looks like the messages are essentially lost on free; a
> re-bind will clear the mailbox channel's msg buf.
>
> Without a mechanism to purge the message queue on free, it looks like
> the only leak-free way to handle this is to keep track of the pending
> skbs manually.
>
>> There is a potential for this kind of leak even if we were to transfer
>> the data out of the skb: the two options are to either leave it in the
>> ring buffer or risk a use-after-free event. Neither of those are good.
>
> Where is the use-after-free here? Once pcc_mbox_free_channel()
> returns, the pending message buf seems to be forgotten, and so I can't
> see how any pending skb gets referenced.
>
>> Bringing the link back up would immediately send the remaining skbs
>> and cause them to be freed, so they are not frozen in perpetuity with
>> no chance of being sent.
>
> Same here - the mbox message queue looks to be reset on client bind, so
> it appears that they wouldn't get sent?
>
> If you do need to track the skbs yourself, that could be fairly simple:
> keep a sk_buff_head of pending skbs, remove from the list on tx
> completion, and skb_queue_purge on ndo_close.

OK, I can manually traverse the ring buffer and free the skbuffs. The
ring buffer and associated fields are in the mbox_channel, which I have
access to.
>> Whether the backend could handle this is a different story.
>
> A cancellation callback would be handy, yeah.
>
> Cheers,
>
> Jeremy