Re: [PATCH net-next v4 1/2] net: pse-pd: add devm_pse_poll_helper()

From: Jakub Kicinski

Date: Mon Apr 13 2026 - 18:58:46 EST


This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
net: pse-pd: add devm_pse_poll_helper()

Extract the common event handling loop from pse_isr() into a shared
pse_handle_events() function, and add a generic poll-based alternative
to the IRQ path for PSE controllers that lack interrupt support.

> Work cancellation on teardown is handled via devm_add_action_or_reset()
> to ensure the delayed work is cancelled before poll_notifs is freed
> by devres, avoiding a use-after-free when devm_pse_poll_helper() is
> called after devm_pse_controller_register() (devres LIFO ordering).

Is it safe to rely on this devm registration order to prevent a
use-after-free?

Linux drivers conventionally set up interrupts and pollers before
registering the device, so that a not-yet-ready device is never exposed
to userspace.

If a driver follows this standard ordering by calling the helper before
registration, devres will unregister the controller first. This will
tear down resources in pse_controller_unregister() while the poll work
remains active.

If the worker executes during this race window, could it access the
already-freed pcdev->pi array or push to the destroyed ntf_fifo? Would
it be safer to explicitly cancel asynchronous event sources within
pse_controller_unregister() instead?

> --- a/drivers/net/pse-pd/pse_core.c
> +++ b/drivers/net/pse-pd/pse_core.c
[ ... ]
> +static void pse_handle_events(struct pse_controller_dev *pcdev,
> + unsigned long *notifs,
> + unsigned long notifs_mask)
> {
> - struct pse_controller_dev *pcdev;
> - unsigned long notifs_mask = 0;
> - struct pse_irq_desc *desc;
> - struct pse_irq *h = data;
> - int ret, i;
> -
> - desc = &h->desc;
> - pcdev = h->pcdev;
> -
> - /* Clear notifs mask */
> - memset(h->notifs, 0, pcdev->nr_lines * sizeof(*h->notifs));
> - mutex_lock(&pcdev->lock);
> - ret = desc->map_event(irq, pcdev, h->notifs, &notifs_mask);
> - if (ret || !notifs_mask) {
> - mutex_unlock(&pcdev->lock);
> - return IRQ_NONE;
> - }
> + int i;
>
> for_each_set_bit(i, &notifs_mask, pcdev->nr_lines) {

Can this lead to an out-of-bounds read on the stack if a controller
supports more than BITS_PER_LONG ports?

The notifs_mask variable is passed by value to pse_handle_events(), but
for_each_set_bit() takes its address. If pcdev->nr_lines is greater than
BITS_PER_LONG (e.g., a 48-port switch on a 32-bit architecture), the
macro will read past the argument's storage into adjacent, unrelated
stack memory.

[ ... ]
> +static irqreturn_t pse_isr(int irq, void *data)
> +{
> + struct pse_controller_dev *pcdev;
> + unsigned long notifs_mask = 0;
> + struct pse_irq *h = data;
> + int ret;
> +
> + pcdev = h->pcdev;
> +
> + /* Clear notifs mask */
> + memset(h->notifs, 0, pcdev->nr_lines * sizeof(*h->notifs));
> + mutex_lock(&pcdev->lock);
> + ret = h->desc.map_event(irq, pcdev, h->notifs, &notifs_mask);

Could this also result in a stack out-of-bounds write?

If map_event() uses set_bit() to report an event on a port index greater
than or equal to BITS_PER_LONG, it will write past the end of the scalar
notifs_mask variable (4 bytes on a 32-bit kernel, 8 on a 64-bit one),
corrupting adjacent stack memory.

[ ... ]
> +static void pse_poll_worker(struct work_struct *work)
> +{
> + struct pse_controller_dev *pcdev =
> + container_of(work, struct pse_controller_dev,
> + poll_work.work);
> + unsigned long notifs_mask = 0;
> + int ret;
> +
> + memset(pcdev->poll_notifs, 0,
> + pcdev->nr_lines * sizeof(*pcdev->poll_notifs));
> + mutex_lock(&pcdev->lock);
> + ret = pcdev->poll_desc.map_event(0, pcdev, pcdev->poll_notifs,
> + &notifs_mask);

Does this share the same out-of-bounds write risk for notifs_mask as the
IRQ handler above?
--
pw-bot: cr