Re: [PATCH] iio: trigger: Fix refcount leak in viio_trigger_alloc() error path
From: Guangshuo Li
Date: Sat Apr 11 2026 - 05:30:59 EST
Hi Dan,
Thank you very much for your review and for pointing this out.
The kernel version on our side is `v6.19-rc8-214-ge7aa57247700`. For
clarity, below is the full `viio_trigger_alloc()` function in our
tree:
```c
struct iio_trigger *viio_trigger_alloc(struct device *parent,
				       struct module *this_mod,
				       const char *fmt,
				       va_list vargs)
{
	struct iio_trigger *trig;
	int i;

	trig = kzalloc(sizeof(*trig), GFP_KERNEL);
	if (!trig)
		return NULL;

	trig->dev.parent = parent;
	trig->dev.type = &iio_trig_type;
	trig->dev.bus = &iio_bus_type;
	device_initialize(&trig->dev);
	INIT_WORK(&trig->reenable_work, iio_reenable_work_fn);
	mutex_init(&trig->pool_lock);

	trig->subirq_base = irq_alloc_descs(-1, 0,
					    CONFIG_IIO_CONSUMERS_PER_TRIGGER,
					    0);
	if (trig->subirq_base < 0)
		goto free_trig;

	trig->name = kvasprintf(GFP_KERNEL, fmt, vargs);
	if (trig->name == NULL)
		goto free_descs;

	INIT_LIST_HEAD(&trig->list);

	trig->owner = this_mod;

	trig->subirq_chip.name = trig->name;
	trig->subirq_chip.irq_mask = &iio_trig_subirqmask;
	trig->subirq_chip.irq_unmask = &iio_trig_subirqunmask;
	for (i = 0; i < CONFIG_IIO_CONSUMERS_PER_TRIGGER; i++) {
		irq_set_chip(trig->subirq_base + i, &trig->subirq_chip);
		irq_set_handler(trig->subirq_base + i, &handle_simple_irq);
		irq_modify_status(trig->subirq_base + i,
				  IRQ_NOREQUEST | IRQ_NOAUTOEN, IRQ_NOPROBE);
	}

	return trig;

free_descs:
	irq_free_descs(trig->subirq_base, CONFIG_IIO_CONSUMERS_PER_TRIGGER);
free_trig:
	kfree(trig);

	return NULL;
}
```
So in this version, both error paths are reached after
`device_initialize()` has already taken a reference on `trig->dev`.
That is why I thought `put_device(&trig->dev)` would be more
appropriate here than freeing `trig` directly with `kfree()`.
Also, since `irq_alloc_descs()` can return a negative error code,
`trig->subirq_base` would still hold that negative value when the
release callback runs, so I thought changing the release-side check to
`trig->subirq_base >= 0` was needed as well.
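To make that concrete, this is roughly the shape of the change I had in
mind (a sketch only, not the exact patch; the `out_put` label is just an
illustrative name, and I am assuming the release callback is the one
that frees the irq descriptors):

```c
	/*
	 * Sketch: on error, drop the reference taken by
	 * device_initialize() instead of calling kfree(trig), and let
	 * the release callback do the cleanup. The explicit
	 * irq_free_descs() in the error path then goes away, since the
	 * release callback frees the descriptors itself.
	 */
	trig->name = kvasprintf(GFP_KERNEL, fmt, vargs);
	if (trig->name == NULL)
		goto out_put;

	/* ... rest of the function unchanged ... */

out_put:
	put_device(&trig->dev);	/* was: irq_free_descs(); kfree(trig); */
	return NULL;
```

And on the release side, since `trig->subirq_base` can hold a negative
errno when `irq_alloc_descs()` failed, the descriptor-freeing would need
to be guarded accordingly:

```c
	/* In the release callback (sketch): */
	if (trig->subirq_base >= 0)
		irq_free_descs(trig->subirq_base,
			       CONFIG_IIO_CONSUMERS_PER_TRIGGER);
```

Please let me know if this does not match how you would prefer the
cleanup to be structured.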
I may be missing something here, so I would very much appreciate any
correction if my understanding is off.
Best regards,
Guangshuo