RE: [PATCH v9 3/6] iio: adc: ad4691: add triggered buffer support
From: Sabau, Radu bogdan
Date: Thu May 07 2026 - 07:41:53 EST
Addressing Sashiko's review comments on the triggered buffer patch.
> -----Original Message-----
> From: Radu Sabau via B4 Relay <devnull+radu.sabau.analog.com@xxxxxxxxxx>
> Sent: Thursday, April 30, 2026 1:17 PM
...
> +static int ad4691_manual_buffer_preenable(struct iio_dev *indio_dev)
> +{
> + struct ad4691_state *st = iio_priv(indio_dev);
> + unsigned int prev_i, k, i;
> + bool first;
> + int ret;
> +
> + memset(st->scan_xfers, 0, sizeof(st->scan_xfers));
> + memset(st->scan_tx, 0, sizeof(st->scan_tx));
> +
> + spi_message_init(&st->scan_msg);
> +
> + first = true;
> + prev_i = 0;
> + k = 0;
> + iio_for_each_active_channel(indio_dev, i) {
> + st->scan_tx[k] = cpu_to_be16(AD4691_ADC_CHAN(i));
> + st->scan_xfers[k].tx_buf = &st->scan_tx[k];
> + /*
> + * The pipeline means xfer[0] receives the residual from the
> + * previous sequence, not a valid sample for channel i. Point
> + * it at vals[i] anyway; xfer[1] (or the NOOP when only one
> + * channel is active) will overwrite that slot with the real
> + * result, so no separate dummy buffer is needed.
> + */
> + if (first) {
> + st->scan_xfers[k].rx_buf = &st->vals[i];
> + first = false;
> + } else {
> + st->scan_xfers[k].rx_buf = &st->vals[prev_i];
> + }
"The IIO subsystem expects data pushed to the buffer to be densely packed
according to the active channels in the scan mask.
If only a subset of channels are enabled, does assigning the rx_buf pointer
directly to absolute array indices at &st->vals[i] leave holes in the buffer?
When iio_push_to_buffers_with_ts() is called, this might cause it to read
uninitialized memory instead of the expected samples."
I would say there is no change needed. Writing to &st->vals[scan_index] and
passing the full array to iio_push_to_buffers_with_ts() is the standard IIO kfifo
pattern: the core demultiplexes by reading data[scan_index * storagebits/8]
for each active channel; holes at inactive indices are silently ignored.
The same pattern is used in ad4695, ad_sigma_delta, and others. The
pipeline residual in the first manual-mode transfer is overwritten by the
subsequent transfer before the scan is pushed, as the comment explains.
> + st->scan_xfers[k].len = sizeof(__be16);
> + st->scan_xfers[k].cs_change = 1;
> + spi_message_add_tail(&st->scan_xfers[k], &st->scan_msg);
> + prev_i = i;
> + k++;
> + }
> +
...
> +static int ad4691_cnv_burst_buffer_preenable(struct iio_dev *indio_dev)
> +{
> + struct ad4691_state *st = iio_priv(indio_dev);
> + unsigned int k, i;
> + int ret;
> +
> + memset(st->scan_xfers, 0, sizeof(st->scan_xfers));
> + memset(st->scan_tx, 0, sizeof(st->scan_tx));
> +
> + spi_message_init(&st->scan_msg);
> +
> + /*
> + * Each AVG_IN read needs two transfers: a 2-byte address write phase
> + * followed by a 2-byte data read phase. CS toggles between channels
> + * (cs_change=1 on the read phase of all but the last channel).
> + */
> + k = 0;
> + iio_for_each_active_channel(indio_dev, i) {
> + st->scan_tx[k] = cpu_to_be16(0x8000 | AD4691_AVG_IN(i));
> + st->scan_xfers[2 * k].tx_buf = &st->scan_tx[k];
> + st->scan_xfers[2 * k].len = sizeof(__be16);
> + spi_message_add_tail(&st->scan_xfers[2 * k], &st->scan_msg);
> + st->scan_xfers[2 * k + 1].rx_buf = &st->vals[i];
> + st->scan_xfers[2 * k + 1].len = sizeof(__be16);
> + st->scan_xfers[2 * k + 1].cs_change = 1;
> + spi_message_add_tail(&st->scan_xfers[2 * k + 1], &st->scan_msg);
> + k++;
> + }
> +
> + st->scan_tx[k] = cpu_to_be16(AD4691_STATE_RESET_REG);
> + st->scan_xfers[2 * k].tx_buf = &st->scan_tx[k];
> + st->scan_xfers[2 * k].len = sizeof(__be16);
> + spi_message_add_tail(&st->scan_xfers[2 * k], &st->scan_msg);
> + st->scan_tx[k + 1] = cpu_to_be16(AD4691_STATE_RESET_ALL << 8);
"Does the iio_for_each_active_channel() loop iterate over the soft timestamp
channel if it is enabled?
If the soft timestamp channel is enabled, k would reach 17. Because
st->scan_tx is sized for 18 elements, does writing to st->scan_tx[k + 1] and
st->scan_xfers[2 * k + 1] overflow the bounds of these arrays?"
Valid defensive concern. With all 16 data channels plus the soft timestamp
enabled, the loop also iterates scan_index 16 (the timestamp), pushing k to
17; the state-reset tail then writes scan_tx[k + 1] = scan_tx[18] and
scan_xfers[2 * k] / [2 * k + 1] = scan_xfers[34] / [35], all past the ends
of their arrays (18 and 34 elements respectively). Fixed by adding a guard
that breaks out of the loop when i reaches num_channels - 1 (the timestamp
scan_index), consistent with how both ad4691_channels and ad4693_channels
are laid out.
"Also, the AD4691 auto-decrements register addresses on multi-byte writes.
Could writing a 2-byte payload to the 1-byte AD4691_STATE_RESET_REG (0x181)
inadvertently write 0x00 to the adjacent lower register 0x180
(AD4691_OSC_EN_REG), disabling the internal oscillator?"
Real bug. The two back-to-back 2-byte transfers ([0x01,0x81] then
[0x01,0x00]) are sent with CS continuously asserted, so the chip sees a
4-byte write: 0x01 to STATE_RESET_REG and 0x00 to OSC_EN_REG, disabling
the oscillator mid-scan. Fixed by using a single 4-byte transfer
[addr_hi, addr_lo, STATE_RESET_ALL, OSC_EN=1] held in a dedicated
u8 scan_tx_reset[4] field: the fourth byte lands on OSC_EN_REG (0x180)
via the descending-address side-write and deliberately re-enables the
oscillator. The same buffer is reused by the offload path in the
following commit (the two paths are mutually exclusive at probe).
> + st->scan_xfers[2 * k + 1].tx_buf = &st->scan_tx[k + 1];
> + st->scan_xfers[2 * k + 1].len = sizeof(__be16);
> + st->scan_xfers[2 * k + 1].cs_change = 1;
> + spi_message_add_tail(&st->scan_xfers[2 * k + 1], &st->scan_msg);
> +
> + ret = spi_optimize_message(st->spi, &st->scan_msg);
> + if (ret)
> + return ret;
> +
> + ret = regmap_write(st->regmap, AD4691_STD_SEQ_CONFIG,
> + bitmap_read(indio_dev->active_scan_mask, 0,
> + iio_get_masklength(indio_dev)));
> + if (ret)
> + goto err_unoptimize;
> +
> + ret = regmap_write(st->regmap, AD4691_ACC_MASK_REG,
> + ~bitmap_read(indio_dev->active_scan_mask, 0,
> + iio_get_masklength(indio_dev)) & GENMASK(15, 0));
> + if (ret)
> + goto err_unoptimize;
> +
> + ret = ad4691_enter_conversion_mode(st);
> + if (ret)
> + goto err_unoptimize;
> +
> + ret = ad4691_sampling_enable(st, true);
> + if (ret)
> + goto err_exit_conv;
> +
> + enable_irq(st->irq);
> + return 0;
"Is there a race condition introduced by enabling the PWM and unmasking the
IRQ here?
If a hardware interrupt fires before the IIO core attaches the trigger's poll
function, iio_trigger_poll() drops the event. Will the IRQ handler then call
disable_irq_nosync() without ever running the consumer thread to re-enable it?"
Valid. preenable runs before the IIO core attaches the trigger's poll
function; if a DATA_READY IRQ fires in that window, the iio_trigger_poll()
event is dropped, disable_irq_nosync() disables the line, and enable_irq()
is never called, leaving the IRQ stuck off. The window is narrow in
practice, but I've moved sampling_enable(true) and enable_irq() into a new
postenable callback, which the IIO core invokes only after the trigger
poll function is attached.
> +
> +err_exit_conv:
> + ad4691_exit_conversion_mode(st);
> +err_unoptimize:
> + spi_unoptimize_message(&st->scan_msg);
> + return ret;
> +}
> +
> +static int ad4691_cnv_burst_buffer_postdisable(struct iio_dev *indio_dev)
> +{
> + struct ad4691_state *st = iio_priv(indio_dev);
> + int ret;
> +
> + disable_irq(st->irq);
"During buffer teardown, the IIO core detaches the trigger consumer before
invoking the postdisable callback.
If an interrupt fires while the PWM is still running before the call to
ad4691_sampling_enable(), could ad4691_irq() disable the IRQ without a
consumer to re-enable it? Does the subsequent disable_irq() call then increase
the disable depth to 2, permanently disabling the hardware IRQ?"
Valid; fixed as part of the rework for the previous concern. IRQ disabling
and sampling_enable(false) now live in a new predisable callback (called
before the consumer is detached); postdisable only handles SPI cleanup.
> +
> + ret = ad4691_sampling_enable(st, false);
> + if (ret)
> + return ret;
"If ad4691_sampling_enable() fails, does returning early bypass
spi_unoptimize_message() and leak the DMA mappings allocated by the SPI core
during preenable?"
Valid. If pwm_apply_might_sleep() inside sampling_enable(false) failed,
spi_unoptimize_message() was skipped. It is now always called before
returning from postdisable, regardless of the sampling_enable() result.
> +
> + ret = ad4691_exit_conversion_mode(st);
...
> +static ssize_t sampling_frequency_store(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t len)
> +{
> + struct iio_dev *indio_dev = dev_to_iio_dev(dev);
> + struct ad4691_state *st = iio_priv(indio_dev);
> + int freq, ret;
> +
> + ret = kstrtoint(buf, 10, &freq);
> + if (ret)
> + return ret;
> +
> + IIO_DEV_ACQUIRE_DIRECT_MODE(indio_dev, claim);
> + if (IIO_DEV_ACQUIRE_FAILED(claim))
> + return -EBUSY;
> +
> + ret = ad4691_set_pwm_freq(st, freq);
"If a negative value is passed to kstrtoint(), does ad4691_set_pwm_freq()
calculate a cnv_period_ns of 0 due to signed integer arithmetic rules?
Could this lead to a division by zero regression in sampling_frequency_show()
when it calculates NSEC_PER_SEC / st->cnv_period_ns?"
Valid. Switched sampling_frequency_store() to kstrtouint() with an
unsigned int freq, so negative strings are rejected at parse time before
reaching ad4691_set_pwm_freq().
> + if (ret)
> + return ret;
> +
> + return len;
> +}
> +
> +static IIO_DEVICE_ATTR_RW(sampling_frequency, 0);
> +
> +static const struct iio_dev_attr *ad4691_buffer_attrs[] = {
> + &iio_dev_attr_sampling_frequency,
> + NULL
> +};
> +
> +static irqreturn_t ad4691_irq(int irq, void *private)
> +{
> + struct iio_dev *indio_dev = private;
> + struct ad4691_state *st = iio_priv(indio_dev);
> +
> + iio_trigger_poll(indio_dev->trig);
> + /*
> + * Keep the DATA_READY IRQ disabled until the trigger handler has
> + * finished reading the scan, to prevent a new assertion mid-transfer.
> + * The PWM continues running uninterrupted; the IRQ is re-enabled in
> + * ad4691_trigger_handler once spi_sync completes.
> + *
> + * IRQF_ONESHOT already masks the hardware line during this threaded
> + * handler, so disable_irq_nosync here ensures the IRQ stays disabled
> + * even after IRQF_ONESHOT unmasks on return.
> + */
> + disable_irq_nosync(st->irq);
> +
> + return IRQ_HANDLED;
> +}
"Can a concurrency issue occur here between this threaded IRQ handler and the
trigger consumer thread?
Since iio_trigger_poll() wakes up the consumer thread, could the consumer
thread complete the SPI transfer and call enable_irq() before the provider
thread reaches disable_irq_nosync()? This might trigger an unbalanced enable
for IRQ warning and leave the hardware IRQ permanently disabled."
Valid. Moved disable_irq_nosync() before iio_trigger_poll() so the
IRQ is always disabled before the consumer thread gets a chance to call
enable_irq().
> +
> +static const struct iio_trigger_ops ad4691_trigger_ops = {
> + .validate_device = iio_trigger_validate_own_device,
...
> +
> static const struct iio_info ad4691_info = {
> .read_raw = &ad4691_read_raw,
> .write_raw = &ad4691_write_raw,
> @@ -514,6 +910,18 @@ static const struct iio_info ad4691_info = {
> .debugfs_reg_access = &ad4691_reg_access,
> };
"Without a .validate_trigger callback here, can external triggers be attached?
If an external trigger fires, the consumer thread might execute
enable_irq(st->irq) without a corresponding hardware interrupt calling
disable_irq_nosync() first, leading to an unbalanced IRQ enable warning."
Valid. Added .validate_trigger = iio_validate_own_trigger to
ad4691_info to prevent external triggers from causing unbalanced
enable_irq() calls.