Re: [PATCH RFC v11 12/12] dmaengine: qcom: bam_dma: add support for BAM locking

From: Bartosz Golaszewski

Date: Mon Mar 09 2026 - 13:14:24 EST


On Sat, 7 Mar 2026 21:33:22 +0100, Vinod Koul <vkoul@xxxxxxxxxx> said:
> On 04-03-26, 16:27, Bartosz Golaszewski wrote:
>> On Wed, Mar 4, 2026 at 3:53 PM Vinod Koul <vkoul@xxxxxxxxxx> wrote:
>> >
>> > On 02-03-26, 16:57, Bartosz Golaszewski wrote:
>> > > Add support for BAM pipe locking. To that end: when starting the DMA on
>> > > an RX channel - wrap the already issued descriptors with additional
>> > > command descriptors performing dummy writes to the base register
>> > > supplied by the client via dmaengine_slave_config() (if any) alongside
>> > > the lock/unlock HW flags.
>> > >
>> > > Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@xxxxxxxxxxxxxxxx>
>>
>> [snip]
>>
>> > > +static struct bam_async_desc *
>> > > +bam_make_lock_desc(struct bam_chan *bchan, struct scatterlist *sg,
>> > > + struct bam_cmd_element *ce, unsigned int flag)
>> > > +{
>> > > + struct dma_chan *chan = &bchan->vc.chan;
>> > > + struct bam_async_desc *async_desc;
>> > > + struct bam_desc_hw *desc;
>> > > + struct virt_dma_desc *vd;
>> > > + struct virt_dma_chan *vc;
>> > > + unsigned int mapped;
>> > > + dma_cookie_t cookie;
>> > > + int ret;
>> > > +
>> > > + async_desc = kzalloc_flex(*async_desc, desc, 1, GFP_NOWAIT);
>> > > + if (!async_desc) {
>> > > + dev_err(bchan->bdev->dev, "failed to allocate the BAM lock descriptor\n");
>> > > + return NULL;
>> > > + }
>> > > +
>> > > + async_desc->num_desc = 1;
>> > > + async_desc->curr_desc = async_desc->desc;
>> > > + async_desc->dir = DMA_MEM_TO_DEV;
>> > > +
>> > > + desc = async_desc->desc;
>> > > +
>> > > + bam_prep_ce_le32(ce, bchan->slave.dst_addr, BAM_WRITE_COMMAND, 0);
>> > > + sg_set_buf(sg, ce, sizeof(*ce));
>> > > +
>> > > + mapped = dma_map_sg_attrs(chan->slave, sg, 1, DMA_TO_DEVICE, DMA_PREP_CMD);
>> > > + if (!mapped) {
>> > > + kfree(async_desc);
>> > > + return NULL;
>> > > + }
>> > > +
>> > > + desc->flags |= cpu_to_le16(DESC_FLAG_CMD | flag);
>> > > + desc->addr = sg_dma_address(sg);
>> > > + desc->size = sizeof(struct bam_cmd_element);
>> > > +
>> > > + vc = &bchan->vc;
>> > > + vd = &async_desc->vd;
>> > > +
>> > > + dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
>> > > + vd->tx.flags = DMA_PREP_CMD;
>> > > + vd->tx.desc_free = vchan_tx_desc_free;
>> > > + vd->tx_result.result = DMA_TRANS_NOERROR;
>> > > + vd->tx_result.residue = 0;
>> > > +
>> > > + cookie = dma_cookie_assign(&vd->tx);
>> > > + ret = dma_submit_error(cookie);
>> >
>> > I am not sure I understand this.
>> >
>> > At the start you add a descriptor to the queue, which ideally should be
>> > queued after the existing descriptors have completed!
>> >
>> > Also I thought you wanted to append the pipe cmd to the descriptors, why
>> > not do this while preparing the descriptors and add the pipe cmd at the
>> > start and end of the sequence when you prepare... This way you don't need
>> > to create a cookie like this
>> >
>>
>> The client (in this case: the crypto engine) can call
>> dmaengine_prep_slave_sg() multiple times, adding several logical
>> descriptors, each of which can itself consist of several hardware
>> descriptors. We want to lock the channel before issuing the first
>> queued descriptor (for crypto: typically a data descriptor) and unlock
>> it once the final descriptor is processed (typically a command
>> descriptor). To that end: we insert a dummy command descriptor with the
>> lock flag at the head of the queue and one with the unlock flag at the
>> tail - "wrapping" the existing queue with lock/unlock operations.
>
> Why not do this per prep call submitted to the engine? It would be
> simpler to just add lock and unlock to the start and end of each
> transaction.
>

Because then we'd have:

[LOCK] [DATA] [UNLOCK] [LOCK] [CMD] [UNLOCK]

while what we want is:

[LOCK] [DATA] [CMD] [UNLOCK]

Bartosz