Re: [PATCH v2 5/7] nvme: set discard_granularity from NPDG/NPDA

From: Christoph Hellwig

Date: Tue Feb 24 2026 - 09:34:23 EST


On Fri, Feb 20, 2026 at 08:33:00PM -0700, Caleb Sander Mateos wrote:
> +	if (ctrl->dmrsl && ctrl->dmrsl <= nvme_sect_to_lba(ns->head, UINT_MAX))
> +		lim->max_hw_discard_sectors =
> +			nvme_lba_to_sect(ns->head, ctrl->dmrsl);
> +	else if (ctrl->oncs & NVME_CTRL_ONCS_DSM)
> +		lim->max_hw_discard_sectors = UINT_MAX;
> +	else
> +		lim->max_hw_discard_sectors = 0;
> +
> +	if (optperf & 0x2 && nvm && nvm->npdgl)
> +		npdg = le32_to_cpu(nvm->npdgl);
> +	else if (optperf & 0x1)
> +		npdg = (u32)le16_to_cpu(id->npdg) + 1;
> +	if (optperf & 0x2 && nvm && nvm->npdal)
> +		npda = le32_to_cpu(nvm->npdal);
> +	else if (optperf)
> +		npda = (u32)le16_to_cpu(id->npda) + 1;

This code looks like black magic. That is mostly the fault of the spec for
using the weird matrix with unnamed values and the weird "0based" encoding.
Please add a comment referencing the figure in the spec that this code
decodes. And maybe also add a from0based helper mirroring to0based in nvmet.