Re: [PATCH v5 0/5] DMA mapping changes for SCSI core

From: Damien Le Moal
Date: Mon Jul 11 2022 - 07:22:21 EST


On 7/11/22 16:36, John Garry wrote:
> On 11/07/2022 00:08, Damien Le Moal wrote:
>>> Ah, I think that I misunderstood Damien's question. I thought he was
>>> asking why not keep shost max_sectors at dma_max_mapping_size() and then
>>> init each sdev request queue max hw sectors at dma_opt_mapping_size().
>> I was suggesting the reverse :) Keep the device hard limit
>> (max_hw_sectors) at the max DMA mapping size and set the soft limit
>> (max_sectors) to the optimal DMA mapping size.
>
> Sure, but as I mentioned below, I only see a small % of requests whose
> mapping size exceeds max_sectors, yet those few still cause a big
> performance hit. That is why I want to set the hard limit to the
> optimal DMA mapping size.
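
To put my suggestion in code form, I was thinking of something along
these lines (untested sketch, not taken from your series; dma_dev stands
for whatever device the LLDD uses for DMA mapping and q for the sdev
request queue):

	/*
	 * Untested sketch: hard limit from the max DMA mapping size,
	 * soft limit from the optimal one.
	 */
	blk_queue_max_hw_sectors(q,
			dma_max_mapping_size(dma_dev) >> SECTOR_SHIFT);
	q->limits.max_sectors = min_t(unsigned int, queue_max_sectors(q),
			dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);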

How can you possibly end up with requests larger than max_sectors? BIO
splitting is done using this limit, right? Or is it that request merging
is allowed up to max_hw_sectors even if the resulting request size
exceeds max_sectors?
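
In other words, I would expect the merge path to be bounded roughly like
this (pseudo-code, not verbatim from the block layer; req and bio are the
request being extended and the incoming bio):

	/*
	 * Pseudo-code only: if back-merging is capped at the soft limit
	 * like this, a merged request should never grow past max_sectors
	 * either.
	 */
	if (blk_rq_sectors(req) + bio_sectors(bio) > queue_max_sectors(q))
		return 0;	/* do not merge */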

>
> Indeed, the IOMMU IOVA caching limit is already the same as the default
> max_sectors for the disks in my system - 128 KB for a 4 KB page size.
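
(For reference, that 128 KB figure matches the IOVA rcache bound.
Assuming the usual constant in drivers/iommu/iova.c, the largest cached
range works out as:

	PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1) = 4 KB << 5 = 128 KB

for a 4 KB page size, which is indeed the same as the 128 KB default
max_sectors.)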
>
>>
>>> But it seems that you want to know why not have the request queue max
>>> sectors at dma_opt_mapping_size(). The answer is related to the meaning
>>> of dma_opt_mapping_size(). If we get any mappings which exceed this
>>> size then they can take a big DMA mapping performance hit. So I set max
>>> hw sectors to this 'opt' mapping size to ensure that we get no mappings
>>> which exceed it. Indeed, I think max_sectors is 128 KB today for my
>>> host, which is the same as the dma_opt_mapping_size() value with an
>>> IOMMU enabled. And I find that only a small % of requests exceed this
>>> 128 KB size, but they still have a big performance impact.
>>>
>
> Thanks,
> John


--
Damien Le Moal
Western Digital Research