Re: [PATCH] dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID

From: Zi Yan
Date: Tue Jan 16 2018 - 15:01:11 EST


On 12 Jan 2018, at 11:56, Vinod Koul wrote:

On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
From: Zi Yan <zi.yan@xxxxxxxxxxxxxx>

When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
wraps to 0 when the unmap pool is fully used. This triggers a BUG()
when the struct dmaengine_unmap_data is freed. Use u16 to fix the
problem.

Signed-off-by: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
---
include/linux/dmaengine.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index f838764993eb..861be5cab1df 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -470,7 +470,11 @@ typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
 		const struct dmaengine_result *result);
 
 struct dmaengine_unmap_data {
+#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
+	u16 map_cnt;
+#else
 	u8 map_cnt;
+#endif
 	u8 to_cnt;
 	u8 from_cnt;
 	u8 bidi_cnt;

Would that cause an adverse performance impact? The data structure is
not aligned anymore. Dan, was that a consideration while adding this?


It will only cost two more cache misses (one when mapping the data, the
other when unmapping it) per DMA engine operation, no matter what the
data size is. And there is no impact on the actual DMA transfers, so
the overall impact should be minimal.
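
To make the wraparound concrete, here is a minimal userspace sketch
(illustrative only, not kernel code; the 256 mirrors the RAID unmap
pool size, and the counter names are hypothetical):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t  map_cnt_u8  = 0;	/* mimics the old u8 map_cnt */
	uint16_t map_cnt_u16 = 0;	/* mimics the widened u16 map_cnt */

	/* Count one mapping per slot, as a fully used 256-entry
	 * RAID unmap pool would. */
	for (int i = 0; i < 256; i++) {
		map_cnt_u8++;
		map_cnt_u16++;
	}

	/* The u8 counter has wrapped to 0, so the unmap path would
	 * think there is nothing to unmap and the leftover mappings
	 * trip BUG() on free; the u16 counter holds the real count. */
	printf("u8  map_cnt: %u\n", (unsigned)map_cnt_u8);	/* 0 */
	printf("u16 map_cnt: %u\n", (unsigned)map_cnt_u16);	/* 256 */
	return 0;
}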
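
And as a quick, ABI-dependent layout check, a userspace sketch that
mirrors only the leading fields (the struct names are made up, and a
plain pointer stands in for struct device *dev):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct unmap_u8 {			/* old layout */
	uint8_t map_cnt;
	uint8_t to_cnt;
	uint8_t from_cnt;
	uint8_t bidi_cnt;
	void *dev;
};

struct unmap_u16 {			/* widened layout */
	uint16_t map_cnt;
	uint8_t to_cnt;
	uint8_t from_cnt;
	uint8_t bidi_cnt;
	void *dev;
};

int main(void)
{
	printf("u8 : size %zu, dev at offset %zu\n",
	       sizeof(struct unmap_u8), offsetof(struct unmap_u8, dev));
	printf("u16: size %zu, dev at offset %zu\n",
	       sizeof(struct unmap_u16), offsetof(struct unmap_u16, dev));
	return 0;
}

On a typical 64-bit build both variants print the same size and the
same dev offset, since the padding before the pointer member absorbs
the extra byte.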

Best Regards,
Yan Zi