[RFC] dma/mapping.c: WARN_ONCE on dma_addressing_limited() being true
From: Balbir Singh
Date: Sat Apr 12 2025 - 05:41:29 EST
In the debug and resolution of an issue involving forced use of bounce
buffers, which was resolved by commit 7170130e4c72 ("x86/mm/init: Handle
the special case of device private pages in add_pages(), to not increase
max_pfn and trigger dma_addressing_limited() bounce buffers"), it would
have been easier to find the root cause if dma_addressing_limited() had
warned about the device not being able to address all of memory and thus
forcing all accesses through a bounce buffer. See [2].
A warning would have let the user of the system know that, in their
particular case, use_dma32 was set due to the addressing limitation and
that this impacts the performance of the driver in use.
Implement a WARN_ONCE() to point to the potential use of bounce buffers
when the condition is hit. When swiotlb is used, dma_addressing_limited()
is also consulted to determine the maximum DMA mapping size in
dma_direct_max_mapping_size(), so the warning can be triggered from that
check as well.
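For context, the check typically sits in a driver's probe path after the
DMA mask has been set; that is where the new warning would fire. The
sketch below is purely illustrative (foo_probe() and the 44-bit mask are
made up for the example) and is not part of this patch:

	/* hypothetical driver probe path, sketch only */
	static int foo_probe(struct pci_dev *pdev)
	{
		struct device *dev = &pdev->dev;
		int ret;

		/* request a 44-bit DMA mask; real hardware may differ */
		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(44));
		if (ret)
			return ret;

		/*
		 * If the mask cannot cover all of system memory,
		 * dma_addressing_limited() returns true and the new
		 * WARN_ONCE() fires, pointing at possible bounce buffering.
		 */
		if (dma_addressing_limited(dev))
			dev_info(dev, "DMA limited, buffers may be bounced\n");

		return 0;
	}

With the patch applied, the warning text would read along the lines of
"0000:03:00.0 might have lower performance due to bounce buffering"
(the device name here is only an example).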
Link: https://lore.kernel.org/lkml/20250401000752.249348-1-balbirs@xxxxxxxxxx/ [1]
Link: https://lore.kernel.org/lkml/20250310112206.4168-1-spasswolf@xxxxxx/ [2]
Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
Cc: Robin Murphy <robin.murphy@xxxxxxx>
Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Kees Cook <kees@xxxxxxxxxx>
Cc: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Bert Karwatzki <spasswolf@xxxxxx>
Signed-off-by: Balbir Singh <balbirs@xxxxxxxxxx>
---
Testing: Tested lightly on a virtual machine; I do not have access
to a device for which dma_addressing_limited() is true.
kernel/dma/mapping.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index cda127027e48..0f0455fa5bc8 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -918,7 +918,7 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
  * the system, else %false. Lack of addressing bits is the prime reason for
  * bounce buffering, but might not be the only one.
  */
-bool dma_addressing_limited(struct device *dev)
+static bool __dma_addressing_limited(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
@@ -930,6 +930,17 @@ bool dma_addressing_limited(struct device *dev)
 		return false;
 	return !dma_direct_all_ram_mapped(dev);
 }
+
+bool dma_addressing_limited(struct device *dev)
+{
+	bool ret = __dma_addressing_limited(dev);
+
+	WARN_ONCE(ret,
+		  "%s might have lower performance due to bounce buffering",
+		  dev_name(dev));
+
+	return ret;
+}
 EXPORT_SYMBOL_GPL(dma_addressing_limited);
 
 size_t dma_max_mapping_size(struct device *dev)
--
2.49.0