Hi,

>>> Sure this actually helps? It's below 4G in guest physical address
>>> space, so it can be backed by pages which are actually above 4G in host
>>> physical address space ...
>> Yes, you are right here. This is why I wrote about the IOMMU
>> and other conditions. E.g. you can have a device which only
>> expects 32-bit, but thanks to the IOMMU it can access pages above
>> 4GiB seamlessly. So, this is why I *hope* that this code *may* help
>> such devices. Do you think I don't need that and have to remove it?
> I would try without that, and maybe add a runtime option (module
> parameter) later if it turns out some hardware actually needs that.
> Devices which can do 32-bit DMA only become less and less common these
> days.

Good point, I will remove it then.
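For reference, the runtime option suggested above could look something like the following sketch. The parameter name `dma32` and the probe-time helper are my assumptions for illustration, not part of the actual patch:

```c
/* Sketch only: a module parameter to force a 32-bit DMA mask for
 * devices that cannot address memory above 4 GiB.  Parameter name
 * and helper are illustrative, not from the patch under review. */
#include <linux/module.h>
#include <linux/dma-mapping.h>

static bool dma32;
module_param(dma32, bool, 0644);
MODULE_PARM_DESC(dma32, "Force a 32-bit DMA mask for legacy devices");

static int xen_drv_set_dma_mask(struct device *dev)
{
	/* Default to the full 64-bit mask; restrict to 32 bits only
	 * when explicitly requested, since most modern devices (or
	 * an IOMMU in front of them) can reach pages above 4 GiB. */
	return dma_set_mask_and_coherent(dev,
			DMA_BIT_MASK(dma32 ? 32 : 64));
}
```

This keeps the common case unrestricted while leaving an escape hatch for hardware that really is limited to 32-bit addressing.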
>>>>>> +	if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
>>>>>> +			DMA_BIDIRECTIONAL)) {
>>>>> Are you using the DMA streaming API as a way to flush the caches?
>>>> Yes.
>>>>> Does this mean that GFP_USER isn't making the buffer coherent?
>>>> No, it didn't help. I had a question [1] about whether there is any
>>>> better way to achieve the same, but didn't get any response yet. So, I
>>>> implemented it via the DMA API, which helped.
>>> set_pages_array_*() ?
>>> See arch/x86/include/asm/set_memory.h
>> Well, x86... I am on arm, which doesn't define that...
> Oh, arm. Maybe ask on an arm list then. I know on arm you have to care
> about caching a lot more, but that also is where my knowledge ends ...
>
> Using dma_map_sg for cache flushing looks like a sledge hammer approach
> to me. But maybe it is needed to make Xen flush the caches (Xen guests
> have their own DMA mapping implementation, right? Or is this different
> on arm than on x86?).

It is. This is why I am so unsure this is the way to go.
I'll try to figure it out.
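To make the two alternatives discussed above concrete, here is a rough sketch. The API names (`dma_map_sg`, `dma_unmap_sg`, `set_pages_array_wc`) are from the mainline kernel; the surrounding variables are placeholders of mine:

```c
/* Sketch: two ways to get CPU caches cleaned before another agent
 * sees the pages.  Variable names are illustrative only.
 *
 * (a) The streaming DMA API: mapping the sg table triggers whatever
 *     cache maintenance the architecture needs.  This is the "sledge
 *     hammer" above, since it also creates a DMA mapping we may not
 *     otherwise want: */
if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
	return -EFAULT;
/* ... hand the buffer over to the other side ... */
dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);

/* (b) On x86, change the caching attribute of the pages instead,
 *     via arch/x86/include/asm/set_memory.h (not defined on arm): */
#ifdef CONFIG_X86
	ret = set_pages_array_wc(pages, nr_pages);
#endif
```

Whether (a) is actually required on Xen/arm to get the hypervisor-aware cache maintenance, as speculated above, is exactly the open question in this thread.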
> cheers,
>   Gerd

Thank you,