> Hi Christoph:
> Thanks a lot for your review. There are some reasons.
> 1) Vmbus drivers don't use the DMA API now.

What is blocking us from making the Hyper-V drivers use the DMA APIs? They
will generally be a no-op when no bounce buffer support is needed.
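For example, the driver-side cost is just the standard mapping calls. A
hypothetical netvsc-style helper (the function and names below are
illustrative, not existing code):

#include <linux/dma-mapping.h>

/*
 * With dma-direct and no swiotlb forcing, dma_map_single() reduces to
 * a phys_to_dma() translation, so this is effectively free in a
 * normal (non-isolated) guest.
 */
static dma_addr_t map_tx_buf(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return DMA_MAPPING_ERROR;
	return dma;
}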
> 2) The Hyper-V Vmbus channel ring buffer already plays the bounce buffer
> role for most vmbus drivers. Just two kinds of packets from
> netvsc/storvsc are not covered.

How does this make a difference here?
> 3) In an AMD SEV-SNP based Hyper-V guest, the physical address used to
> access shared memory should be the bounce buffer's physical address
> plus a shared memory boundary (e.g., 48-bit) reported by a Hyper-V
> CPUID leaf. This boundary is called the virtual top of memory (vTOM)
> in the AMD spec and works as a watermark. So the associated physical
> addresses above the shared memory boundary need to be
> ioremap()/memremap()ed before they are accessed. swiotlb_bounce() uses
> the low-end physical address to access the bounce buffer, which doesn't
> work in this scenario. If anything here is wrong, please correct me.
There are alternative implementations of swiotlb on top of the core swiotlb
APIs. One option is to have Hyper-V specific swiotlb wrapper DMA APIs with
the custom logic above.
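A rough sketch of that option, assuming the core swiotlb_map() helper and a
vTOM offset derived from the Hyper-V CPUID leaf (all the hv_* names below
are hypothetical, not existing code):

#include <linux/dma-map-ops.h>
#include <linux/swiotlb.h>

/* Hypothetical: shared memory boundary (vTOM) from Hyper-V CPUID,
 * e.g. BIT_ULL(47) for a 48-bit boundary.
 */
static u64 hv_vtom_mask;

static dma_addr_t hv_map_page(struct device *dev, struct page *page,
			      unsigned long offset, size_t size,
			      enum dma_data_direction dir,
			      unsigned long attrs)
{
	/* Bounce through the core swiotlb as usual. */
	dma_addr_t dma = swiotlb_map(dev, page_to_phys(page) + offset,
				     size, dir, attrs);

	if (dma == DMA_MAPPING_ERROR)
		return dma;

	/* Publish the shared (unencrypted) alias above vTOM to the
	 * device instead of the low-end bounce buffer address.
	 */
	return dma | hv_vtom_mask;
}

static const struct dma_map_ops hv_swiotlb_dma_ops = {
	.map_page	= hv_map_page,
	/* .unmap_page, .sync_* etc. would strip the vTOM offset and
	 * call the matching swiotlb helpers.
	 */
};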
> Thanks.

I agree with Christoph's comment that in principle, this should be handled
using the DMA APIs.

> On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
>> This should be handled by the DMA mapping layer, just like for native
>> SEV support.
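For reference, the native SEV hookup Christoph refers to is roughly the
following (a paraphrase of the arch/x86 memory encryption setup, not
verbatim kernel code): the guest forces all DMA through swiotlb and tells
dma-direct that devices must see unencrypted memory, and the generic DMA
code does the rest.

#include <linux/swiotlb.h>
#include <linux/dma-direct.h>
#include <linux/mem_encrypt.h>

/* 1) At boot: force every DMA mapping through the swiotlb bounce
 *    buffers, since devices cannot access encrypted guest memory.
 */
void __init example_mem_encrypt_init(void)
{
	if (sev_active())
		swiotlb_force = SWIOTLB_FORCE;
}

/* 2) Have dma-direct allocate and convert DMA memory as unencrypted. */
bool force_dma_unencrypted(struct device *dev)
{
	return sev_active();
}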