On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
+	if (hv_isolation_type_snp()) {
+		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+			       GFP_KERNEL);
+		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
+			pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
+				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+		kfree(pfns);
+		if (!vaddr)
+			goto cleanup;
+		net_device->recv_original_buf = net_device->recv_buf;
+		net_device->recv_buf = vaddr;
+	}
This probably wants a helper to make the thing more readable. But who
came up with this fucked up communication protocol where the host needs
to map random pfns into a contiguous range? Sometimes I really have to
wonder what crack the hyper-v people are on when I compare this to the
relatively sane approach others take.
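
For example, something like the untested sketch below; netvsc_remap_buf()
is a made-up name and the NULL check on the allocation is added here, so
treat it as an illustration of the shape rather than a drop-in replacement:

	/*
	 * Sketch only: build a virtually contiguous alias of an SNP
	 * isolation-VM buffer above the shared_gpa_boundary.
	 */
	static void *netvsc_remap_buf(void *buf, unsigned long size)
	{
		unsigned long *pfns;
		void *vaddr;
		int i;

		pfns = kcalloc(size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
			       GFP_KERNEL);
		if (!pfns)
			return NULL;

		for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
			pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE) +
				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);

		vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
		kfree(pfns);
		return vaddr;
	}

which would turn the caller into:

	if (hv_isolation_type_snp()) {
		vaddr = netvsc_remap_buf(net_device->recv_buf, buf_size);
		if (!vaddr)
			goto cleanup;

		net_device->recv_original_buf = net_device->recv_buf;
		net_device->recv_buf = vaddr;
	}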
+	for (i = 0; i < page_count; i++)
+		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
+				 packet->dma_range[i].mapping_size,
+				 DMA_TO_DEVICE);
+
+	kfree(packet->dma_range);
Any reason this isn't simply using a struct scatterlist?
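
A rough sketch of the teardown with the generic API, assuming the map side
were converted to fill a struct sg_table (packet->sgt is a made-up field
here) instead of the private dma_range array:

	/* sketch: unmap and free through the scatterlist helpers */
	dma_unmap_sgtable(&hv_dev->device, &packet->sgt, DMA_TO_DEVICE, 0);
	sg_free_table(&packet->sgt);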
+	for (i = 0; i < page_count; i++) {
+		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
+					 + pb[i].offset);
+		u32 len = pb[i].len;
+
+		dma = dma_map_single(&hv_dev->device, src, len,
+				     DMA_TO_DEVICE);
dma_map_single can only be used on page backed memory, and if this is
using page backed memory you wouldn't need to do these phys_to_virt
tricks. Can someone explain the mess here in more detail?
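
If the buffers really are page backed, something like the sketch below
would avoid the phys_to_virt() round trip entirely; it assumes
HV_HYP_PAGE_SIZE == PAGE_SIZE so that pb[i].pfn can be fed to
pfn_to_page(), which is my assumption, not something the patch states:

	for (i = 0; i < page_count; i++) {
		/* sketch: map the backing page directly */
		dma = dma_map_page(&hv_dev->device, pfn_to_page(pb[i].pfn),
				   pb[i].offset, pb[i].len, DMA_TO_DEVICE);
		if (dma_mapping_error(&hv_dev->device, dma))
			goto unmap_failed;	/* error label made up */

		/* ... record dma/len as the existing code does ... */
	}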
struct rndis_device *dev = nvdev->extension;
struct rndis_request *request = NULL;
+	struct hv_device *hv_dev = ((struct net_device_context *)
+			netdev_priv(ndev))->device_ctx;
Why not use a net_device_context local variable instead of this cast
galore?
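
i.e. something along the lines of (ndc being an arbitrary local name):

	struct net_device_context *ndc = netdev_priv(ndev);
	struct hv_device *hv_dev = ndc->device_ctx;

netdev_priv() returns void *, so no cast is needed at all.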