Hi!

WarpDrive is a common user space accelerator framework. Its main component
in Kernel is called spimdev, Share Parent IOMMU Mediated Device. It exposes

spimdev is a really unfortunate name. It looks like it has something to do
with SPI, but it does not.

Yes. Let me change it to Share (IOMMU) Domain MDev, SDMdev :)
+++ b/Documentation/warpdrive/warpdrive.rst
@@ -0,0 +1,153 @@
+Introduction of WarpDrive
+=========================
+
+*WarpDrive* is a general accelerator framework built on top of vfio.
+It can be taken as a lightweight virtual function, which you can use without
+an *SR-IOV*-like facility and which can be shared among multiple processes.
+
+It can be used as the quick channel for accelerators, network adaptors or
+other hardware in user space. It can make some implementations simpler. E.g.
+you can reuse most of the *netdev* driver and just share some ring buffers with
+the user space driver for *DPDK* or *ODP*. Or you can combine the RSA
+accelerator with the *netdev* in the user space as a Web reverse proxy, etc.
What is DPDK? ODP?

DPDK: https://www.dpdk.org/about/

But I think the reference [1] has explained this.
+How does it work
+================
+
+*WarpDrive* treats the Hardware Accelerator as a heterogeneous processor which
+can take over some of the load from the CPU:
+
+.. image:: wd.svg
+   :alt: This is a .svg image; if your browser cannot show it,
+         try to download and view it locally
+
+So it provides the capability for the user application to:
+
+1. Send requests to the hardware
+2. Share memory with the application and other accelerators
+
+These requirements can be fulfilled by VFIO if the accelerator can serve each
+application with a separate Virtual Function. But an *SR-IOV*-like VF (we will
+call it *HVF* hereinafter) design is too heavy for an accelerator which
+serves thousands of processes.
Also "gup" might be worth spelling out.
Will refine the doc in next RFC, hope it will help.
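In the meantime, to make the plain-VFIO baseline concrete, here is a minimal
user space sketch of the stock VFIO type1 flow the paragraph above refers to.
The group number "26", the device name "0000:06:0d.0" and region index 0 are
placeholders; this is the generic VFIO interface, not the WarpDrive/spimdev
interface itself:

/*
 * Minimal sketch, assuming a device already bound to vfio-pci: open a
 * container, attach the group, get a device FD and mmap one region.
 * "/dev/vfio/26" and "0000:06:0d.0" are placeholders.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);
	struct vfio_group_status status = { .argsz = sizeof(status) };
	struct vfio_region_info region = { .argsz = sizeof(region), .index = 0 };
	void *queue;
	int device;

	if (container < 0 || group < 0)
		return 1;

	/* The group must be viable before it joins the container (IOMMU domain) */
	ioctl(group, VFIO_GROUP_GET_STATUS, &status);
	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
		return 1;
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Get the device FD and map its first region, e.g. a queue/doorbell page */
	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
	ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &region);
	queue = mmap(NULL, region.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		     device, region.offset);
	if (queue == MAP_FAILED)
		return 1;

	/* ... fill descriptors / ring the doorbell through "queue" ... */
	return 0;
}

As the doc says, WarpDrive aims to give each process this kind of queue
without dedicating a whole HVF (and IOMMU group) to it.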
+References
+==========
+.. [1] According to the comment in mm/gup.c, *gup* is only safe within
+       a syscall, because it can only keep the physical memory in place
+       without making sure the VMA will always point to it. Maybe we should
+       raise the VM_PINNED patchset (see
+       https://lists.gt.net/linux/kernel/1931993) again to solve this problem.
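On the gup point: the pattern the mm/gup.c comment allows is roughly the
following. This is a hypothetical driver snippet for illustration only (it
uses the current gup_flags form of get_user_pages_fast()); the pages are
pinned and released within a single syscall:

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical illustration only: pin the pages backing a user buffer,
 * let the device work on them, and drop the pins before returning to
 * user space, i.e. everything stays within one syscall.
 */
static int use_user_buffer(unsigned long uaddr, int nr_pages)
{
	struct page **pages;
	int i, pinned;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned <= 0) {
		kfree(pages);
		return pinned ? pinned : -EFAULT;
	}

	/* ... program the accelerator with these pages and wait for it ... */

	for (i = 0; i < pinned; i++)
		put_page(pages[i]);
	kfree(pages);
	return 0;
}

What the footnote is after is a way to keep such a pin valid across syscalls,
hence the pointer to the VM_PINNED patchset.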
I went through the docs, but I still don't know what it does.
Pavel