[LSF/MM TOPIC][LSF/MM ATTEND] Enabling Peer-to-Peer DMAs between PCIe devices
From: Stephen Bates
Date: Mon Dec 12 2016 - 13:25:59 EST
Hi
I'd like to discuss the topic of how best to enable DMAs between PCIe
devices in the Linux kernel.
There have been many attempts to add to the kernel the ability to DMA
between two PCIe devices, but to date none of them has been accepted. As
PCIe devices like NICs, NVMe SSDs and GPGPUs continue to get faster, the
desire to move data directly between these devices (as opposed to staging
it through a temporary buffer in system memory) is increasing. Out-of-tree
solutions like GPU-Direct are one illustration of the popularity of this
functionality. A recent discussion on this topic provides a good summary
of where things stand [1].
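To make the sort of flow I have in mind a little more concrete, below is a
rough sketch of what a provider/consumer pair might look like. The
pci_p2pdma_* helpers named here are purely illustrative of one possible
interface (nothing like this exists in the tree today), the BAR number and
sizes are arbitrary, and error handling is trimmed:

  /*
   * Illustrative sketch only: a driver exposes part of a BAR (e.g. an
   * NVMe CMB) as P2P-capable memory, and a consumer allocates from it
   * to use as a DMA buffer for a peer device.
   */
  #include <linux/pci.h>
  #include <linux/pci-p2pdma.h>
  #include <linux/sizes.h>

  /* Provider side: register 1MB of BAR 4 as P2P memory and publish it. */
  static int example_provider_setup(struct pci_dev *pdev)
  {
          int rc;

          rc = pci_p2pdma_add_resource(pdev, 4, SZ_1M, 0);
          if (rc)
                  return rc;

          pci_p2pmem_publish(pdev, true);
          return 0;
  }

  /* Consumer side: allocate P2P memory and get a bus address for DMA. */
  static int example_consumer_use(struct pci_dev *provider)
  {
          void *buf;
          pci_bus_addr_t bus_addr;

          buf = pci_alloc_p2pmem(provider, SZ_4K);
          if (!buf)
                  return -ENOMEM;

          bus_addr = pci_p2pmem_virt_to_bus(provider, buf);
          /* ... program the peer device to DMA to/from bus_addr ... */

          pci_free_p2pmem(provider, buf, SZ_4K);
          return 0;
  }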
I would like to propose a session at LSF/MM to discuss some of the
different use cases for these P2P DMAs, as well as the pros and cons of
the approaches proposed so far. The goal would be to try to form a
consensus on how best to move forward to an upstreamable solution to this
problem.
In addition, I would be interested in participating in the following
topics:
* Anything related to PMEM and DAX.
* Integrating the block-layer polling capability into file-systems.
* New feature integration into the NVMe driver (e.g. fabrics, CMBs, IO
tags).
Cheers
Stephen
[1] http://marc.info/?l=linux-pci&m=147976059431355&w=2 (and subsequent
thread).