Re: [GIT PULL] PMEM driver for v4.1

From: Boaz Harrosh
Date: Mon Apr 13 2015 - 08:21:39 EST

On 04/13/2015 01:45 PM, Ingo Molnar wrote:
> * Christoph Hellwig <hch@xxxxxx> wrote:
>> On Mon, Apr 13, 2015 at 11:33:09AM +0200, Ingo Molnar wrote:
>>> Limitations: this is a regular block device, and since the pmem areas
>>> are not struct page backed, they are invisible to the rest of the
>>> system (other than the block IO device), so direct IO to/from pmem
>>> areas, direct mmap() or XIP is not possible yet. The page cache will
>>> also shadow and double buffer pmem contents, etc.
>> Unless you use the DAX support in ext2/4 and soon XFS, in which case
>> we avoid that double buffering when doing read/write and mmap
> Indeed, I missed that DAX support just went upstream in v4.0 - nice!
> DAX may have some other limitations though that come from not having
> struct page * backing and using VM_MIXEDMAP; the following APIs might
> not work on DAX files:
> - splice

splice works fine. Also, I sent a cleanup in this area to Andrew; it
will be in for 4.1.

> - zero copy O_DIRECT into DAX areas.

DAX is always O_DIRECT.

What does not work is mmap()ing a DAX file and using that pointer in an
O_DIRECT operation on another device (unless that device is also DAX).

Also, mmap of a DAX file combined with RDMA or direct networking will
need a copy.

All of this is fixable by applying my page-struct patch for pmem.
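A sketch of the broken pattern described above (hypothetical code, not
from any of the patches in this thread): mmap() one file, then hand that
mapping to an O_DIRECT write on another file. Without struct page
backing the kernel cannot pin a DAX mapping for direct I/O, so on pmem
this errors out; here both files are ordinary page-cache-backed files,
so the copy succeeds:

```python
import mmap, os, tempfile

PAGE = mmap.PAGESIZE

# Source file, standing in for a DAX-mapped pmem file.
with tempfile.NamedTemporaryFile(delete=False) as src:
    src.write(b"x" * PAGE)
    spath = src.name
with tempfile.NamedTemporaryFile(delete=False) as dst:
    dpath = dst.name

f = open(spath, "r+b")
buf = mmap.mmap(f.fileno(), PAGE)      # the user mapping of the source

# O_DIRECT needs a page-aligned buffer and length; mmap() gives both.
try:
    fd = os.open(dpath, os.O_WRONLY | os.O_DIRECT)
    try:
        os.write(fd, buf)              # direct I/O straight from the mapping
    except OSError:
        os.close(fd)                   # fs rejected direct I/O; fall back
        fd = os.open(dpath, os.O_WRONLY)
        os.write(fd, buf)
except OSError:
    fd = os.open(dpath, os.O_WRONLY)   # fs without O_DIRECT support at all
    os.write(fd, buf)
os.close(fd)

with open(dpath, "rb") as out:
    copied = out.read(PAGE)
assert copied == b"x" * PAGE

buf.close(); f.close()
os.unlink(spath); os.unlink(dpath)
print("one page copied out of the mapping")
```

On a DAX mapping, the os.write() on the O_DIRECT fd is the step that
fails, because get_user_pages() has no struct pages to pin.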

> - futexes
> - ( AFAICS hugetlbs won't work on DAX mmap()s yet - although with
> the current nocache mapping that's probably the least of the
> performance issues for now. )
> Btw., what's the future design plan here? Enable struct page backing,
> or provide special codepaths for all DAX uses like the special pte
> based approach for mmap()s?

I'm hoping for struct page: 4k pages at first and 2M pages later on,
which needs more work in the IO stacks, where I need this most.

> Thanks,
> Ingo

