Re: [PATCH v3 02/15] dax: increase granularity of dax_clear_blocks() operations
From: Ross Zwisler
Date: Tue Nov 03 2015 - 12:58:09 EST
On Mon, Nov 02, 2015 at 09:31:11PM -0800, Dan Williams wrote:
> On Mon, Nov 2, 2015 at 8:48 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Mon, Nov 02, 2015 at 07:27:26PM -0800, Dan Williams wrote:
> >> On Mon, Nov 2, 2015 at 4:51 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >> > On Sun, Nov 01, 2015 at 11:29:53PM -0500, Dan Williams wrote:
> >> > The zeroing (and the data, for that matter) doesn't need to be
> >> > committed to persistent store until the allocation is written and
> >> > committed to the journal - that will happen with a REQ_FLUSH|REQ_FUA
> >> > write, so it makes sense to deploy the big hammer and delay the
> >> > blocking CPU cache flushes until the last possible moment in cases
> >> > like this.
> >>
> >> In pmem terms that would be a non-temporal memset plus a delayed
> >> wmb_pmem at REQ_FLUSH time. Better to write around the cache than
> >> loop over the dirty-data issuing flushes after the fact. We'll bump
> >> the priority of the non-temporal memset implementation.
> >
> > Why is it better to do two synchronous physical writes to memory
> > within a couple of microseconds of CPU time rather than writing them
> > through the cache and, in most cases, only doing one physical write
> > to memory in a separate context that expects to wait for a flush
> > to complete?
>
> With a switch to non-temporal writes they wouldn't be synchronous,
> although it's doubtful that the subsequent writes after zeroing would
> also hit the store buffer.
>
> If we had a method to flush by physical-cache-way rather than a
> virtual address then it would indeed be better to save up for one
> final flush, but when we need to resort to looping through all the
> virtual addresses that might have touched it gets expensive.
I agree with the idea that we should avoid the "big hammer" flushing in
response to REQ_FLUSH. Here are the steps needed to make data durable on
media with PMEM/DAX (sketched in code below the list):
1) Write, either with non-temporal stores or with stores that use the
processor cache
2) If you wrote using the processor cache, flush or write back the processor
cache
3) wmb_pmem(), which makes all preceding non-temporal writes and cache
flushes durable on media.
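For concreteness, here is a minimal sketch of that sequence against the
circa-4.3 pmem API (memcpy_to_pmem() and wmb_pmem() from
include/linux/pmem.h, with clflush_cache_range() from asm/cacheflush.h
standing in for the step-2 writeback; write_durable() itself is a made-up
name, not anything in the tree):

#include <linux/pmem.h>
#include <asm/cacheflush.h>

static void write_durable(void __pmem *dst, const void *src, size_t len,
			  bool cached)
{
	if (cached) {
		/* Step 1: plain stores land in the processor cache. */
		memcpy((void __force *)dst, src, len);
		/* Step 2: write the dirtied lines back out of the cache. */
		clflush_cache_range((void __force *)dst, len);
	} else {
		/* Step 1 with non-temporal stores; step 2 is unnecessary. */
		memcpy_to_pmem(dst, src, len);
	}
	/* Step 3: fence/drain so the stores are durable on media. */
	wmb_pmem();
}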
The PMEM driver does all of its I/O using steps 1 and 3 with non-temporal
stores, and mmaps handed to userspace can use cached writes, so on
fsync/msync we do a bunch of flushes for step 2. In either case I think we
should have the PMEM driver do only step 3, the wmb_pmem(), in response to
REQ_FLUSH. This lets the zeroing code do non-temporal writes of zeros, lets
the DAX fsync/msync code do flushes (which is what my patch set already
does), and leaves the wmb_pmem() to the PMEM driver at REQ_FLUSH time.
This makes the burden of REQ_FLUSH bearable for the PMEM driver, since it
avoids looping through potentially terabytes of PMEM on each REQ_FLUSH bio.
It just means that the layers above the PMEM code either need to use
non-temporal writes for their I/O or do their own flushing, which I don't
think is too onerous.
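To make the driver side concrete, here is a hedged sketch of the proposed
REQ_FLUSH handling, loosely modeled on the 4.3-era pmem_make_request() in
drivers/nvdimm/pmem.c (pmem_do_bvec() is the real per-segment copy helper;
the rest is simplified for illustration):

static void pmem_make_request(struct request_queue *q, struct bio *bio)
{
	struct pmem_device *pmem = q->queuedata;
	struct bio_vec bvec;
	struct bvec_iter iter;

	/* Step 1: per-segment copies, done with non-temporal stores. */
	bio_for_each_segment(bvec, bio, iter)
		pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len,
				bvec.bv_offset, bio_data_dir(bio),
				iter.bi_sector);

	/*
	 * The whole cost of REQ_FLUSH is one wmb_pmem() to drain the
	 * store buffers, not a loop over dirty cache lines.
	 */
	if (bio->bi_rw & REQ_FLUSH)
		wmb_pmem();

	bio_endio(bio);
}

With this split, a REQ_FLUSH bio stays O(1) no matter how much data was
written since the last flush.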