Re: [PATCH v3] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions
From: Kani, Toshimitsu
Date: Tue Apr 11 2017 - 11:15:28 EST
On Mon, 2017-04-10 at 17:35 -0700, Dan Williams wrote:
> Before we rework the "pmem api" to stop abusing __copy_user_nocache()
> for memcpy_to_pmem() we need to fix cases where we may strand dirty
> data in the cpu cache. The problem occurs when copy_from_iter_pmem()
> is used for arbitrary data transfers from userspace. There is no
> guarantee that these transfers, performed by dax_iomap_actor(), will
> have aligned destinations or aligned transfer lengths. Backstop the
> usage of __copy_user_nocache() with explicit cache management in these
> unaligned cases.
>
> Yes, copy_from_iter_pmem() is now too big for an inline, but
> addressing that is saved for a later patch that moves the entirety of
> the "pmem api" into the pmem driver directly.
>
> Fixes: 5de490daec8b ("pmem: add copy_from_iter_pmem() and
> clear_pmem()")
> Cc: <stable@xxxxxxxxxxxxxxx>
> Cc: <x86@xxxxxxxxxx>
> Cc: Jan Kara <jack@xxxxxxx>
> Cc: Jeff Moyer <jmoyer@xxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Matthew Wilcox <mawilcox@xxxxxxxxxxxxx>
> Cc: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
> Signed-off-by: Toshi Kani <toshi.kani@xxxxxxx>
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> ---
> Changes in v3:
> * match the implementation to the notes at the top of
>   __copy_user_nocache (Toshi)
>
> * Switch to using the IS_ALIGNED() macro to make alignment checks
> easier to read and harder to get wrong like they were in v2. (Toshi,
> Dan)
Thanks Dan! It looks good.
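
For anyone following along, here is a rough compile-and-run sketch (userspace
only, not the kernel code itself) of the alignment rules the backstop has to
cover: the nocache path only bypasses the cache for naturally aligned chunks,
so an unaligned destination or length leaves a head and/or tail cache line
that must be written back explicitly. flush_range() below just prints and
stands in for a cache-writeback helper such as arch_wb_cache_pmem().

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)

/* Stand-in for the kernel's cache-writeback helper. */
static void flush_range(uintptr_t addr, size_t len)
{
	printf("  writeback line(s) at %#lx + %zu\n", (unsigned long)addr, len);
}

/*
 * Flush only the lines an unaligned nocache copy may have left dirty,
 * per the notes at the top of __copy_user_nocache:
 *  - size >= 8: cache is bypassed only for 8-byte aligned chunks
 *  - size  < 8: cache is bypassed only for a 4-byte aligned, 4-byte copy
 */
static void backstop(uintptr_t dst, size_t bytes)
{
	if (bytes < 8) {
		if (!IS_ALIGNED(dst, 4) || bytes != 4)
			flush_range(dst, bytes);
	} else {
		if (!IS_ALIGNED(dst, 8))
			flush_range(dst, 1);            /* unaligned head */
		if (!IS_ALIGNED(bytes, 8))
			flush_range(dst + bytes - 1, 1); /* unaligned tail */
	}
}

int main(void)
{
	backstop(0x1000, 64);	/* fully aligned: nothing to flush */
	backstop(0x1003, 64);	/* unaligned destination */
	backstop(0x1000, 70);	/* unaligned length */
	backstop(0x1001, 3);	/* small unaligned copy */
	return 0;
}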
-Toshi