Re: [PATCH] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions
From: Dan Williams
Date: Fri Apr 07 2017 - 19:52:07 EST
On Fri, Apr 7, 2017 at 10:41 AM, Kani, Toshimitsu <toshi.kani@xxxxxxx> wrote:
> On Thu, 2017-04-06 at 13:59 -0700, Dan Williams wrote:
>> Before we rework the "pmem api" to stop abusing __copy_user_nocache()
>> for memcpy_to_pmem() we need to fix cases where we may strand dirty
>> data in the cpu cache. The problem occurs when copy_from_iter_pmem()
>> is used for arbitrary data transfers from userspace. There is no
>> guarantee that these transfers, performed by dax_iomap_actor(), will
>> have aligned destinations or aligned transfer lengths. Backstop the
>> usage of __copy_user_nocache() with explicit cache management in these
>> unaligned cases.
>>
>> Yes, copy_from_iter_pmem() is now too big for an inline, but
>> addressing that is saved for a later patch that moves the entirety of
>> the "pmem api" into the pmem driver directly.
>
> The change looks good to me. Should we also avoid cache flushing in
> the case where the size is 4 bytes and the destination is 4-byte aligned?
Yes, since you fixed the 4-byte aligned case, we should skip cache flushing
there. I'll send a v2.
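
For reference, here is a rough sketch of the shape I have in mind for the x86
backstop (a sketch only, not the actual v2 patch; it assumes the existing
arch_wb_cache_pmem(), copy_from_iter_nocache(), and iter_is_iovec() helpers,
and the __copy_user_nocache() rule that only 8-byte aligned transfers, or
4-byte transfers to a 4-byte aligned destination, fully bypass the cache):

/*
 * Sketch only, not the actual v2: a backstop for the x86
 * arch_copy_from_iter_pmem() in arch/x86/include/asm/pmem.h.
 */
static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
		struct iov_iter *i)
{
	size_t len;

	/* the bulk of an iovec transfer uses non-temporal stores */
	len = copy_from_iter_nocache(addr, bytes, i);

	if (iter_is_iovec(i)) {
		unsigned long dest = (unsigned long) addr;

		if (bytes < 8) {
			/* a 4-byte store to a 4-byte aligned dest bypasses the cache */
			if (!IS_ALIGNED(dest, 4) || bytes != 4)
				arch_wb_cache_pmem(addr, bytes);
		} else {
			/* an unaligned head is copied through the cache... */
			if (!IS_ALIGNED(dest, 8))
				arch_wb_cache_pmem(addr, 1);
			/* ...as is an unaligned tail */
			if (!IS_ALIGNED(dest + bytes, 8))
				arch_wb_cache_pmem(addr + bytes - 1, 1);
		}
	} else {
		/* bvec/kvec sources still go through a cached memcpy */
		arch_wb_cache_pmem(addr, bytes);
	}

	return len;
}

The point is to only pay for a cacheline write-back on the unaligned head
and/or tail of an iovec transfer, while the non-iovec case flushes the whole
destination since it is copied through the cache.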