Re: [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm

From: Jan Kara
Date: Thu Jun 15 2017 - 04:11:31 EST


On Wed 14-06-17 09:49:29, Dan Williams wrote:
> On Wed, Jun 14, 2017 at 3:54 AM, Jan Kara <jack@xxxxxxx> wrote:
> >> -/**
> >> - * arch_wb_cache_pmem - write back a cache range with CLWB
> >> - * @vaddr: virtual start address
> >> - * @size: number of bytes to write back
> >> - *
> >> - * Write back a cache range using the CLWB (cache line write back)
> >> - * instruction. Note that @size is internally rounded up to be cache
> >> - * line size aligned.
> >> - */
> >> static inline void arch_wb_cache_pmem(void *addr, size_t size)
> >> {
> >> -	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
> >> -	unsigned long clflush_mask = x86_clflush_size - 1;
> >> -	void *vend = addr + size;
> >> -	void *p;
> >> -
> >> -	for (p = (void *)((unsigned long)addr & ~clflush_mask);
> >> -	     p < vend; p += x86_clflush_size)
> >> -		clwb(p);
> >> +	clean_cache_range(addr, size);
> >> }
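
(Side note, mostly for the archives: I'm assuming clean_cache_range() is just
the loop removed above, moved verbatim into a shared x86 helper, i.e. roughly:

static void clean_cache_range(void *addr, size_t size)
{
	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
	unsigned long clflush_mask = x86_clflush_size - 1;
	void *vend = addr + size;
	void *p;

	/* round down to a cache line boundary and CLWB each line in range */
	for (p = (void *)((unsigned long)addr & ~clflush_mask);
	     p < vend; p += x86_clflush_size)
		clwb(p);
}

so the behaviour should be unchanged and my question below is only about
where that symbol is defined.)
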
> >
> > So this will make compilation break on 32-bit x86 as it does not define
> > clean_cache_range(). Do we enforce somewhere that we are on x86_64 when
> > pmem is enabled?
>
> Yes, this is enforced by:
>
> select ARCH_HAS_PMEM_API if X86_64
>
> ...in arch/x86/Kconfig. We fall back to a dummy arch_wb_cache_pmem()
> implementation and emit this warning for !ARCH_HAS_PMEM_API archs:
>
> "nd_pmem namespace0.0: unable to guarantee persistence of writes"

Aha, right. Feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>
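
Just to spell out my understanding for the archives: on !ARCH_HAS_PMEM_API
configs (i.e. 32-bit x86 here) I take it arch_wb_cache_pmem() ends up being
an empty stub guarded by the config symbol, roughly along these lines (a
sketch only, I haven't checked the exact header in your series):

#ifndef CONFIG_ARCH_HAS_PMEM_API
/*
 * No arch primitive to write back a pmem range, so this is a no-op;
 * nd_pmem separately warns that persistence of writes cannot be
 * guaranteed (the message quoted above).
 */
static inline void arch_wb_cache_pmem(void *addr, size_t size)
{
}
#endif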

Honza

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR