Re: [jlayton:mgtime 5/13] inode.c:undefined reference to `__invalid_cmpxchg_size'
From: Jeff Layton
Date: Tue Jul 09 2024 - 10:23:31 EST
On Tue, 2024-07-09 at 16:16 +0200, Arnd Bergmann wrote:
> On Tue, Jul 9, 2024, at 15:45, Geert Uytterhoeven wrote:
> > On Tue, Jul 9, 2024 at 1:58 PM Jeff Layton <jlayton@xxxxxxxxxx>
> > wrote:
> > > I've been getting some of these warning emails from the KTR. I
> > > think this is in reference to this patch, which adds a 64-bit
> > > try_cmpxchg in the timestamp handling code:
> > >
> > >
> > > https://lore.kernel.org/linux-fsdevel/20240708-mgtime-v4-0-a0f3c6fb57f3@xxxxxxxxxx/
> > >
> > > On m68k, there is a prototype for __invalid_cmpxchg_size, but no
> > > actual function, AFAICT. Should that be defined somewhere, or is
> > > this a deliberate way to force a build break in this case?
> >
> > It's a deliberate way to break the build.
> >
> > > More to the point though: do I need to do anything special for
> > > m68k here (or for other arches that can't do a native 64-bit
> > > cmpxchg)?
> >
> > 64-bit cmpxchg() is only guaranteed to exist on 64-bit platforms.
> > See also
> > https://elixir.bootlin.com/linux/latest/source/include/asm-generic/cmpxchg.h#L62
> >
> > I think you can use arch_cmpxchg64(), though.
>
> arch_cmpxchg64() is an internal helper provided by some
> architectures. Driver code should use cmpxchg64() for
> the explicitly 64-bit sized atomic operation.
>
> I'm fairly sure we still don't provide this across all
> 32-bit architectures though: on architectures that have
> 64-bit atomics (i686, armv6k, ...) it can be provided by
> architecture-specific code, and on non-SMP kernels they can
> use the generic fallback through generic_cmpxchg64_local(),
> but on SMP architectures without native atomics you need a
> Kconfig dependency to turn off the code in question.
>
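Just to make sure I'm following: keeping an open-coded 64-bit
compare-and-exchange would mean something roughly like the sketch
below (struct and field names are made up here), and it would still
need the Kconfig gating you describe on 32-bit SMP arches without
native 64-bit atomics:

#include <linux/atomic.h>
#include <linux/types.h>

struct ts_state {
	u64 floor;	/* hypothetical 64-bit timestamp floor */
};

static void ts_floor_advance(struct ts_state *ts, u64 now)
{
	u64 old = READ_ONCE(ts->floor);

	/* only move the floor forward; retry if another CPU got there first */
	while (old < now) {
		u64 prev = cmpxchg64(&ts->floor, old, now);

		if (prev == old)
			break;
		old = prev;
	}
}
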
I think the simplest solution is to make the floor value I'm tracking
an atomic64_t. That should smooth over the differences between arches.
I'm testing a patch to do that now.
Thanks!
--
Jeff Layton <jlayton@xxxxxxxxxx>