Re: [PATCH v2 6/9] s390/module: Use s390_kernel_write() for late relocations
From: Joe Lawrence
Date: Thu Apr 30 2020 - 10:38:33 EST
On Thu, Apr 23, 2020 at 01:10:30PM -0500, Josh Poimboeuf wrote:
> On Thu, Apr 23, 2020 at 09:12:28AM -0500, Josh Poimboeuf wrote:
> > > > this is strange. While I would have expected an exception similar to
> > > > this, it really should have happened on the "sturg" instruction which
> > > > does the DAT-off store in s390_kernel_write(), and certainly not with
> > > > an ID of 0004 (protection). However, in your case, it happens on a
> > > > normal store instruction, with 0004 indicating a protection exception.
> > > >
> > > > This is more like what I would expect e.g. in the case where you do
> > > > _not_ use the s390_kernel_write() function for RO module text patching,
> > > > but rather normal memory access. So I am pretty sure that this is not
> > > > related to the s390_kernel_write(), but some other issue, maybe some
> > > > place left where you still use normal memory access?
> > >
> > > The call trace above also suggests that it is not a late relocation, no?
> > > The path is from the KLP module init function through klp_enable_patch(). It
> > > should mean that the to-be-patched object is loaded (it must be a module,
> > > thanks to a check in klp_init_object_loaded(); vmlinux relocations were
> > > processed earlier in apply_relocations()).
> > >
> > > However, the KLP module state here must be COMING, so s390_kernel_write()
> > > should be used. What are we missing?
> >
> > I'm also scratching my head. It _should_ be using s390_kernel_write()
> > based on the module state, but I don't see that on the stack trace.
> >
> > This trace (and Gerald's comment) seems to imply it's using
> > __builtin_memcpy(), which might be expected for UNFORMED state.
> >
> > Weird...
>
> Mystery solved:
>
> $ CROSS_COMPILE=s390x-linux-gnu- scripts/faddr2line vmlinux apply_rela+0x16a/0x520
> apply_rela+0x16a/0x520:
> apply_rela at arch/s390/kernel/module.c:336
>
> which corresponds to the following code in apply_rela():
>
>
> 	case R_390_PLTOFF64:	/* 16 bit offset from GOT to PLT. */
> 		if (info->plt_initialized == 0) {
> 			unsigned int *ip;
> 			ip = me->core_layout.base + me->arch.plt_offset +
> 				info->plt_offset;
> 			ip[0] = 0x0d10e310;	/* basr 1,0 */
> 			ip[1] = 0x100a0004;	/* lg	1,10(1) */
>
>
> Notice how it's writing directly to text... oops.
>
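Right, and presumably the fix is to stage those PLT instructions in a
local buffer and push them out with s390_kernel_write() once the module
text can be read-only. A rough sketch of the idea (my illustration
only, not necessarily what this patch does; module_write() is a made-up
helper name):

/*
 * Illustration only: pick the writer based on module state.
 * s390_kernel_write() does a DAT-off store, so it can patch text that
 * is already mapped read-only; plain memcpy is fine while the module
 * is still MODULE_STATE_UNFORMED.
 */
static void module_write(struct module *me, void *loc,
			 const void *src, size_t len)
{
	if (me->state == MODULE_STATE_UNFORMED)
		memcpy(loc, src, len);
	else
		s390_kernel_write(loc, src, len);
}

	/* ... and in apply_rela(), build the PLT entry off to the side: */
	unsigned int insn[2];

	insn[0] = 0x0d10e310;	/* basr 1,0 */
	insn[1] = 0x100a0004;	/* lg	1,10(1) */
	module_write(me, ip, insn, sizeof(insn));
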
This is more of a note for the future, but when/if we add livepatch
support on arm64, we'll need to make the very same adjustment there as
well. See the following pattern:
arch/arm64/kernel/module.c:

	reloc_insn_movw()
	reloc_insn_imm()
	reloc_insn_adrp()
		*place = cpu_to_le32(insn);
Maybe something like aarch64_insn_patch_text_nosync() could be used
there, I dunno. (It looks like ftrace and jump_label are already using
that interface.)
This is outside the scope of the patchset, but I thought I'd mention it
as I was curious to see how other arches were currently handling their
relocation updates.
-- Joe