Re: [RFC] security: replace indirect calls with static calls
From: Brendan Jackman
Date: Mon Aug 24 2020 - 11:23:06 EST
On Fri, 21 Aug 2020 at 00:46, Casey Schaufler <casey@xxxxxxxxxxxxxxxx> wrote:
>
> On 8/20/2020 9:47 AM, Brendan Jackman wrote:
[...]
> What does NOP really look like?
The NOP is the same as a regular function call, but the CALL
instruction is replaced with a NOP instruction. The code that sets up
the call parameters is unchanged, and so is the code that expects to
get the return value in eax or whatever. That means we cannot actually
call the static_calls for NULL slots; we'd get undefined behaviour
(except for void hooks). This is what Peter is talking about in the
sibling thread.
For this reason, there are _no gaps_ in the callback table. For a
given LSM hook, all the slots from base_slot_idx onwards are filled,
and all the slots before it are empty, so jumping to base_slot_idx
ensures that we never reach an empty slot. That's what the switch
trick is all about.
>
> > if ret != 0:
>
> I assume you'd want "ret != DEFAULT_RET" instead of "ret != 0".
Yeah, that's a good question - but the existing behaviour is to always
check against 0 (DEFAULT_RET is called IRC in the real code's
call_int_hook macro), which does seem strange.
> So what goes in for empty slots? What about gaps in the table?
It's a NOP, but we never execute it (explained above). There are no gaps.
>> +#define __UNROLL_MACRO_LOOP_20(MACRO, ...) \
>> + __UNROLL_MACRO_LOOP_19(MACRO, __VA_ARGS__) \
>> + MACRO(19, __VA_ARGS__)
>> +
> Where does "20" come from? Why are you unrolling beyond 11?
It's just an arbitrary limit in the unrolling macro implementation; we
aren't actually unrolling beyond 11 where the macro is used (N is set
to 11).
>
> > With this use of the table and the
> > switch, it is possible to jump directly to the first used slot and execute
> > all of the slots after. This essentially makes the entry point of the table
> > dynamic. Instead, it would also be possible to start from 0 and break after
> > the final populated slot, but that would require an additional conditional
> > after each slot.
> >
> > This macro is used to generate the code for each static slot (e.g. each
> > case statement in the previous example). This will expand into a call to
> > MACRO for each static slot defined. For example, again with 5 slots:
> >
> > SECURITY_FOREACH_STATIC_SLOT(MACRO, x, y) ->
> >
> > MACRO(0, x, y)
> > MACRO(1, x, y)
> > MACRO(2, x, y)
> > MACRO(3, x, y)
> > MACRO(4, x, y)
> >
> > This is used in conjunction with LSM_HOOK definitions in
> > linux/lsm_hook_defs.h to execute a macro for each static slot of each LSM
> > hook.
> >
> > The patches for static calls [6] are not upstreamed yet.
> >
> > The number of available slots for each LSM hook is currently fixed at
> > 11 (the number of LSMs in the kernel). Ideally, it should automatically
> > adapt to the number of LSMs compiled into the kernel.
>
> #define SECURITY_STATIC_SLOT_COUNT ( \
> 1 + /* Capability module is always there */ \
> (IS_ENABLED(CONFIG_SECURITY_SELINUX) ? 1 : 0) + \
> (IS_ENABLED(CONFIG_SECURITY_SMACK) ? 1 : 0) + \
> ... \
> (IS_ENABLED(CONFIG_BPF_LSM) ? 1 : 0))
>
Yeah, that's exactly what we need, but it needs to expand to a single
integer literal at preprocessor time; those +s won't work :(
> > If there’s no practical way to implement such automatic adaptation, an
> > option instead would be to remove the panic call by falling-back to the old
> > linked-list mechanism, which is still present anyway (see below).
> >
> > A few special cases of LSM don't use the macro call_[int/void]_hook but
> > have their own calling logic. The linked-lists are kept as a possible slow
> > path fallback for them.
>
> Unfortunately, the stacking effort is increasing the number of hooks
> that will require special handling. security_secid_to_secctx() is one
> example.
>
> >
> > Before:
> >
> > https://gist.githubusercontent.com/PaulRenauld/fe3ee7b51121556e03c181432c8b3dd5/raw/62437b1416829ca0e8a0ed9101530bc90fd42d69/lsm-performance.png
> >
> > After:
> >
> > https://gist.githubusercontent.com/PaulRenauld/fe3ee7b51121556e03c181432c8b3dd5/raw/00e414b73e0c38c2eae8f05d5363a745179ba285/faster-lsm-results.png
> >
> > With this implementation, any overhead of the indirect call in the LSM
> > framework is completely mitigated (performance results: [7]). This
> > facilitates the adoption of "bpf" LSM on production machines and also
> > benefits all other LSMs.
>
> Your numbers for a system with BPF are encouraging. What do the numbers
> look like for a system with SELinux, Smack or AppArmor?
Yeah, that's a good question. As I said in the sibling thread, the
motivating example is very lightweight LSM callbacks in very hot
codepaths, but I'll get some broader data too and report back.