Re: file effective and process inheritable mask

Y2K (y2k@y2ker.com)
Fri, 23 Apr 1999 22:41:33 -0700 (PDT)


On Sat, 24 Apr 1999, Albert D. Cahalan wrote:
> Y2K writes:
> > On Fri, 23 Apr 1999, Albert D. Cahalan wrote:
> >> The two most important extensions IMHO are:
> >> 2. The ability to mark an executable with the capabilities that were
> >> known to the setup tool. The kernel can use default values for any
> >> bits that are not listed. This lets kernel developers add new bits
> >> for traditionally unprivileged operations and lets them split old
> >> bits into more fine-grained sets of bits.
> > How about CAP_LINUX_RESERVE_NORMAL_0 through CAP_LINUX_RESERVE_NORMAL_15
> > being defined for future caps which are new caps that are presently available
> > to everyone, even non-root ones.
> How many is enough? I do not think we should reserve a particular number
> of slots. Also, that only covers the first problem.
Well, right now the total number of caps is less than 32, so that would be a
good place to start. As you define more caps, you reserve others in their
place. You just don't try to grow them too fast.
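Purely as an illustration (none of these names or numbers exist in
capability.h today):

    /* Reserve a block of "normal" slots up front; hand them out one at a
     * time as new unprivileged-by-default caps are defined. */
    #define CAP_LINUX_RESERVE_NORMAL_0   20
    #define CAP_LINUX_RESERVE_NORMAL_1   21
    /* ... through CAP_LINUX_RESERVE_NORMAL_15 ... */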
> Let's imagine that it is 2003, Linux 3.0 is out, and everybody is using
> capabilities. Someone decides to split CAP_DAC_READ_SEARCH into less
> powerful components CAP_DAC_SEARCH and CAP_DAC_READ.
You have a compatibility period where the checks are
capable(CAP_DAC_READ_SEARCH) || capable(CAP_DAC_foo), where foo is either
READ or SEARCH, or, if you needed both,
capable(CAP_DAC_READ_SEARCH) || (capable(CAP_DAC_READ) && capable(CAP_DAC_SEARCH)).
After enough time, you make the old bit CAP_OBSOLETE.
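In kernel terms the transitional check might look roughly like this
(CAP_DAC_READ and CAP_DAC_SEARCH don't exist yet, so those two defines and
their numbers are just placeholders for the example):

    #include <linux/capability.h>
    #include <linux/sched.h>        /* capable() */

    /* Hypothetical new bits from the split; the numbers are placeholders. */
    #define CAP_DAC_READ    28
    #define CAP_DAC_SEARCH  29

    /* Transitional check: the old combined bit keeps working while the
     * fine-grained bits are phased in. */
    static int may_read_and_search(void)
    {
            if (capable(CAP_DAC_READ_SEARCH))
                    return 1;
            return capable(CAP_DAC_READ) && capable(CAP_DAC_SEARCH);
    }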
I guess that's also why capability.h has _LINUX_CAPABILITY_VERSION defined,
and the ELF header stuff has room for version info. Caps might be very
fluid. Maybe you need libc support to translate ASCII string cap
descriptions into bitmasks? compile_caps()?
At the very least, being able to ask the kernel a few questions would be
useful.
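A rough userland sketch of what I mean by compile_caps (the name, the table
and the behaviour are all guesses, not an existing libc interface):

    #include <string.h>

    struct cap_name { const char *name; unsigned int bit; };

    /* ASCII names mapped to bit numbers; real values would come from
     * <linux/capability.h>. */
    static const struct cap_name cap_names[] = {
            { "cap_chown",           0 },
            { "cap_dac_read_search", 2 },
            { "cap_net_admin",      12 },
    };

    /* Turn "cap_chown,cap_net_admin" into a bitmask; -1 on unknown name. */
    static int compile_caps(const char *list, unsigned int *mask)
    {
            char buf[256], *tok;
            size_t i;
            int found;

            *mask = 0;
            strncpy(buf, list, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';

            for (tok = strtok(buf, ","); tok; tok = strtok(NULL, ",")) {
                    found = 0;
                    for (i = 0; i < sizeof(cap_names) / sizeof(cap_names[0]); i++) {
                            if (strcmp(tok, cap_names[i].name) == 0) {
                                    *mask |= 1u << cap_names[i].bit;
                                    found = 1;
                                    break;
                            }
                    }
                    if (!found)
                            return -1;
            }
            return 0;
    }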
> Do you change the meaning of an existing bit, or do you end up with
> all three capabilities? If you have all three, how do they interact?
> If you change the meaning of an existing bit, stuff will break.
Yes, portability concerns between different versions of un*x, including
future ones, are important. If you haven't moved to autoconf yet, now is
the time. Maybe you need a set of macros to produce capability bitmasks at
compile time.
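For instance, an autoconf-driven shim could fall back gracefully when the
headers you build against don't know a newer bit yet (HAVE_CAP_DAC_READ is
the sort of symbol autoconf would define; it isn't a standard macro):

    #include <linux/capability.h>

    #ifdef HAVE_CAP_DAC_READ
    # define MY_CAP_READ  CAP_DAC_READ          /* newer fine-grained bit */
    #else
    # define MY_CAP_READ  CAP_DAC_READ_SEARCH   /* older combined bit */
    #endif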
> If there is a "known set" fK associated with the file, it is safe to
> redefine the old bit. New kernels will know that the executable does
> not define the new bit, so they can set it equal to the old bit.
I think a macro facility might help.
Define a few fatter profiles like power, normal, net_power, net_normal,
file_power, file_normal, reserved_power, reserved_normal, power_admin,
usr; then at compile time you can choose something like
normal + net_power - FEW_NET_THINGS_I_KNOW_ABOUT (see the sketch below).
Anyway, something to think about. Just musing about different things.
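Something along these lines, where the profile names and the bits in them
are purely illustrative:

    #include <linux/capability.h>

    #define CAP_MASK(bit)      (1u << (bit))

    /* Made-up profiles built from existing bits. */
    #define PROFILE_NORMAL     (CAP_MASK(CAP_CHOWN) | CAP_MASK(CAP_KILL))
    #define PROFILE_NET_POWER  (CAP_MASK(CAP_NET_ADMIN) | CAP_MASK(CAP_NET_RAW))

    /* "normal + net_power - a few net things I know about" */
    #define MY_CAPS ((PROFILE_NORMAL | PROFILE_NET_POWER) & ~CAP_MASK(CAP_NET_RAW))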
> >> After those two, I think it would be nice to allow configurations that
> >> are halfway between traditional behavior and the draft. One could let
> One (or both) of us doesn't understand the other I think.
> What I propose could work like this:
> 1. The kernel calculates the draft capabilities. (call them d_pP, etc.)
> 2. The kernel calculates old-style values too. (call them o_pP, etc.)
> 3. The kernel has /proc-configurable masks. (call them p_pP, etc.)
> 4. This is done:
>
> pP = (o_pP & p_pP) | d_pP;
Yes, I want to strengthen the formula, not weaken it:
pP' = (fP & x) | (fI & pI & pP)
where for proper files x can be ~0, otherwise 0.
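As a minimal sketch with 32-bit masks (the function is mine; only the
variable names follow the draft):

    typedef unsigned int cap_t;

    /* x = ~0 for "proper" files that carry capability info, 0 otherwise,
     * so fP is only honoured for proper files. */
    static cap_t new_permitted(cap_t fP, cap_t fI, cap_t pI, cap_t pP,
                               int is_proper_file)
    {
            cap_t x = is_proper_file ? ~0u : 0u;

            return (fP & x) | (fI & pI & pP);
    }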
