Re: file effective and process inheritable mask

Albert D. Cahalan (acahalan@cs.uml.edu)
Sat, 24 Apr 1999 03:46:04 -0400 (EDT)


Y2K writes:
> On Sat, 24 Apr 1999, Albert D. Cahalan wrote:
>> Y2K writes:
>>> On Fri, 23 Apr 1999, Albert D. Cahalan wrote:

>>>> The two most important extensions IMHO are:
>>>> 2. The ability to mark an executable with the capabilities that were
>>>> known to the setup tool. The kernel can use default values for any
>>>> bits that are not listed. This lets kernel developers add new bits
>>>> for traditionally unprivileged operations and lets them split old
>>>> bits into more fine-grained sets of bits.
>>>
>>> How about CAP_LINUX_RESERVE_NORMAL_0 through CAP_LINUX_RESERVE_NORMAL_15
>>> being defined for future caps, i.e. new caps for operations that are
>>> presently available to everyone, even non-root users.
>>
>> How many is enough? I do not think we should reserve a particular number
>> of slots. Also, that only covers the first problem.
>
> Well, right now the total number of caps is less than 32, so that would
> be a good number to start out with. As you define more caps, you reserve
> others in their place. You just don't try to grow them too fast.

Ugh. Some people have very old executables. It is never really safe
to make random old privileged executables crash.

>> Let's imagine that it is 2003, Linux 3.0 is out, and everybody is using
>> capabilities. Someone decides to split CAP_DAC_READ_SEARCH into less
>> powerful components CAP_DAC_SEARCH and CAP_DAC_READ.
>
> You have a compatibility period where the checks are
> capable(CAP_DAC_READ_SEARCH) || capable(CAP_DAC_foo), where foo is either
> READ or SEARCH.

For some things, that test has to be added all over the kernel.

>> Do you change the meaning of an existing bit, or do you end up with
>> all three capabilities? If you have all three, how do they interact?
>> If you change the meaning of an existing bit, stuff will break.
>
> Yes, portability concerns between different versions of un*x, including
> future ones, are important. If you haven't moved to autoconf yet, now is
> the time. Maybe you need a set of macros to produce capability bitmasks at
> compile time.

Compile time is less than half of the problem. Compile time is easy.

>> If there is a "known set" fK associated with the file, it is safe to
>> redefine the old bit. New kernels will know that the executable does
>> not define the new bit, so they can set it equal to the old bit.
>
> I think a macro facility might help.
> Define a few fatter profiles like power, normal, net_power, net_normal,
> file_power, file_normal, reserved_power, reserved_normal, power_admin.

That won't solve the problem, but it is a nice idea anyway.

>>>> After those two, I think it would be nice to allow configurations that
>>>> are halfway between traditional behavior and the draft. One could let
>>
>> One (or both) of us doesn't understand the other I think.
>> What I propose could work like this:
>> 1. The kernel calculates the draft capabilities. (call them d_pP, etc.)
>> 2. The kernel calculates old-style values too. (call them o_pP, etc.)
>> 3. The kernel has /proc-configurable masks. (call them p_pP, etc.)
>> 4. This is done:
>>
>> pP = (o_pP & p_pP) | d_pP;
>
> Yes, I want to strengthen the formula, not weaken it:
> pP' = (fP & x) | (fI & pI & pP)
> For proper files x can be ~0, otherwise 0.

Your pP' fits into my equation as d_pP (though it is broken as written).
Notice that my equation is _not_ the pP equation from the draft.
The draft produces d_pP, and o_pP is something more lenient.
They are combined using a configurable mask.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/