Re: [PATCH v4 00/14] Add kdbus implementation

From: Andy Lutomirski
Date: Tue Mar 31 2015 - 09:59:20 EST


On Mon, Mar 30, 2015 at 9:56 AM, David Herrmann <dh.herrmann@xxxxxxxxx> wrote:
> Hi
>
> On Wed, Mar 25, 2015 at 7:12 PM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>> On Wed, Mar 25, 2015 at 10:29 AM, David Herrmann <dh.herrmann@xxxxxxxxx> wrote:
> [...]
>>>> I could be wrong about the lack of use cases. If so, please enlighten me.
>>>
>>> We have several dbus APIs that allow clients to register as a special
> handler/controller/etc. (e.g., see systemd-logind TakeControl()). The
>>> API provider checks the privileges of a client on registration and
>>> then just tracks the client ID. This way, the client can be privileged
>>> when asking for special access, then drop privileges and still use the
>>> interface. You cannot re-connect in between, as the API provider
>>> tracks your bus ID. Without message-metadata, all your (other) calls
>>> on this bus would always be treated as privileged. We *really* want to
>>> avoid this.
>>
>> Connect twice?
>
> The remote peer might cache your connection ID to track you. You have
> to use the same connection to talk to that peer, in this given
> scenario. That's how D-Bus1 is used today, and we have to follow the
> semantics.

That's yet another reason that you really ought to disconnect and
reconnect after a privilege change -- the remote peer might remember
you.

That "might" will be a big problem. Users of kdbus can't rely on any
particular concept of privilege, because the design supports too many
of them.

>
>> You *already* have to reconnect or connect twice because you have
>> per-connection metadata. That's part of my problem with this scheme
>> -- you support *both styles*, which seems like it'll give you most of
>> the downsides of both without the upsides.
>
> Not necessarily. Connection metadata describes the state at the time
> you connected to the bus. If someone asks for this information, they
> will get exactly that. In this model, you cannot drop privileges if
> you need to be privileged during setup.
> If someone asks for per-message metadata, they ought not to ask for
> per-connection creds as well. That's not the information they're
> looking for, and it will not match the state at the time the message
> was sent.
>
> There is no immediate need to make both match. For security decisions,
> we mandate per-message creds. Per-connection creds are for
> backwards-compatibility to dbus1 and for passive introspection of bus
> connections.

Backwards compatibility doesn't magically exempt security
considerations. If new code is insecure when talking to a legacy
service, it's still insecure.

[...]

>> Again, you seem to be arguing that per-connection metadata is bad, but
>> you still have an implementation of per-connection metadata, so you
>> still have all these problems.
>
> I don't see why we get the problems of per-connection metadata. Just
> because you _can_ use it doesn't mean you should use it for every
> imaginable use-case. The same goes for reading information from
> /proc. There are valid use-cases for doing so, but also a lot of
> cases where it will not provide the information you want.
>

Then you'll need to document really carefully which metadata is used
for which service. That actually seems impossible to do, since some
services will exist in both legacy and kdbus forms.

>> I'm actually okay with per-message metadata in principle, but I'd like
>> to see evidence (with numbers, please) that a send+recv of per-message
>> metadata is *not* significantly slower than a recv of already-captured
>> per-connection metadata. If this is in fact the case, then maybe you
>> should trash per-connection metadata instead and the legacy
>> compatibility code can figure out a way to deal with it. IMO that
>> would be a pretty nice outcome, since you would never have to worry
>> whether your connection to the bus is inadvertently privileged.
>
> Per-message metadata makes SEND about 25% slower, if you transmit the
> full set of all possible information. Just 3% if you only use
> PIDs+UIDs. The expensive metadata is cgroup-path and exe-path.
> If a service needs that information, however, and if that information
> is not guaranteed to be up-to-date, the service _will_ go and look it
> up in /proc or somewhere else, which is certainly a whole lot more
> expensive than the code in kdbus.

Can you give actual numbers, in ns or cycles, of how much overhead
metadata adds?

>
> In general, there seem to be a number of misconceptions in this thread
> about what kdbus is supposed to be. We're not inventing something new
> here with a clean slate, but we're moving parts of an existing
> implementation that has tons of users into the kernel, in order to fix
> issues that cannot be fixed otherwise in userspace (most notably, the
> race gaps that exist when retrieving per-message metadata). Therefore,
> we have to keep existing semantics stable, otherwise the exercise is
> somewhat pointless.

IOW you're taking something that you dislike aspects of and shoving
most of it in the kernel. That guarantees us an API in the kernel
that even the creators don't really like. This is, IMO, very
unfortunate.

Have you considered porting the kdbus per-message metadata mechanism to UDS?

--Andy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/