Re: [GIT PULL] kdbus for 4.1-rc1
From: Johannes Stezenbach
Date: Wed Apr 22 2015 - 09:10:37 EST
On Tue, Apr 21, 2015 at 09:37:44AM -0400, Havoc Pennington wrote:
>
> I think the pressure to use dbus happens for several reasons, if you
> use a side channel some example complaints people have are:
>
> * you have to reinvent any dbus solutions for security policy,
> containerization, debugging, introspection, etc.
> * you're now writing custom socket code instead of using the
> high-level dbus API
> * the side channel loses message ordering with respect to dbus messages
> * your app code is kind of "infected" structurally by a performance
> optimization concern
> * you have to decide in advance which messages are "too big" or "too
> numerous" - which may not be obvious, think of a cut-and-paste API,
> where usually it's a paragraph of text but it could in theory be a
> giant image
> * you can't do big/numerous multicast, side channel only solves the unicast
>
> There's no doubt that it's possible to use a side channel - just as it
> was possible to construct an ad hoc IPC system prior to dbus - but the
> overall OS (counting both kernel and userspace) perhaps becomes more
> complex as a result, compared to having one model that supports more
> cases.
>
> One way to frame it: the low performance makes dbus into a relatively
> leaky abstraction where there's this surprise lurking for app
> developers that they might have to roll their own IPC on the side or
> special-case some of their messages.
>
> it's not the end of the world, it's just that it would have a certain
> amount of overall simplicity (counting userspace+kernel together) if
> one solution covered almost all use-cases in this "process-to-process
> comms on local system" scenario, instead of 90% of use-cases but too
> slow for the last 10%. The simplicity here isn't only for app
> developers, it's also for anyone doing debugging or administration or
> system integration, where they can deal with one system _or_ one
> system plus various ad-hoc side channels.
Clearly it is not useful to put the burden on the app developers.
However, I do not (yet?) understand why direct links couldn't be added
to the DBus daemon and library and be used fairly transparently
by apps:
- allow peers to announce "I allow direct connect"
  (we don't want too many sockets/connections, just e.g.
  gconf, polkit, ... where it matters for performance)
- when clients do an RPC call, check if the server allows direct
  connect and then do it (via the DBus daemon as helper; see the
  sketch below)
- obviously the clients would maintain the connection to the
  DBus daemon for the remaining purposes
Of course, that means the DBus daemon cannot enforce the policy
anymore; you could use the same database, but the code which
uses it would have to move into the dbus library.
I must admit that I do not understand the importance of
message ordering between some RPC call and other messages
via the DBus daemon since the app can do the RPC call at any time.
Wrt big/numerous multicast, you are right that this wouldn't
solve it, but that doesn't seem to be the problem we need to
address? At least I've not seen any performance measurements
which would indicate it is.
That all said, I'm not opposed at all to adding kernel
infrastructure for the benefit of DBus. However, I am
quite disappointed both by the monolithic, single-purpose
design of the kdbus API, and especially by the way it
is presented to the kernel community. What I mean by the
latter is that we get an amount of kernel code which you cannot
understand unless you also understand the userspace
DBus *and* the actual usage of DBus in desktop systems,
and this is accompanied by statements along the lines
of "many smart people worked on this for two years and
everyone agreed". I.e., we only get the solution
but not the background knowledge to understand and judge
the solution for ourselves.
What I would have appreciated instead:
- performance measurement results which demonstrate
  the problem and the actual DBus use in practice for
  various message types / use cases
- an account of the attempts that have been made to
  fix it and the reasons why they failed, so we can
  understand how the current design has evolved
The latter may be asking a lot, but IPC is a core OS feature
which comes right after CPU and memory resource management
and basic I/O. The basic IPC APIs are fairly simple, the
socket API is already quite complex, and kdbus goes to
another level of complexity and cruftiness; with all
the words that have been written in this thread there is
still no adequate justification for it.
For example, I do understand the policy database has to be
in the kernel as it is checked for every message, but I
don't see why the name service needs to be in the kernel.
I suspect (lacking performance figures) that name ownership
changes are relatively rare, and lookups, too (ISTR you mentioned
clients cache the result).
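To illustrate why the lookups can stay in userspace, the client
library could keep a small cache along these lines, dropping entries
when it sees a NameOwnerChanged signal (the types and helpers below
are invented for illustration):

#include <string.h>

#define CACHE_SLOTS 64

struct owner_entry {
    char name[256];   /* well-known name, e.g. "org.freedesktop.DBus" */
    char owner[64];   /* unique name, e.g. ":1.42" */
    int  valid;
};

static struct owner_entry cache[CACHE_SLOTS];

/* cached owner, or NULL on a miss (then ask the daemon once) */
static const char *cache_lookup(const char *name)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && !strcmp(cache[i].name, name))
            return cache[i].owner;
    return NULL;
}

/* called when the library receives NameOwnerChanged for 'name' */
static void cache_invalidate(const char *name)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && !strcmp(cache[i].name, name))
            cache[i].valid = 0;
}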
For the base messaging and policy filtering I don't see why
this has to be one monolithic API and not split into a
fairly simple, general-purpose messaging API and a completely
separate API for configuring the filters and attaching them
to the bus.
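Purely as an illustration of what I mean (none of these structures or
ioctls exist, everything below is invented for the sketch), the split
could look roughly like this: a plain messaging interface on one
side, and a separate filter/policy configuration interface on the
other:

#include <linux/ioctl.h>
#include <stdint.h>

/* --- simple, general-purpose messaging API --- */
struct ipc_msg {
    uint64_t dst_id;      /* destination connection id */
    uint64_t payload_len; /* length of inline payload */
    uint8_t  payload[];   /* message body */
};

#define IPC_IOC_SEND  _IOW('i', 0x00, struct ipc_msg)
#define IPC_IOC_RECV  _IOR('i', 0x01, struct ipc_msg)

/* --- completely separate filter/policy configuration API --- */
struct ipc_filter_rule {
    uint64_t subject_id;  /* connection the rule applies to */
    uint32_t action;      /* allow/deny send, receive, own, ... */
    char     match[256];  /* name or pattern being matched */
};

#define IPC_IOC_FILTER_ATTACH  _IOW('f', 0x00, struct ipc_filter_rule)
#define IPC_IOC_FILTER_DETACH  _IOW('f', 0x01, struct ipc_filter_rule)

The two halves could then be reviewed, tested and reused
independently of each other.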
Johannes