Re: kdbus: to merge or not to merge?

From: David Herrmann
Date: Tue Aug 04 2015 - 04:58:12 EST


Hi

On Tue, Aug 4, 2015 at 1:02 AM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> I got Fedora
> Rawhide working under kdbus (thanks, everyone!), and I ran this little
> program:
>
> #include <systemd/sd-bus.h>
> #include <err.h>
>
> int main(int argc, char *argv[])
> {
>         while (1) {
>                 sd_bus *bus;
>                 if (sd_bus_open_system(&bus) < 0) {
>                         /* warn("sd_bus_open_system"); */
>                         continue;
>                 }
>                 sd_bus_close(bus);

You're missing a call to sd_bus_unref() here. Without it, your loop is
effectively:

while (1)
        malloc(1024);

This malloc loop alone is enough to hog your system. If I add the
required call to _unref(), your tool runs smoothly on my machine (a
corrected sketch follows the quoted program below).

>         }
> }
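
For reference, here is a sketch of the corrected program: identical to
the one quoted above, with only the missing sd_bus_unref() call added.

#include <systemd/sd-bus.h>
#include <err.h>

int main(int argc, char *argv[])
{
        while (1) {
                sd_bus *bus;
                if (sd_bus_open_system(&bus) < 0) {
                        /* warn("sd_bus_open_system"); */
                        continue;
                }
                sd_bus_close(bus);
                /* The fix: drop the last reference so the sd_bus
                 * object is actually freed on every iteration. */
                sd_bus_unref(bus);
        }
}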
>
> under both userspace dbus and under kdbus. Userspace dbus burns some
> CPU -- no big deal. I expected kdbus to fail to scale and burn a
> disproportionate amount of CPU (because I don't see how it /can/
> scale). Instead it fell over completely. I didn't bother debugging
> it, but offhand I'd guess that the system OOMed and didn't come back.

I don't see how this is related to kdbus.

> On very brief inspection, Rawhide seems to have a lot of kdbus
> connections with 16MiB of mapped tmpfs stuff each. (53 of them
> mapped, and I don't know how many exist with tmpfs backing but aren't
> mapped. Presumably the number only goes up as the degree of reliance
> on the userspace proxy goes down.)

What does this have to do with the proxy? Why would resource
consumption go *up* as the number of proxy users declines? Please
elaborate.

> I don't know of any deployed
> systems that solve it by broadcasting the lifetime of everything to
> everyone and relying on those broadcasts going through, though.

Luckily, kdbus does not do this.

Thanks
David