Re: [RFC PATCH 00/30] Kernel NET policy
From: Hannes Frederic Sowa
Date: Mon Jul 18 2016 - 17:51:22 EST
Hello,
On Mon, Jul 18, 2016, at 21:43, Andi Kleen wrote:
> > I wonder if this can be attacked from a different angle. What would be
> > missing to add support for this in user space? The first possibility
> > that came to my mind is to just multiplex those hints in the kernel.
>
> "just" is the handwaving part here -- you're proposing a micro kernel
> approach where part of the multiplexing job that the kernel is doing
> is farmed out to a message passing user space component.
>
> I suspect this would be far more complicated to get right and
> perform well than a straight forward monolithic kernel subsystem --
> which is traditionally how Linux has approached things.
At the same time, putting any kind of policy into the kernel has also
always been avoided.
> The daemon would always need to work with out of date state
> compared to the latest, because it cannot do any locking with the
> kernel state. So you end up with a complex distributed system with
> multiple agents "fighting" with each other, and the tuning agent
> never being able to keep up with the actual work.
But you don't want the tuning agents in the fast path, do you? If you
really tried to synchronously update all queue mappings/IRQs during
socket creation or connect time, this would add the rtnl lock to
basically every socket creation, as drivers require it to be held for
queue reconfiguration. That would slow down basic socket operations a
lot and serialize them with the management interface. Even dst_entries
are not updated synchronously anymore these days, as that would require
too much locking overhead in the kernel.
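To make the locking concern concrete, here is a kernel-style sketch (not
compilable standalone, and netpolicy_assign_queue is a hypothetical
function name; rtnl_lock()/rtnl_unlock() and netif_set_xps_queue() are
the real primitives such a synchronous update would end up needing):

```c
/* Hypothetical sketch: if socket creation tried to update queue
 * mappings synchronously, it would have to take rtnl_lock, the
 * global netdev management mutex. Every socket() call would then
 * serialize against all other netdev configuration in the system. */
static int netpolicy_assign_queue(struct net_device *dev,
                                  struct sock *sk)
{
        rtnl_lock();            /* global lock, held by all mgmt ops */
        /* drivers require rtnl held for queue remapping, e.g.: */
        netif_set_xps_queue(dev, cpumask_of(smp_processor_id()),
                            sk->sk_tx_queue_mapping);
        rtnl_unlock();
        return 0;
}
```

This is exactly the coupling of the fast path to the management
interface described above; the lazy dst_entry handling avoids the same
problem by not synchronizing at all.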
> Also of course it would be fundamentally less efficient than
> kernel code doing that, just because of the additional context
> switches needed.
Synchronizing or configuring any kind of queues already requires
rtnl_mutex. I haven't tested it, but acquiring the rtnl mutex in
inet_recvmsg is unlikely to fly performance-wise and might even be very
dangerous under DoS attacks (as I see in patch 24/30).
Bye,
Hannes