Hi Eric,
On Tue, Jan 09, 2018 at 09:31:27AM -0600, Eric W. Biederman wrote:
> The dangerous scenario is someone exploiting a buffer overflow, or
> otherwise getting a network facing application to misbehave, and then
> using these new attacks to assist in gaining privilege escalation.

For most use cases, sure. But for *some* use cases, if they can take
control of the application, you've already lost everything you had:
private keys, clear text traffic, etc. We're precisely talking about
such applications, where the userspace is as important as the kernel,
and where there's hardly anything left to lose once the application is
cracked. However, a significant performance drop on the application
definitely is a problem, first making it weaker when facing attacks,
and possibly even making it fail to handle traffic peaks.
> Googling seems to indicate that there is about one issue a year found
> in haproxy. So this is not an unrealistic concern for the case you
> mention.

I agree. But in practice, we had two exploitable bugs: one in 2002
(an overflow in the logs), and one in 2014 requiring a purposely
written config which makes no practical sense at all. Most other
vulnerabilities involve freezes, occasionally crashes, though those
are even rarer.
And even with the two above, you get only one chance to try to exploit
them: if you get your pointer wrong, the process dies and you have to
wait for the admin to restart it. In practice, seeing the process die
is the worst nightmare of admins, as the service simply stops. I'm not
saying we don't want to defend these processes; we even chroot to an
empty directory and drop privileges to mitigate such a risk. But once
the intruder is inside the process, it's really too late.
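To give an idea, the hardening mentioned above boils down to roughly
the following at startup. This is only a minimal sketch: the path, UID
and GID are made-up examples, not our actual values.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <grp.h>

/* Confine the process to an empty directory and shed root privileges.
 * Path, UID and GID below are made-up examples.
 */
static void drop_privileges(void)
{
    uid_t uid = 500;    /* example unprivileged user  */
    gid_t gid = 500;    /* example unprivileged group */

    /* lock the process into an empty, unwritable directory */
    if (chroot("/var/empty") != 0 || chdir("/") != 0) {
        perror("chroot");
        exit(1);
    }

    /* drop supplementary groups, then the group, then the user;
     * setuid() must come last, otherwise we lose the right to
     * call setgid().
     */
    if (setgroups(0, NULL) != 0 || setgid(gid) != 0 || setuid(uid) != 0) {
        perror("drop privileges");
        exit(1);
    }
}

After this runs, even a fully compromised process can neither open new
files nor regain its privileges on its own.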
> So unless I am seeing things wrong, this is a patchset designed to
> drop your defenses on the most vulnerable applications.

In fact it can be seen very differently. By making it possible for
exposed but critical applications to share some risks with the rest
of the system, we also ensure they remain strong for their initial
purpose and against the most common types of attacks. And quite
frankly, we're not weakening much given the risks the process itself
already carries.
What I'm describing represents a small category of processes in only
certain environments. Some database servers will have the same issue.
Imagine a Redis server for example, which normally is very fast and
easily saturates whatever network surrounds it. Some DNS providers may
have the same problem when dealing with hundreds of thousands to
millions of UDP packets per second (not counting attacks).

All such services are critical in themselves, but the fact that we
accept letting them share the risks with the system doesn't mean they
should run without protection from the occasional operations guy who
is just allowed to connect there to check whether the logs are full
and to retrieve stats.
> Disabling protection on the most vulnerable applications is not
> behavior I would encourage.

I'm not encouraging this behaviour either, but right now the only
option for performance-critical applications (even if they are
vulnerable) is to make the whole system vulnerable.
> It seems better than disabling protection system wide, but only
> slightly. I definitely don't think this is something we want
> applications disabling themselves.

In fact that's what I liked about the wrapper approach, except that it
had the downside of being harder to manage in terms of administration,
and we'd risk seeing it used everywhere by default. The arch_prctl()
approach ensures that only applications where this is relevant can do
it. In the case of haproxy, I can trivially add a config option like
"disable-page-isolation" to let the admin enable it on purpose.
But I suspect there might be some performance-critical applications
that cannot be patched, and that's where the wrapper could still
provide some value.
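Such a wrapper could be as small as the sketch below, reusing the
placeholder above. It assumes the setting is preserved across
execve(), which is how I understood the wrapper proposal; whether the
flag actually survives the exec is of course up to the patchset's
semantics.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_DISABLE_PTI
#define ARCH_DISABLE_PTI 0x1022    /* placeholder, see above */
#endif

/* Usage: nopti <cmd> [args...]
 * Disables PTI for the current process, then executes <cmd>.
 */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <cmd> [args...]\n", argv[0]);
        return 1;
    }

    if (syscall(SYS_arch_prctl, ARCH_DISABLE_PTI, 0) != 0)
        perror("arch_prctl");    /* warn but still run the command */

    execvp(argv[1], argv + 1);
    perror("execvp");
    return 1;
}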
> Certainly this is something that should look at no-new-privs, and if
> no-new-privs is set, not allow disabling this protection.

I don't know what "no-new-privs" is and unfortunately couldn't find
info on it. Do you have a link please?
Thanks!
Willy