On Mon, 2008-04-14 at 21:59 -0700, Crispin Cowan wrote:
> Stephen Smalley wrote:
> > On Sun, 2008-04-13 at 19:05 -0700, Crispin Cowan wrote:
> > > > > Things that pathname-based access control is good at:
> > > > >
> > > > >     * *System Integrity:* Many of the vital components of a UNIX system
> > > > >       are stored in files with Well Known Names such as /etc/shadow,
> > > > >       /var/www/htdocs/index.html and /home/crispin/.ssh/known_hosts. The
> > > > >       contents of the actual data blocks is less important than the
> > > > >       integrity of what some random process gets when it asks for these
> > > > >       resources by name. Preserving the integrity of what responds to
> > > > >       the Well Known Name is thus easier if you restrict access based on
> > > > >       the name.
> > > > I think some might argue that the integrity of the data in /etc/shadow
> > > > and your .ssh files is very important, not just their names.
> > > I understand how the confidentiality of secrets like the contents of
> > > /etc/shadow and your .ssh files is important, but how can the integrity
> > > of these data objects be important? Back them up if you care ...
> > If you aren't concerned with unauthorized data flow into
> > your /etc/shadow and .ssh files, then I think we'll just have to stop
> > right there in our discussion, as we evidently don't have a common point
> > of reference in what we mean by "security". Personally I'd be troubled
> > if an unauthorized entity can ultimately feed data to such files, even
> > if indirectly by tricking a privileged process into conveying the data
> > to its ultimate target, a not-so-uncommon pattern.
> Of *course* AppArmor protects the integrity of /etc/shadow, and
> unauthorized parties are not permitted to feed data into that file
> unless explicit access is granted. The difference is in how it is done:
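For concreteness, the by-name protection being claimed here would look
roughly like the following AppArmor-style profile fragment. This is a
minimal sketch; the program path and the individual rules are
illustrative, not taken from the thread.

    # Illustrative profile: permissions attach to the Well Known Name the
    # process asks for, not to a label carried by the object itself.
    # (Anything not listed is denied by default; the explicit deny just
    # makes the integrity goal visible in the policy text.)
    /usr/sbin/httpd {
      /var/www/htdocs/** r,       # may read the web tree by name
      /var/log/httpd/*.log w,     # may append to its own logs
      deny /etc/shadow rw,        # must never touch the shadow file
    }

Whether a rule like the last one is sufficient when data can also reach
the file indirectly, through a privileged intermediary, is exactly the
disagreement in the rest of the exchange.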
> Ok. I view the above as a marginal nice-to-have property that I don't
> actually care much about, because it is a large amount of work to manage
> for a small amount of integrity to gain. People who want that should use
> some kind of information flow controlling policy system like SELinux.
> > And ultimately data integrity requires information flow control to
> > preserve.
> You've argued that before, and I've never been convinced. Rather, it
> looked a lot like a stretched definition trying really hard to turn
> integrity into an information flow problem. The most information flow
> that I will buy in the integrity problem is taint analysis of software
> inputs; that software should validate inputs before acting on it.

In some cases, you can simply prohibit a security-relevant process from
taking untrustworthy inputs. Like blocking privileged processes from
following untrustworthy symlinks to counter malicious symlink attacks or
from reading any files other than ones created by the admin. In other
cases, you need to allow untrustworthy inputs to ultimately flow to the
security-relevant process, but you want to force them through some kind
of validation as you say above, which you can do by enforcing a
processing pipeline that forces the data to go through a subsystem that
performs validation and/or sanitization before it ever reaches the
security-relevant process. That's how integrity is an information flow
problem. And this isn't a new idea, btw; it is one that was expressed
long ago in the Biba model, a variant of which happens to be implemented
and used in Vista, and is more usefully achievable via Type Enforcement,
since there we can control the processing flow precisely and bind the
validation/sanitization subsystem to specific code.
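A minimal Type Enforcement sketch of that pipeline, with hypothetical
type names chosen only to show the shape of such a policy, might look
like:

    # Raw input can reach the security-relevant domain only after passing
    # through a dedicated validator domain bound to specific code.
    type untrusted_t;        # unvalidated input
    type validated_t;        # output of the validator
    type validator_t;        # the validation/sanitization domain
    type validator_exec_t;   # the only code allowed to enter validator_t
    type secadm_app_t;       # the security-relevant consumer

    allow validator_t untrusted_t:file read;
    allow validator_t validated_t:file { create write };
    allow validator_t validator_exec_t:file entrypoint;  # bind domain to code
    allow secadm_app_t validated_t:file read;

    # Build-time assertion of the flow property: the consumer can never
    # read the raw input directly.
    neverallow secadm_app_t untrusted_t:file read;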
> > - anything further is misleading as the
> > server or device won't ensure any finer grained separation for us.
> I don't understand this issue. The enforcement here is to contain the
> program executing on the NFS *client* to permit it to only mangle the
> parts of the NFS mount that you want it to mangle. That the server
> won't enforce anything for you is irrelevant when the threat is the
> confined application. You don't have to consider any such thing when
> you are *only* concerned with confining the impact of the process
> running on the NFS client.

Except that you have to consider what is happening on the server too,
given that the files are visible to local processes there, and what
happens on all of the clients.
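The client-side confinement being described is, in AppArmor terms,
roughly a subtree rule on the mount point. The profile name and paths
below are hypothetical, and the usual support rules are omitted.

    # Confine the client-side program to one subtree of the NFS mount;
    # everything else, on the mount or off it, is denied by default
    # within the profile.
    /usr/bin/render-job {
      /mnt/nfs/project/scratch/** rw,
    }

Such a rule says nothing about the server or about other clients, which
is precisely the scope limitation at issue in this part of the exchange.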
> Conversely, at the end of the day you can't say much about what your
> SELinux policy enforces, because you can't understand it :)

It isn't a strawman argument. I know that AppArmor doesn't try to apply
pathnames to non-files. Which leads it down the first case of
"inconsistent" control - at the end of the day, in looking at an AppArmor
policy you can't say anything about how information may have ultimately
flowed in violation of your confidentiality or integrity goals, because
you have a lossy abstraction. Whereas we can convey the same uniform
control over files, network IPC, local IPC, etc. and make such
statements.
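The "uniform control" being contrasted with that lossy abstraction can
be sketched in SELinux policy terms; the type names below are
hypothetical:

    # The same label-based mediation applies to files, network IPC and
    # local IPC alike, so statements about what may flow into or out of
    # mydaemon_t are not limited to one kind of object.
    allow mydaemon_t mydata_t:file { read write };
    allow mydaemon_t mydaemon_t:tcp_socket { create connect };
    allow mydaemon_t peer_t:unix_stream_socket connectto;
    neverallow mydaemon_t shadow_t:file { write append };  # checked at build time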
> > - forcing policy to be written in terms of individual objects and
> > filesystem layout rather than security properties.
> So associating a security property with a name is ok if you do it
> statically at some arbitrary point in time, but not if you consider it
> at the time of access? WtF? Isn't that a gigantic race condition?
> Note also that the SELinux restorecon mechanism also makes the
> assumption that path names correspond to security properties: in fact,
> that is precisely its function, to take a path name and use it to apply
> a security property (a label). Naturally I have no objection to
> inferring a security property from the path name :) I just object to
> the racy way that restorecon does it, combined with the complaint that
> AppArmor is wrong for doing exactly the same thing in a different way.

Making that inference when a file is first installed (as from rpm) is
reasonable. restorecon (the utility) is for restoring the filesystem to
the initial install-time labeling state, which is why it uses the same
mapping. Making that inference on every access, in complete ignorance of
the actual runtime state of the system, is what I object to.
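As a concrete illustration of that distinction, the install/relabel-time
name-to-label mapping lives in file_contexts and is applied by
restorecon; the entries below are abbreviated examples using labels from
the common reference policy.

    # file_contexts: name-to-label mapping consulted at install/relabel time
    /etc/shadow        --    system_u:object_r:shadow_t:s0
    /var/www(/.*)?           system_u:object_r:httpd_sys_content_t:s0

    # Applied once, e.g. after package installation:
    #   restorecon -Rv /etc /var/www
    # From then on, enforcement checks the label the inode carries, not
    # whichever name happens to reach it at access time.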