Re: Encrypted Filesystem

From: Andi Kleen
Date: Tue Jan 27 2004 - 11:15:39 EST


Michael A Halcrow <mahalcro@xxxxxxxxxx> writes:

Hi,

First, thanks for attempting this work. The state of the art of encrypted
file systems on Linux is currently not satisfying and could use some
improvement.

Here are some thoughts about it:

I wrote my own crypto loop implementation at some point because I
wasn't satisfied with the existing ones for my own needs. From that
experience I think building on crypto loop is not a good idea,
because a block device is not a good unit of encryption.
A stacking file system or something similar would be better. Technically this
has the advantage that you don't need to cache the data twice (crypto
loop keeps both the unencrypted and the encrypted data in the page cache)
and the disadvantage that you need to encrypt on every write instead of
on every cache flush (which is quite reasonable with fast encryption
algorithms).

The biggest shortcoming of crypto loop is that you cannot change the
password easily. Doing so requires reencrypting the whole volume, and
that is hard to do in a crash-safe way (you risk losing the volume if
the machine crashes during reencryption). Another problem is that
encrypting directly with a key derived from the user password makes
dictionary attacks based on known plaintext easy. For example, the
first block of an ext2 file system is always zero and can easily be
used for a dictionary attack against a weak user password. The
standard crypto loop also uses fixed IVs, which do not help against
this.
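To make the attack concrete, here is a toy sketch of it. The "cipher" is a
stand-in (a SHA-256 keystream XOR), not the real crypto loop cipher, and the
password and wordlist are invented for illustration - the point is only that
a known all-zero block plus a password-derived key allows offline guessing:

```python
# Toy demo of a known-plaintext dictionary attack: the first ext2 block
# is known to be all zeros, so with a fixed IV an attacker can try
# candidate passwords offline and compare ciphertexts.
import hashlib

BLOCK = bytes(512)  # first ext2 block: known plaintext, all zeros

def toy_encrypt(password: str, plaintext: bytes) -> bytes:
    # key material derived directly from the password (the weakness)
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(
            password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

# victim encrypts the volume with a weak password
observed = toy_encrypt("hunter2", BLOCK)

# attacker: offline dictionary attack against the known-plaintext block
wordlist = ["password", "123456", "hunter2", "letmein"]
recovered = next((w for w in wordlist
                  if toy_encrypt(w, BLOCK) == observed), None)
print(recovered)  # the weak password falls immediately
```

A session key chosen independently of the password, as described below,
removes exactly this shortcut.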

I fixed this in my private implementation by using an encrypted
keyfile and a session key for the actual encryption. And the IV for
each block is generated by a hash with another secret. The
disadvantage, of course, is that you have to store the keyfile
somewhere (with loop it is not practical to put any metadata into the
encrypted volume) and not lose it. With a stacking file system that
would be easier, because you can just store it directly in the
underlying fs.
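The per-block IV scheme can be sketched in a few lines. The exact hash
construction and the names here are my assumptions for illustration, not the
scheme from my implementation:

```python
# Sketch: derive each block's IV from the block number plus a second
# secret, so identical plaintext blocks no longer get identical IVs and
# an attacker cannot precompute IVs for a dictionary attack.
import hashlib

iv_secret = b"\x13" * 32  # second secret, kept in the encrypted keyfile

def block_iv(block_number: int, length: int = 16) -> bytes:
    h = hashlib.sha256(iv_secret + block_number.to_bytes(8, "big"))
    return h.digest()[:length]

print(block_iv(0).hex())
print(block_iv(1).hex())  # distinct from block 0's IV
```

Without `iv_secret` the attacker can still enumerate IVs; with it, the IVs
are as unpredictable as the secret itself.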

One problem with this approach on a stacking file system is that you
need a new session key for each file if you encrypt them separately.
I'm not quite sure /dev/random can supply that many good secrets.
On a normal user system there is plenty of entropy from the keyboard
and mouse, but on a headless server it can be quite scarce.
For a loop device you only need the session key once, so it's not a big
issue. For per-file session keys you may need to store some secret state
separately and generate the keys from that state by a different method
(e.g. using a counter and a secure hash).
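The counter-plus-hash idea might look like the following sketch. Using
HMAC-SHA256 as the "secure hash" is my assumption of one suitable
construction; the names are illustrative:

```python
# Sketch: derive many per-file session keys from one stored secret and
# a monotonically increasing counter, instead of drawing each key from
# /dev/random.
import hashlib
import hmac

master_secret = b"\x42" * 32  # stored once, e.g. in the keyfile
counter = 0                   # persisted alongside the secret

def next_session_key() -> bytes:
    global counter
    key = hmac.new(master_secret, counter.to_bytes(8, "big"),
                   hashlib.sha256).digest()
    counter += 1  # a counter value must never be reused
    return key

k1 = next_session_key()
k2 = next_session_key()
print(k1 != k2)  # every file gets a distinct key
```

Only the master secret needs strong entropy from /dev/random, once; the
counter just has to be persisted so values are never reused after a reboot.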

Another problem with stacking file systems is that they're not really
well tested with the Linux VFS, and there may be problems with them. Still,
those are probably solvable. When you encounter any problems, please report
them to the Linux developers so they can be fixed cleanly, instead of
working around them in ugly ways in your own code.

Not directly related to the file system, but in the bigger picture the
biggest problem with using cryptography regularly on Linux is that
there is no nice way for users to prevent pages from being swapped out
to disk. Whenever you decrypt a file you risk it ending up
unencrypted on the swap partition. This means that even when your file
system encrypts perfectly, you still risk your data when reading it.

While it would be possible to encrypt swap too, I'm not sure that is a
good idea: e.g. it requires global key management, which is probably
bad, and it could cause performance problems. One idea that has been
around forever is to give each uid a global quota for mlock()ed pages.
That would at least allow the keys to be kept secure.
A somewhat more far-fetched idea I was thinking about is to make the
mlock quota quite big (say, half of memory for the currently logged-in
X user, or less on a multi-user system) and add a "crypto
tainted" flag to processes. Every process that accesses
the crypto file system would be tainted this way and prevented from
writing out any dirty pages, up to the quota. Other swapping that
doesn't involve writing dirty pages, like discarding read-only
program text, is fine. When the quota is exceeded you could warn
the user, or in a more security-paranoid setting prevent the process
from growing. This is not 100% foolproof - the secret data could still
somehow leak to other, untainted processes - but it would probably
still do much better than the current "I don't care" state.

Back from the far-fetched ideas: I think using a stacked file system
is the best way to go. Loop is just too dumb; NFS loopback or FUSE
are too slow. The biggest challenge is probably good key management
(for both session and user keys). The Linux interface will probably be
simple compared to that.

-Andi
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/