Using devices in Containers (was: [lxc-devel] device namespaces)

From: Eric W. Biederman
Date: Wed Sep 24 2014 - 13:43:40 EST

Serge Hallyn <serge.hallyn@xxxxxxxxxx> writes:

> Isolation is provided by the devices cgroup. You want something more
> than isolation.
> Quoting riya khanna (riyakhanna1983@xxxxxxxxx):
>> My use case for having device namespaces is device isolation. Isn't what
>> namespaces are there for (as I understand)?

Namespaces fundamentally provide for using the same ``global'' name
in different contexts. This allows them to be used for isolation
and process migration (because you can take the same name from
machine to machine).

Unless someone cares about device numbers at a namespace level
the work is done.
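For reference, device numbers are already plain ``global'' names today: a device node is just a (major, minor) pair that the kernel resolves to a driver. A quick way to see this on any Linux system with a standard /dev:

```shell
# /dev/null is character device 1:3 on every Linux system
# (the assignment is fixed in the kernel's device registry).
stat -c 'type=%F major=%t minor=%T' /dev/null
# → type=character special file major=1 minor=3
```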

The mount namespace exists to deal with file names.
The devices cgroup will limit which devices you can access (although
I can't ever imagine a case where the mount namespace would be
insufficient).
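As a minimal sketch of what the devices cgroup gives you (assuming a cgroup-v1 devices controller mounted at /sys/fs/cgroup/devices, run as root; the cgroup name "demo" is made up for illustration):

```sh
# Create a cgroup that can only touch /dev/null (char 1:3).
mkdir /sys/fs/cgroup/devices/demo
echo a           > /sys/fs/cgroup/devices/demo/devices.deny   # deny all devices
echo 'c 1:3 rwm' > /sys/fs/cgroup/devices/demo/devices.allow  # re-allow /dev/null
echo $$          > /sys/fs/cgroup/devices/demo/tasks          # move this shell in
# From here on, opening any other device node in this shell fails with EPERM,
# no matter what names or device nodes are visible in the mount namespace.
```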

>> Not everything should be
>> accessible (or even visible) from a container all the time (we have seen
>> people come up with different use cases for this). However, bind-mounting
>> takes away this flexibility.

I don't see how. If they are mounts that propagate into the container
and are controlled from outside you can do whatever you want. (I am
imagining device by device bind mounts here). It should be trivial
to have a directory tree that propagates into a container and works.
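A rough sketch of that scheme, run as root on the host (the paths and the ttyUSB0 device are hypothetical; the container is assumed to have bind-mounted /srv/c1/dev into its /dev with slave propagation):

```sh
# A small host-side /dev tree that propagates into the container.
mkdir -p /srv/c1/dev
mount --bind /srv/c1/dev /srv/c1/dev   # turn the directory into a mount point
mount --make-shared /srv/c1/dev        # let later mounts propagate downstream

# Device-by-device bind mounts, controlled entirely from outside:
touch /srv/c1/dev/ttyUSB0
mount --bind /dev/ttyUSB0 /srv/c1/dev/ttyUSB0

# Revoking the device later is just:  umount /srv/c1/dev/ttyUSB0
```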

>> I agree that assigning fixed device numbers is
>> clearly not a long-term solution. Emulation for safe and flexible
>> multiplexing, like you suggested either using CUSE/FUSE or something like
>> devpts, is what I'm exploring.
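For the devpts case the multiplexing already exists: each container can mount a private instance whose ptys are invisible to every other instance. A sketch, run as root inside the container (the newinstance option is a no-op on kernels where per-mount instances became the default):

```sh
# Give this container its own pty instance:
mount -t devpts -o newinstance,ptmxmode=0666 devpts /dev/pts
# ptys allocated via this instance's ptmx are private to it;
# /dev/ptmx can simply point at the per-instance node.
ln -sf pts/ptmx /dev/ptmx
```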

Is the problem you actually care about multiplexing devices?

I think there is quite a bit of room to talk about how to safely
and effectively use devices in containers. So let's make that the
discussion. No one actually wants device number namespaces and talking
about them only muddies the waters.
