>> - such an approach requires adding an additional argument to many
>>   functions (e.g. Eric's patch for networking is 1.5 times bigger than
>>   OpenVZ's).
>
> hmm? last time I checked OpenVZ was quite bloated
> compared to Linux-VServer, and Eric's network part
> isn't even there yet ...

This is a rather subjective feeling.
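
To make the argument-passing point concrete, a rough user-space sketch of
the two styles (all names here are invented; this is code from neither tree):

#include <stdio.h>

struct net_ns { int ip_forward; };
struct container { struct net_ns net; };

static struct container init_container = { .net = { .ip_forward = 1 } };
/* stand-in for a per-task pointer such as current->econtainer */
static struct container *current_econtainer = &init_container;

static struct container *econtainer(void) { return current_econtainer; }

/* namespace style: every function on the call chain grows an extra argument */
static int ns_ip_forward_enabled(struct net_ns *ns) { return ns->ip_forward; }

/* container style: prototype unchanged, context is picked up implicitly */
static int ct_ip_forward_enabled(void) { return econtainer()->net.ip_forward; }

int main(void)
{
        printf("namespace style: %d\n", ns_ip_forward_enabled(&econtainer()->net));
        printf("container style: %d\n", ct_ip_forward_enabled());
        return 0;
}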

>> - it can't efficiently compile down to the same non-virtualized kernel,
>>   which can be undesirable for embedded Linux.
>
> while OpenVZ does?

yes. In _most_ cases it does.

>> - fine-grained namespaces are actually an obfuscation, since kernel
>>   subsystems are tightly interconnected, e.g. network -> sysctl -> proc,
>>   mqueues -> netlink, ipc -> fs, and most often they can be used only as
>>   a whole container.
>
> I think a lot of _strange_ interconnects there could
> use some cleanup, and after that the interconnections
> would be very small

Why do you think they are strange!? Is it strange that networking exports
its sysctls and statistics via proc?

>> - you need to track dependencies between namespaces (e.g. NAT requires
>>   conntracks, IPC requires FS etc.). This should be handled, otherwise a
>>   container that is able to create nested containers will be able to
>>   cause an oops.
>> - it involves a somewhat more complicated container create/enter
>>   procedure which requires an exec or something like it, since there is
>>   no effective container which could simply be triggered.
>
> I don't understand this argument ...
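
As for the dependency point above, a hypothetical sketch of the kind of
check a fine-grained clone would need (none of these flags or functions
exist anywhere; they are purely illustrative):

#include <stdio.h>
#include <errno.h>

#define NS_FS        (1 << 0)
#define NS_IPC       (1 << 1)
#define NS_NET       (1 << 2)
#define NS_CONNTRACK (1 << 3)
#define NS_NAT       (1 << 4)

static int check_ns_dependencies(unsigned long flags)
{
        if ((flags & NS_NAT) && !(flags & NS_CONNTRACK))
                return -EINVAL;         /* NAT requires conntracks */
        if ((flags & NS_CONNTRACK) && !(flags & NS_NET))
                return -EINVAL;         /* conntracks require a net namespace */
        if ((flags & NS_IPC) && !(flags & NS_FS))
                return -EINVAL;         /* SysV IPC objects live on a fs */
        return 0;
}

int main(void)
{
        printf("NAT w/o conntrack: %d\n",
               check_ns_dependencies(NS_NET | NS_NAT));
        printf("full set:          %d\n",
               check_ns_dependencies(NS_FS | NS_IPC | NS_NET |
                                     NS_CONNTRACK | NS_NAT));
        return 0;
}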

>> 1.2. containers (OpenVZ.org/linux-vserver.org)
>
> please do not generalize here, Linux-VServer does
> not use a single container structure as you might
> think ...

>> The container solution was discussed before, and actually it is also a
>> namespace solution, but with one whole, total namespace, with a single
>> kernel structure describing it.
>
> that might be true for OpenVZ, but it is not for
> Linux-VServer, as we have structures for network
> and process contexts as well as different ones for
> disk limits

do you have support for it in tools?
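
A rough sketch of what such a single, whole-container structure could look
like (all field and type names are invented; this is neither OpenVZ nor
Linux-VServer code):

struct net_ns;          /* network devices, routes, sysctls */
struct ipc_ns;          /* SysV IPC objects */
struct fs_ns;           /* root, mount tree */
struct proc_ns;         /* per-container /proc view */

struct container {
        int              id;            /* VEID-like identifier */
        int              refcount;      /* tasks, sockets, timers pin it */
        struct net_ns   *net;
        struct ipc_ns   *ipc;
        struct fs_ns    *fs;
        struct proc_ns  *proc;
};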

>> Every task has two container pointers: container and effective
>> container. The latter is used to temporarily switch to other contexts,
>> e.g. when handling IRQs, TCP/IP etc.
>
> this doesn't look very cool to me, as IRQs should
> be handled in the host context and TCP/IP in the
> proper network space ...

this is exactly what it does.
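
A minimal user-space sketch of that container/effective-container switch
(invented names, just to show the mechanism: the task keeps living in its
container while the effective pointer is temporarily pointed at the host
for the duration of the interrupt):

#include <stdio.h>

struct container { const char *name; };

static struct container host  = { "host" };
static struct container ve101 = { "ve101" };

struct task {
        struct container *container;    /* permanent home */
        struct container *econtainer;   /* effective, may be switched */
};

static struct task current_task = { &ve101, &ve101 };

static struct container *set_econtainer(struct container *new)
{
        struct container *old = current_task.econtainer;
        current_task.econtainer = new;
        return old;
}

static void handle_irq(void)
{
        /* interrupts are accounted to the host context */
        struct container *saved = set_econtainer(&host);
        printf("irq runs in %s\n", current_task.econtainer->name);
        set_econtainer(saved);
}

int main(void)
{
        printf("task lives in %s\n", current_task.econtainer->name);
        handle_irq();
        printf("back in %s\n", current_task.econtainer->name);
        return 0;
}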

>> Benefits:
>> - a clear, logically bounded container: it is clear when a container is
>>   alive and when it is not.
>
> how does that handle the issues you described with
> sockets in wait state which have very long timeouts?

easily.

>> - it doesn't introduce additional args for most functions,
>>   no additional stack usage.
>
> a single additional arg here and there won't hurt,
> and I'm pretty sure most of them will be in inlined
> code, where it doesn't really matter

have you analyzed that before thinking about inlining?

>> - it compiles to the good old kernel when virtualization is off, so it
>>   doesn't disturb other configurations.
>
> the question here is, do we really want to turn it off at all?
> IMHO the design and implementation should be sufficiently
> good so that it does neither impose unnecessary overhead
> nor change the default behaviour ...

this is the question I want to get answered by Linus/Andrew.
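
For illustration, one possible shape of the compile-out, as a sketch only;
CONFIG_CONTAINERS, econtainer() and the variable names below are all made up:

#ifdef CONFIG_CONTAINERS

struct net_ns { int ip_forward; };
struct container { struct net_ns net; };
extern struct container *econtainer(void);

#define ve_ip_forward   (econtainer()->net.ip_forward)

#else  /* !CONFIG_CONTAINERS */

extern int sysctl_ip_forward;
#define ve_ip_forward   sysctl_ip_forward       /* exactly the old global */

#endif

/* users are written once and do not care which way the kernel was built */
static inline int forwarding_enabled(void)
{
        return ve_ip_forward;
}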

>> - Eric brought up an interesting idea about introducing an interface like
>>   DEFINE_CPU_VAR(), which could potentially allow creating virtualized
>>   variables automagically and accessing them via econtainer().
>
> how is that an advantage of the container approach?

Such vars can automatically be defined to something like
"(econtainer()->virtualized_variable)".
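
A hypothetical sketch of what such an accessor could expand to (no such
macro exists in any tree; names are for illustration only):

struct container {
        long    virtualized_variable;   /* one slot per virtualized global */
};

extern struct container *econtainer(void);

/* callers never see the container; the macro resolves per context */
#define VE_VAR(name)    (econtainer()->name)

static inline long read_it(void)
{
        return VE_VAR(virtualized_variable);
}

static inline void write_it(long val)
{
        VE_VAR(virtualized_variable) = val;     /* works as an lvalue too */
}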

>> - mature working code exists which has been used in production for years,
>>   so a first working version can be done much quicker
>
> from the OpenVZ/Virtuozzo(tm) page:
>
>   Specific benefits of Virtuozzo(tm) compared to OpenVZ can be found below:
>
>   - Higher VPS density. Virtuozzo(tm) provides efficient memory and file
>     sharing mechanisms enabling higher VPS density and better performance
>     of VPSs.
>   - Improved Stability, Scalability, and Performance. Virtuozzo(tm)
>     is designed to run 24×7 environments with production workloads
>     on hosts with up to 32 CPUs.
>
> so I conclude, OpenVZ does not contain the code which
> provides all this ..

:))))