doesn't look so good to me, mainly because of the
additional per-page data and per-page processing.

on 4GB of memory, with 100 guests and 50% of the pages
shared for each guest, this basically means ~1 million
pages, 500k of them shared, and 1500k x sizeof(page_container)
entries, which roughly boils down to ~25MB of wasted
memory ... increase the amount of shared pages and it
gets worse, but maybe I'm missing something here
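
(rough back-of-the-envelope for that estimate; the 16-byte
sizeof(page_container) is only an assumption here, the real
size depends on the proposed patches)

#include <stdio.h>

int main(void)
{
        /* 4GB of host memory, 4KiB pages -> ~1 million pages */
        unsigned long long pages  = (4ULL << 30) / 4096;
        /* 50% shared for each guest */
        unsigned long long shared = pages / 2;
        /* ~1500k page_container entries, assumed 16 bytes each */
        unsigned long long wasted = 1500 * 1000ULL * 16;

        printf("pages=%llu shared=%llu wasted=%lluMB\n",
               pages, shared, wasted / (1000 * 1000));
        return 0;
}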
> We need to decide whether we want to do per-container memory
> limitation via these data structures, or whether we do it via a
> physical scan of some software zone, possibly based on Mel's patches.
why not do simple per-container page accounting (as is
currently done in Linux) and use that for the limits,
without keeping a reference from container to page?
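
i.e. something like the following sketch (the struct and
function names are made up for illustration, this is not an
existing kernel interface): a per-container page counter
that gets charged/uncharged on fault/free and is checked
against the limit, so no container -> page reference has to
be kept:

#include <asm/atomic.h>
#include <linux/errno.h>

struct container {
        atomic_long_t rss_pages;        /* pages currently charged */
        long rss_limit;                 /* maximum allowed pages */
};

/* charge one page to the container, returns 0 on success */
static int container_charge_page(struct container *c)
{
        if (atomic_long_inc_return(&c->rss_pages) > c->rss_limit) {
                atomic_long_dec(&c->rss_pages);
                return -ENOMEM;
        }
        return 0;
}

/* uncharge one page when it gets unmapped/freed */
static void container_uncharge_page(struct container *c)
{
        atomic_long_dec(&c->rss_pages);
}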
best,
Herbert