Re: [00/41] Large Blocksize Support V7 (adds memmap support)

From: Nick Piggin
Date: Wed Sep 12 2007 - 14:28:09 EST


On Wednesday 12 September 2007 10:00, Christoph Lameter wrote:
> On Tue, 11 Sep 2007, Nick Piggin wrote:
> > Yes. I think we differ on our interpretations of "okay". In my
> > interpretation, it is not OK to use this patch as a way to solve VM or FS
> > or IO scalability issues, especially not while the alternative approaches
> > that do _not_ have these problems have not been adequately compared or
> > argued against.
>
> We never talked about not being able to solve scalability issues with this
> patchset. The alternate approaches were discussed at the VM MiniSummit and
> at the VM/FS meeting. You organized the VM/FS summit. I know you were
> there and were arguing for your approach. That was not sufficient?

I will still argue that my approach is the better technical solution for large
block support than yours; I don't think we made progress on that point. And
I'm quite sure we agreed at the VM summit not to rely on your patches for
VM or IO scalability.


> > So even after all this time you do not understand what the fundamental
> > problem is with anti-frag and yet you are happy to waste both our time
> > in endless flamewars telling me how wrong I am about it.
>
> We obviously have discussed this before, and the only point of your asking
> this question seems to be to have me repeat the whole line of argument
> again?

But you just showed in two emails that you don't understand what the
problem is. To reiterate: lumpy reclaim does *not* invalidate my formulae;
and antifrag does *not* isolate the issue.
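
For reference, the flavour of the argument (a rough sketch of the style of
reasoning, not necessarily the exact formulae referred to above): assume, for
simplicity, that each page is free independently with probability f. Then

\[
  P(\text{a given aligned order-}k\text{ block is entirely free}) \approx f^{2^k}.
\]

With f = 0.5 and k = 9 (2MB blocks on 4KB pages) that is 0.5^512 = 2^-512.
Lumpy reclaim and anti-frag improve the constants and weaken the independence
assumption, but they do not turn an exponentially unlikely event into a
guaranteed one.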


> > Forgive me if I'm starting to be rude, Christoph. This is really
> > irritating.
>
> Sorry, but I have had too much exposure to philosophy. Talking about
> absolutes like guarantees (which do not exist even for order-0 allocs) and
> unlikely memory fragmentation scenarios to show that something does not work
> gets into a metaphysical realm where there is no longer any data from which
> to draw firm conclusions.

Some problems cannot be solved easily, or at all, within reasonable
constraints. I have no problem with that. And it is a valid stance to
take on the fragmentation issue, the hash collision problem, etc.

But what do you say about viable alternatives that do not have to
worry about these "unlikely scenarios" at all? Why should we not use
fsblock for higher order page support?
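
To make the alternative concrete: the idea is that a block larger than
PAGE_SIZE does not need a physically contiguous higher-order allocation at
all. A minimal sketch of that general idea follows (purely illustrative; this
is not fsblock's actual interface, and all the names here are made up):

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/types.h>

/*
 * Hypothetical example only -- not fsblock's real API.  A "large" block is
 * described by a set of independently allocated order-0 pages, so the
 * filesystem never depends on higher-order contiguity in the page allocator.
 */
#define SKETCH_MAX_PAGES 16

struct large_block_sketch {
	sector_t start_sector;			/* on-disk location */
	unsigned int nr_pages;			/* e.g. 4 for 16K blocks on 4K pages */
	struct page *pages[SKETCH_MAX_PAGES];	/* need not be physically contiguous */
};

static int large_block_alloc_pages(struct large_block_sketch *blk)
{
	unsigned int i;

	for (i = 0; i < blk->nr_pages; i++) {
		/* plain order-0 allocations: no reliance on anti-frag or reclaim of contiguous ranges */
		blk->pages[i] = alloc_page(GFP_KERNEL);
		if (!blk->pages[i])
			goto out_free;
	}
	return 0;

out_free:
	while (i--)
		__free_page(blk->pages[i]);
	return -ENOMEM;
}

I/O on such a block would presumably go out as a multi-page bio rather than
as one contiguous region, which is the trade-off being argued about.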

The last time this came up, you dismissed the fsblock approach because it
adds another layer of complexity (even though it is simpler than the
buffer layer it replaces), and you also raised some other strange objection,
something about it providing higher order pages or some such.

And as for the problems I bring up with your approach, you say they
shouldn't be likely, that they don't really matter, or that they can be fixed
as we go along, or else you want me to show they are a problem in a
real-world situation!! :P


> Software reliability is inherently probabilistic, otherwise we would not
> have things like CRC sums and SHA1 algorithms. It's just a matter of
> reducing the failure rate sufficiently. The failure rate for lower order
> allocations (0-3) seems to have been significantly reduced in 2.6.23
> through lumpy reclaim.
>
> If antifrag measures are not successful (likely for 2M allocs) then other
> methods (like the large page pools that you skipped when reading my post)
> will need to be used.

I didn't skip that. We have large page pools today. How does that give
first-class support to those allocations if you have to set aside memory
reserves for them?
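
For comparison, here is roughly what a "large page pool" amounts to today,
sketched with the existing mempool API (illustrative only; the pool size and
the idea of dedicating it to large block allocations are my assumptions, not
something taken from the patchset):

#include <linux/mempool.h>
#include <linux/init.h>
#include <linux/gfp.h>

#define LARGE_BLOCK_ORDER	4	/* 64KB blocks on 4KB pages */
#define LARGE_BLOCK_RESERVE	128	/* order-4 pages held back up front */

static mempool_t *large_block_pool;

static int __init large_block_pool_init(void)
{
	/*
	 * mempool_create_page_pool() keeps a minimum number of pages of the
	 * given order in reserve, refilling the pool as pages are returned.
	 */
	large_block_pool = mempool_create_page_pool(LARGE_BLOCK_RESERVE,
						    LARGE_BLOCK_ORDER);
	return large_block_pool ? 0 : -ENOMEM;
}

The reserve has to be sized in advance and sits idle when unused, which is
the point above: a pool-backed allocation gets its reliability from memory
set aside for it, not from the allocator itself.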