Re: Disabling in-memory write cache for x86-64 in Linux II

From: Linus Torvalds
Date: Tue Oct 29 2013 - 17:33:59 EST


On Tue, Oct 29, 2013 at 1:57 PM, Jan Kara <jack@xxxxxxx> wrote:
> On Fri 25-10-13 10:32:16, Linus Torvalds wrote:
>>
>> It definitely doesn't work. I can trivially reproduce problems by just
>> having a cheap (==slow) USB key with an ext3 filesystem, and doing a
>> git clone to it. The end result is not pretty, and that's actually not
>> even a huge amount of data.
>
> I'll try to reproduce this tomorrow so that I can have a look at where
> exactly we are stuck. But in the last few releases problems like this were
> caused by problems in reclaim, which got fed up with seeing lots of dirty
> / under-writeback pages and ended up stuck waiting for IO to finish. Mel
> has been tweaking the logic here and there, but maybe it hasn't been fixed
> completely. Mel, do you know about any outstanding issues?

I'm not sure this has ever worked, and in the last few years the
common desktop memory size has continued to grow.

For servers and "serious" desktops, having tons of dirty data doesn't
tend to be as much of a problem, because those environments are pretty
much defined by also having fairly good IO subsystems, and people
seldom use crappy USB devices for more than doing things like reading
pictures off them etc. And you'd not even see the problem under any
such load.

But it's actually really easy to reproduce by just taking your average
USB key and trying to write to it. I just did it with a random ISO
image, and it's _painful_. And it's not so much that it's painful for
doing most other things in the background; but if you just happen to run
anything that does "sync" (and that happens in scripts), the thing just
comes to a screeching halt. For minutes.

The same obviously goes for trying to eject/unmount the media etc.

We've had this problem before with the whole "ratio of dirty memory"
thing. It was a mistake. It made sense (and came from) the days when
people had 16MB or 32MB of RAM, and the concept of "let's limit dirty
memory to x% of that" was actually fairly reasonable. But that "x%"
doesn't make much sense any more. x% of 16GB (which is quite a
reasonable amount of memory for any modern desktop) is a huge amount,
and in the meantime the performance of disks has gone up a lot
(largely thanks to SSDs), but the *minimum* performance of disks
hasn't really improved all that much (largely thanks to USB ;).
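
To put rough numbers on it (assuming the stock defaults of
dirty_background_ratio=10 and dirty_ratio=20, and a cheap USB key that
sustains maybe 5MB/s):

    10% of 16GB = ~1.6GB of dirty data before background writeback even starts
    20% of 16GB = ~3.2GB before writers get throttled synchronously
    1.6GB at ~5MB/s = five-plus minutes for a "sync" to drain to that key

which is exactly the minutes-long stall described above.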

So how about we just admit that the whole "ratio" thing was a big
mistake, and tell people that if they want to set a dirty limit, they
should do so in bytes? We already support that, but we default to the
ratio nevertheless.
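
(For reference, the byte-based knobs already exist and take
precedence: writing vm.dirty_bytes or vm.dirty_background_bytes makes
the corresponding ratio read back as 0, and vice versa. So a clued-in
admin can already do something like

    echo $((500*1024*1024)) > /proc/sys/vm/dirty_bytes
    echo $((250*1024*1024)) > /proc/sys/vm/dirty_background_bytes

with the numbers here being purely illustrative.)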

Better yet, why not make "the ratio works fine up to a certain amount,
and makes no sense past it" part of the calculations? We actually
*have* exactly that on HIGHMEM machines, where we have the
configuration option "vm_highmem_is_dirtyable" that defaults to
off. It just doesn't trigger on non-highmem machines (today: "64-bit").

So I would suggest that we just expose that "vm_highmem_is_dirtyable"
on 64-bit too, and just say that anything over 1GB is highmem. That
means that 32-bit and 64-bit environments will basically act the same,
and I think it makes the defaults a bit saner.
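
With something like the patch below, that sysctl would simply exist on
every configuration (keeping its current default of 0), so the old
unclamped behaviour stays one knob away for anyone who really wants it,
e.g.:

    sysctl vm.highmem_is_dirtyable          # 0 by default
    sysctl -w vm.highmem_is_dirtyable=1     # opt back in to "all memory is dirtyable"

(the name and the 0/1 range are just what the existing HIGHMEM-only
sysctl entry already uses).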

Limiting the amount of dirty memory to 100MB/200MB (for "start
background writing" and "wait synchronously" respectively) even if you
happen to have 16GB of memory sounds like a good idea. Sure, it might
make some benchmarks a bit slower, but it will at least avoid the
"wait forever" symptom. And if you really have a very studly IO
subsystem, the fact that it starts writing out earlier won't really be
a problem.
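
That is, incidentally, roughly what the patch below works out to: with
dirtyable memory clamped at 1GB, today's stock 10%/20% ratios turn into
about 100MB/200MB. Anybody who wanted those numbers could of course
already set them explicitly (values for illustration only):

    # ~100MB before background writeback kicks in
    vm.dirty_background_bytes = 104857600
    # ~200MB before writers are throttled
    vm.dirty_bytes = 209715200

in /etc/sysctl.conf or equivalent. The point of the patch is just to
make something in that ballpark the *default* on big-memory machines.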

After all, there are two reasons to do delayed writes:

- temp-files may not be written out at all.

Quite frankly, if you have multi-hundred-megabyte temp-files, you've
got issues.

- coalescing writes improves throughput

There are very much diminishing returns, and the big return is to
make sure that we write things out in a good order, which a 100MB
buffer should make more than possible.

So I really think that it's insane to default to 1.6GB of dirty data
before you even start writing it out, if you happen to have 16GB of
memory.

And again: if your benchmark is to create a kernel tree and then
immediately delete it, and you used to do that without doing any
actual IO, then yes, the attached patch will make that go much slower.
But for that benchmark, maybe you should just set the dirty limits (in
bytes) by hand, rather than expect the default kernel values to prefer
benchmarks over sanity?
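
Bumping the limits back up on a dedicated benchmark box is a couple of
sysctls, e.g. something like

    sysctl -w vm.dirty_background_bytes=$((4*1024*1024*1024))
    sysctl -w vm.dirty_bytes=$((8*1024*1024*1024))

with the sizes picked out of thin air; the escape hatch for "I really
do want gigabytes of dirty data" is already there.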

Suggested patch attached. Comments?

Linus
 kernel/sysctl.c     | 2 --
 mm/page-writeback.c | 7 ++++++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index b2f06f3c6a3f..411da56cd732 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1406,7 +1406,6 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &zero,
 	},
 #endif
-#ifdef CONFIG_HIGHMEM
 	{
 		.procname	= "highmem_is_dirtyable",
 		.data		= &vm_highmem_is_dirtyable,
@@ -1416,7 +1415,6 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &zero,
 		.extra2		= &one,
 	},
-#endif
 	{
 		.procname	= "scan_unevictable_pages",
 		.data		= &scan_unevictable_pages,
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 63807583d8e8..b3bce1cd59d5 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -241,8 +241,13 @@ static unsigned long global_dirtyable_memory(void)
 	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
 	x -= min(x, dirty_balance_reserve);
 
-	if (!vm_highmem_is_dirtyable)
+	if (!vm_highmem_is_dirtyable) {
+		const unsigned long GB_pages = 1024*1024*1024 / PAGE_SIZE;
+
 		x -= highmem_dirtyable_memory(x);
+		if (x > GB_pages)
+			x = GB_pages;
+	}
 
 	return x + 1;	/* Ensure that we never return 0 */
 }