Re: dirty balancing deadlock
From: Miklos Szeredi
Date: Sun Feb 18 2007 - 17:54:55 EST
> Andrew Morton wrote:
> > On Sun, 18 Feb 2007 19:28:18 +0100 Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
> >
> >> I was testing the new fuse shared writable mmap support, and finding
> >> that bash-shared-mapping deadlocks (which isn't so strange ;). What
> >> is more strange is that this is not an OOM situation at all, with
> >> plenty of free and cached pages.
> >>
> >> A little more investigation shows that a similar deadlock happens
> >> reliably with bash-shared-mapping on a loopback mount, even if only
> >> half the total memory is used.
> >>
> >> The cause is slightly different in the two cases:
> >>
> >> - loopback mount: allocation by the underlying filesystem is stalled
> >> on throttle_vm_writeout()
> >>
> >> - fuse-loop: page dirtying on the underlying filesystem is stalled on
> >> balance_dirty_pages()
> >>
> >> In both cases the underlying fs is totally innocent, with no
> >> dirty/writeback pages, yet it's waiting for the global dirty+writeback
> >> to go below the threshold, which obviously won't, until the
> >> allocation/dirtying succeeds.
> >>
> >> I'm not quite sure what the solution is, and asking for thoughts.
> >
> > But.... these things don't just throttle. They also perform large amounts
> > of writeback, which causes the dirty levels to subside.
> >
> > From your description it appears that this writeback isn't happening, or
> > isn't working. How come?
>
> Is the fuse daemon trying to do writeback to itself, perhaps?
>
> That is, trying to write out data to the FUSE filesystem, for which
> it is also the server.
No. It's trying to write out data to a different filesystem.
Trying to write out data to itself very obviously deadlocks, but that
doesn't affect anything besides the stupid filesystem itself, and there
are mechanisms for aborting such a situation (forced umount, abort
through the fuse-control filesystem).
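
To illustrate the mechanism: here's a rough userspace model of what the
throttling looks like from the underlying filesystem's side.  This is
only a sketch, not the real mm/page-writeback.c code; the counters,
threshold and function names are all made up.  A writer on the innocent
filesystem B loops on a *global* dirty+writeback check, while the pages
keeping that count high belong to filesystem A, whose writeback can't
make progress until B's write does.

/*
 * Rough userspace model of the stall -- NOT the real mm/page-writeback.c
 * code; counters, threshold and function names are invented for
 * illustration only.
 */
#include <stdio.h>
#include <stdbool.h>

#define GLOBAL_DIRTY_THRESH 100		/* pretend global threshold (pages) */

static long dirty_fs_a = 120;		/* pages dirtied through loop/fuse (fs A) */
static long dirty_fs_b = 0;		/* pages dirtied on the underlying fs (fs B) */

/*
 * Cleaning one of A's pages requires writing it to B, i.e. allocating or
 * dirtying on B first -- exactly the operation that is stuck below.
 */
static bool writeback_one_page_of_a(void)
{
	return false;			/* can't make progress */
}

/*
 * What the global throttle effectively does from B's point of view:
 * wait for the *global* count to drop, even though none of the dirty
 * pages belong to B.
 */
static void throttle_writer_on_b(void)
{
	int tries = 0;

	while (dirty_fs_a + dirty_fs_b > GLOBAL_DIRTY_THRESH) {
		if (writeback_one_page_of_a())
			dirty_fs_a--;	/* never happens */
		if (++tries > 5) {
			printf("fs B stuck: global dirty+writeback = %ld > %d,"
			       " but fs B owns %ld of it\n",
			       dirty_fs_a + dirty_fs_b, GLOBAL_DIRTY_THRESH,
			       dirty_fs_b);
			return;		/* in the kernel this just keeps looping: deadlock */
		}
	}
}

int main(void)
{
	throttle_writer_on_b();
	return 0;
}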
Miklos