Re: [PATCH] writeback: Don't wait for completion in writeback_inodes_sb_nr
From: Curt Wohlgemuth
Date: Thu Jul 14 2011 - 12:29:42 EST
Hi Jan:
On Tue, Jul 12, 2011 at 3:37 PM, Jan Kara <jack@xxxxxxx> wrote:
> On Tue 12-07-11 06:41:32, Christoph Hellwig wrote:
>> On Tue, Jul 12, 2011 at 12:34:53PM +0200, Jan Kara wrote:
>> > > All block device inodes sit on blockdev_superblock; we got rid of inodes
>> > > without a superblock a long time ago.
>> > Sure, we can easily iterate over blockdev_superblock as well. What I meant
>> > is that blockdev_superblock will need special handling, since we otherwise
>> > ignore pseudo superblocks...
>>
>> Pseudo superblocks aren't ignored. They are added to super_blocks like
>> all others, and iterate_supers doesn't skip over them. The problem
>> is that blockdev_superblock doesn't have a proper s_bdi set, and thus
>> gets skipped over by __sync_filesystem.
> Yes. But even if it were not skipped, writeback_inodes_sb() would have no
> single flusher thread to kick to actually do the writeout (since each inode
> on blockdev_superblock belongs to a different bdi). So it's perfectly fine
> that we skip blockdev_superblock.
>
> If we want to fix the problem something like attached patch should do.
> Comments?
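For context, the skip Christoph mentions above is the bdi check at the
top of __sync_filesystem(); paraphrasing from memory (exact details vary
between kernel versions), the relevant path looks roughly like this:

static int __sync_filesystem(struct super_block *sb, int wait)
{
	/*
	 * No proper backing bdi (blockdev_superblock among others), so
	 * sync bails out here and never pushes these dirty pages.
	 */
	if (!sb->s_bdi || sb->s_bdi == &noop_backing_dev_info)
		return 0;

	if (wait)
		sync_inodes_sb(sb);		/* WB_SYNC_ALL pass */
	else
		writeback_inodes_sb(sb);	/* WB_SYNC_NONE pass */

	if (sb->s_op->sync_fs)
		sb->s_op->sync_fs(sb, wait);
	return __sync_blockdev(sb->s_bdev, wait);
}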
Your patch looks good to me, in that it does hit all the bdevs with
both WB_SYNC_NONE and WB_SYNC_ALL passes. However, I still maintain
that the call to wakeup_flusher_threads() in sys_sync() is superfluous,
at least as long as writeback_inodes_sb() waits for completion of the
work item it enqueues.
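To spell that out, sys_sync() currently does roughly the following
(again paraphrased from memory):

SYSCALL_DEFINE0(sync)
{
	wakeup_flusher_threads(0);	/* kick background writeback on all bdis */
	sync_filesystems(0);		/* WB_SYNC_NONE pass over all supers;
					 * writeback_inodes_sb() queues a work
					 * item per sb and waits for it */
	sync_filesystems(1);		/* WB_SYNC_ALL pass, waits on every page */
	if (unlikely(laptop_mode))
		laptop_sync_completion();
	return 0;
}

Since the WB_SYNC_NONE pass already queues and waits for a flusher work
item on every superblock, the initial wakeup_flusher_threads() call buys
us essentially nothing.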
Thanks,
Curt
>
> Honza
>
> PS: While testing the patch, I've noticed that a block device can have
> dirty data only while it is still open (__blkdev_put() writes out all dirty
> pages), which somewhat limits how badly people can be burned by sync not
> writing out block devices...
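[ Right -- the last-close path in __blkdev_put() ends up doing roughly

	if (!--bdev->bd_openers) {
		sync_blockdev(bdev);	/* write out any dirty pages */
		kill_bdev(bdev);	/* then drop the page cache */
	}

  so a bdev that is no longer open has already been cleaned. ]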
> --
> Jan Kara <jack@xxxxxxx>
> SUSE Labs, CR
>