Re: Crash when IO is being submitted and block size is changed
From: Jeff Moyer
Date: Thu Jul 19 2012 - 09:33:43 EST
Mikulas Patocka <mpatocka@xxxxxxxxxx> writes:
> On Tue, 17 Jul 2012, Jeff Moyer wrote:
>
>> > This is the patch that fixes this crash: it takes a rw-semaphore around
>> > the whole direct-IO path.
>> >
>> > (note that if someone is concerned about performance, the rw-semaphore
>> > could be made per-cpu --- take it for read on the current CPU and take it
>> > for write on all CPUs).
>>
>> Here we go again. :-) I believe we had at one point tried taking a rw
>> semaphore around GUP inside of the direct I/O code path to fix the fork
>> vs. GUP race (that still exists today). When testing that, the overhead
>> of the semaphore was *way* too high to be considered an acceptable
>> solution. I've CC'd Larry Woodman, Andrea, and Kosaki Motohiro who all
>> worked on that particular bug. Hopefully they can give better
>> quantification of the slowdown than my poor memory.
>>
>> Cheers,
>> Jeff
>
> A down_read/up_read pair takes 82 ticks on Core2, 69 ticks on AMD K10,
> and 62 ticks on UltraSparc2 when the target is in the L1 cache. So, if
> percpu rw_semaphores were used, the direct-IO path would slow down by
> only that amount.
Sorry, I'm not familiar with per-cpu rw semaphores. Where are they
implemented?
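
For illustration, the per-cpu rw-semaphore idea described above could be
sketched roughly as follows: one rw_semaphore per CPU, readers take only
their local CPU's semaphore, and a writer must take every CPU's semaphore.
All names in the sketch are hypothetical; this is not an existing kernel
API, just one way the scheme could look.

/*
 * Illustrative sketch only (hypothetical names): the read side costs
 * about one plain down_read()/up_read() pair with no cross-CPU cache
 * line bouncing; the write side is expensive but expected to be rare.
 */
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/rwsem.h>
#include <linux/smp.h>

struct percpu_rw_sem {
        struct rw_semaphore __percpu *sem;      /* one rwsem per CPU */
};

static int percpu_sem_init(struct percpu_rw_sem *p)
{
        int cpu;

        p->sem = alloc_percpu(struct rw_semaphore);
        if (!p->sem)
                return -ENOMEM;
        for_each_possible_cpu(cpu)
                init_rwsem(per_cpu_ptr(p->sem, cpu));
        return 0;
}

/* Returns the CPU whose semaphore was taken; pass it back to the up call. */
static int percpu_sem_down_read(struct percpu_rw_sem *p)
{
        /*
         * Prefer the local CPU's semaphore for cache locality; correctness
         * only requires that we later release the same semaphore we took,
         * even if the task migrates in between.
         */
        int cpu = raw_smp_processor_id();

        down_read(per_cpu_ptr(p->sem, cpu));
        return cpu;
}

static void percpu_sem_up_read(struct percpu_rw_sem *p, int cpu)
{
        up_read(per_cpu_ptr(p->sem, cpu));
}

static void percpu_sem_down_write(struct percpu_rw_sem *p)
{
        int cpu;

        /* A writer must exclude readers on every CPU, in a fixed order. */
        for_each_possible_cpu(cpu)
                down_write(per_cpu_ptr(p->sem, cpu));
}

static void percpu_sem_up_write(struct percpu_rw_sem *p)
{
        int cpu;

        for_each_possible_cpu(cpu)
                up_write(per_cpu_ptr(p->sem, cpu));
}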
> I hope that Linux developers are not so obsessed with performance that
> they want a fast, crashing kernel rather than a slow, reliable kernel.
> Note that anything that changes a device's block size (for example,
> mounting a filesystem with a non-default block size) may trigger a crash
> if lvm or udev reads the device simultaneously; the crash really happened
> in a business environment.
I wasn't suggesting that we leave the problem unfixed (though I can see
how you might have gotten that idea; sorry for not being clearer). I was
merely suggesting that we should try to fix the problem in a way that does
not kill performance.
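
As a rough sketch of the scheme being discussed (not the actual patch; the
lock and both wrapper functions below are hypothetical names): the direct-IO
path holds a rw-semaphore for read while a request is in flight, and a
block-size change takes it for write, so it waits for outstanding direct I/O
to drain. A plain rw-semaphore is shown; the per-cpu variant sketched above
could be substituted to cut the read-side cost.

#include <linux/fs.h>
#include <linux/rwsem.h>

/* Hypothetical; in a real patch this would live in struct block_device. */
static DECLARE_RWSEM(example_block_size_sem);

/* Hypothetical read-side wrapper: hold the lock across direct-IO submission. */
static ssize_t example_protected_direct_io(struct block_device *bdev)
{
        ssize_t ret;

        down_read(&example_block_size_sem);
        /* ... build and submit the direct I/O against bdev here; the
         * block size cannot change while the semaphore is held ... */
        ret = 0;
        up_read(&example_block_size_sem);
        return ret;
}

/* Hypothetical write-side wrapper around the real set_blocksize() helper. */
static int example_protected_set_blocksize(struct block_device *bdev, int size)
{
        int ret;

        down_write(&example_block_size_sem);
        ret = set_blocksize(bdev, size);
        up_write(&example_block_size_sem);
        return ret;
}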
Cheers,
Jeff