On Thu, 19 Feb 2004 23:52:32, Nick Piggin wrote:
> Miquel van Smoorenburg wrote:
>> Note that this is not an issue of '2 processes writing to 1 file',
>> really. It's one process and pdflush writing the same dirty pages
>> of the same file.
>
> pdflush is a process though, that's all that matters.
I understand that when the two processes are unrelated, the patch as I
sent it will do the wrong thing.
But the thing is, you get this:
- "dd" process writes requests
- pdflush is triggered to write dirty pages
- too many pages are dirty so "dd" blocks as well to write synchronously
- "dd" process triggers "queue full" but gets marked as "batching" so
can continue (get_request)
- pdflush tries to submit one bio and gets blocked (get_request_wait)
- "dd" continues, but that one bio from pdflush remains stuck for a while
That's stupid: that one bio from pdflush should really be allowed
onto the queue, since "dd" is adding requests from the same source
to it anyway.
Perhaps writes from pdflush should be handled differently to prevent
this specific case?
Say, if pdflush adds request #128, don't mark it as batching, but
let it block. The next process will be the one marked as batching
and can continue. If pdflush tries to add a request > 128, allow it,
but _then_ block it.
Would something like that work? Would it be a good idea to never mark
a pdflush process as batching, or would that have a negative impact
on some workloads?
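
For concreteness, in terms of the model above, the change I have in
mind would look something like this. It is only a sketch: the real
code would test something like current_is_pdflush() (which I believe
2.6 provides), and "allocate, then block" would mean sleeping in
get_request_wait() right after the allocation.

enum outcome { ALLOC, ALLOC_THEN_BLOCK, BLOCK };

static enum outcome try_get_request(struct queue *q,
				    struct io_context *ioc,
				    bool is_pdflush)
{
	if (q->count + 1 >= NR_REQUESTS) {
		if (is_pdflush) {
			/* Admit pdflush's bio whether it is request
			 * #128 or beyond, but never mark pdflush as
			 * batching and leave q->full clear, so the
			 * next writer to hit the limit becomes the
			 * batcher and can continue. pdflush itself
			 * blocks right after this allocation. */
			q->count++;
			return ALLOC_THEN_BLOCK;
		}
		if (!q->full) {
			ioc->batching = true;
			q->full = true;
		} else if (!ioc->batching) {
			return BLOCK;
		}
	}
	q->count++;
	return ALLOC;
}

That way, in the sequence above, pdflush's one bio goes out together
with "dd"'s requests instead of sitting stuck until the batch ends.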