Re: RFC: dirty_ratio back to 40%
From: KOSAKI Motohiro
Date: Thu May 20 2010 - 21:12:10 EST
> > So, I'd prefer to restore the default rather than have both Red Hat and SUSE apply
> > exactly the same distro-specific patch, because we can easily imagine other users
> > hitting the same issue in the future.
>
> On desktop systems the low dirty limits help maintain interactive feel.
> Users expect applications that are saving data to be slow. They do not
> like it when every application in the system randomly comes to a halt
> because of one program stuffing data up to the dirty limit.
really?
Do you mean our per-task dirty limit doesn't work?
If so, I think we need to fix it. IOW, a sane per-task dirty limit seems like an issue
independent of the per-system dirty limit.
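For reference, the system-wide knobs being discussed here can be inspected from procfs; a minimal read-only sketch (standard Linux paths, actual values will vary per system and kernel version):

```shell
# Inspect the current system-wide dirty thresholds and how much memory
# is dirty right now (read-only; standard procfs paths on Linux).
cat /proc/sys/vm/dirty_ratio             # hard limit: writers get throttled here
cat /proc/sys/vm/dirty_background_ratio  # background writeback starts here
grep -E '^(Dirty|Writeback):' /proc/meminfo
```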
> The cause and effect for the system slowdown is clear when the dirty
> limit is low. "I saved data and now the system is slow until it is
> done." When the dirty page ratio is very high, the cause and effect is
> disconnected. "I was just web surfing and the system came to a halt."
>
> I think we should expect server admins to do more tuning than desktop
> users, so the default limits should stay low in my opinion.
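As a sketch of the kind of tuning a server admin might do (the 40% figure matches the old default under discussion; the file name and values below are illustrative, not a recommendation):

```
# /etc/sysctl.d/99-writeback.conf  (illustrative values)
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
```

Applied with `sysctl -p /etc/sysctl.d/99-writeback.conf`, or automatically at boot.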