On Sun, Jun 17, 2007 at 03:17:58PM +0200, Michal Piotrowski wrote:
> On 17/06/07, Adrian Bunk <bunk@xxxxxxxxx> wrote:
>...
>> Fine with me, but:
>>
>> There are not so simple cases like big infrastructure patches with
>> 20 other patches in the tree depending on it causing a regression, or
>> even worse, a big infrastructure patch exposing a latent old bug in some
>> completely different area of the kernel.
>
> It is a different case.
>
> "If the patch introduces a new regression"
>
> introduces != exposes an old bug
My remark was meant as a note that "this sentence can't handle all
regressions" (and for a user it doesn't matter whether a new
regression was introduced or an old regression exposed).
It could be that we simply agree on this one. ;-)
> Removal of 20 patches will be painful, but sometimes you need to
> "choose minor evil to prevent a greater one" [1].
>
>> And we should be aware that reverting is only a workaround for the real
>> problem which lies in our bug handling.
>...
And this is something I want to emphasize again.
How can we make any progress with the real problem and not only the
symptoms?
There's a lot of money in the Linux market now, and the kernel quality
problems might result in real support costs for companies like
IBM, SGI, Redhat or Novell (plus they harm the Linux image, which might
result in lower revenues).
If [1] this is true, it might even pay off for each of them to assign
X man-hours per month of experienced kernel developers' time to
upstream kernel bug handling?
This is just a wild thought and it might be nonsense - better
suggestions for solving our quality problems would be highly welcome...