On Thu, May 01, 2008 at 12:07:53PM -0700, david@xxxxxxx wrote:
> On Thu, 1 May 2008, Willy Tarreau wrote:
> I suspect that they would not have, and if I'm right the result of merging
> half as much wouldn't be twice as many releases, but rather approximately
> the same release schedule with more piling up for the next release.
No, this is exactly what *not* to do. Linus is right about the risk of
getting more stuff at once. If we merge fewer things, we *must* be able
to speed up the process. Half the patches to cross-check in half the
time should be easier than all the patches in the full time. The time to
fix a problem within N patches is O(N^2).
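A quick back-of-the-envelope check of that claim (a hypothetical cost model for illustration, not a measurement): if fixing the fallout of merging N patches costs on the order of N^2 because any pair of patches may interact, then two merge windows of N/2 patches each cost half as much in total as one window of N.

```python
# Hypothetical quadratic cost model for the O(N^2) claim above.
# Assumption (not from the source): fixing problems among N merged
# patches costs ~ N**2 units, since any pair of patches may interact.

def fix_cost(n_patches: int) -> int:
    """Quadratic cost model: every pair of patches may interact."""
    return n_patches ** 2

n = 1000                                   # patches in one big merge window
one_big_merge = fix_cost(n)                # 1,000,000 units
two_half_merges = 2 * fix_cost(n // 2)     # 2 * 250,000 = 500,000 units

# Under this model, splitting the merge in half halves the total work:
print(one_big_merge, two_half_merges)
```

The numbers themselves are arbitrary; the point is only that a superlinear fix cost makes two small merge windows cheaper in total than one large one.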
> Even individual git trees that do get a fair bit of testing (like
> networking, for example) run into odd and hard-to-debug problems when
> exposed to a wider set of hardware and loads. Having the networking
> changes go in every 4 months (with 4 months' worth of changes) instead of
> every 2 months (with 2 months' worth of changes) will just mean that there
> will be more problems in this area, and since they will be more
> concentrated in that area it will be harder to fix them all fast, as the
> same group of people is needed for all of them.
You're perfectly right, and that's exactly what I'm *not* proposing. BTW,
having two halves will also get more of the merge job done on the
developers' side, where testing is done before submission. So in the
end, we should also get *fewer* regressions from each submission.
> If several maintainers think that you are correct that doing a merge with
> far fewer changes will be a lot faster, they can test this in the real
> world by skipping one release: just send Linus a 'no changes this time'
> instead of a pull request. If you are right, the stable release will happen
> significantly faster, they can say 'I told you so', and in the next
> release they have a fair chance of convincing other maintainers to skip a
> release.
Again, this cannot work, because it would result in slowing them down,
and it's not what I'm proposing.