* Avi Kivity <avi@xxxxxxxxxx> wrote:
> Why is that?

It's very simple: because the contribution latencies and overhead
compound, almost inevitably.

The moment any change (be it as trivial as fixing a GUI detail or as
complex as a new feature) involves two or more packages, development speed
slows down to a crawl - while the complexity of the change might be very
low!
If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
...
Even with the best-run projects in existence it takes forever and is very
painful - and here i talk about first-hand experience over many years.
> If the maintainers of all packages are cooperative and responsive, then
> the patches will get accepted quickly. If they aren't, development will
> be slow. [...]

I'm afraid practice is different from the rosy ideal you paint there. Even
with assumed 'perfect projects' there's always random differences between
projects, causing doubled (or tripled) and compounded overhead:
- random differences in release schedules
- random differences in contribution guidelines
- random differences in coding style
> [...] It isn't any different from contributing to two unrelated kernel
> subsystems (which are in fact in different repositories until the next
> merge window).

You mention a perfect example: contributing to multiple kernel subsystems.
Even _that_ is very noticeably harder than contributing to a single
subsystem - due to the inevitable bureaucratic overhead, due to different
development trees, due to different merge criteria.
So you are underlining my point (perhaps without intending to): treating
closely related bits of technology as a single project is much better.
Obviously arch/x86/kvm/, virt/ and tools/kvm/ should live in a single
development repository (perhaps micro-differentiated by a few topical
branches), for exactly those reasons you mention.
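To make that concrete, here is a minimal sketch of what such a unified
tree could look like. The clone URL and the branch names below are purely
illustrative assumptions, not an existing convention:

  # Hypothetical single kvm.git tree (URL and branch names made up
  # for illustration):
  git clone git://git.example.org/kvm.git
  cd kvm

  # Topical branches micro-differentiate the areas, but share one repo:
  git branch
  #   kvm/core    - arch/x86/kvm/ and virt/ changes
  #   kvm/tool    - tools/kvm/ changes
  #   kvm/master  - integration branch merging the topic branches

  # A cross-cutting change (say, a new ioctl plus tool support) becomes
  # a single patch series against a single tree:
  git checkout -b new-ioctl kvm/master

The point of the sketch: the kernel side and the tool side of one logical
change travel together, through one maintainer flow and one merge
criterion, instead of through two repositories with different schedules.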