Re: [F.A.Q.] the advantages of a shared tool/kernel Git repository, tools/perf/ and tools/kvm/
From: Theodore Tso
Date: Tue Nov 08 2011 - 05:22:35 EST
On Nov 8, 2011, at 4:32 AM, Ingo Molnar wrote:
>
> No ifs and whens about it, these are the plain facts:
>
> - Better features, better ABIs: perf maintainers can enforce clean,
> functional and usable tooling support *before* committing to an
> ABI on the kernel side.
"We don't have to be careful about breaking interface compatibility while we are developing new features".
The flip side of this is that it's not obvious when an interface is stable and when it is still subject to change. That makes life much harder for any userspace code that doesn't live in the kernel tree. And I think we do agree that moving all of userspace into a single git tree makes no sense, right?
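To make that concrete: out-of-tree code needs an explicit, probeable contract, not a guess based on which commit happened to land when. Here's roughly the shape of that discipline; everything in this sketch (/dev/foo, the ioctl, the version numbers) is made up, it's just illustrating the idea:

/*
 * A minimal sketch of an explicit ABI contract, not any real driver:
 * the tool probes the kernel for an interface version instead of
 * inferring stability from git history.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define FOO_IOC_GET_API_VERSION _IOR('f', 0, int)
#define FOO_API_MIN 2	/* oldest kernel ABI this tool understands */
#define FOO_API_MAX 3	/* newest kernel ABI this tool understands */

int main(void)
{
	int fd = open("/dev/foo", O_RDONLY);
	if (fd < 0) {
		perror("open(/dev/foo)");
		return 1;
	}

	int version;
	if (ioctl(fd, FOO_IOC_GET_API_VERSION, &version) < 0) {
		perror("FOO_IOC_GET_API_VERSION");
		close(fd);
		return 1;
	}
	if (version < FOO_API_MIN || version > FOO_API_MAX) {
		fprintf(stderr, "unsupported kernel ABI version %d\n",
			version);
		close(fd);
		return 1;
	}
	printf("negotiated kernel ABI version %d\n", version);
	close(fd);
	return 0;
}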
> - We have a shared Git tree with unified, visible version control. I
> can see kernel feature commits followed by tooling support, in a
> single flow of related commits:
>
> perf probe: Update perf-probe document
> perf probe: Support --del option
> trace-kprobe: Support delete probe syntax
>
> With two separate Git repositories this kind of connection between
> the tool and the kernel is inevitably weakened or lost.
"We don't have to clearly document new interfaces between kernel and userspace, and instead rely on git commit order for people to figure out what's going on with some new interface"
> - Easier development, easier testing: if you work on a kernel
> feature and on matching tooling support then it's *much* easier to
> work in a single tree than working in two or more trees in
> parallel. I have worked on multi-tree features before, and barring
> special exceptions they are generally a big pain to develop.
I've developed in split-tree systems, and it's really not that hard. It does mean you have to be explicit about designing interfaces up front, and you have to have a good, robust way of negotiating which features are in the kernel and which features are supported by the userspace --- but if you don't do that, then good backwards and forwards compatibility between different versions of the tool simply doesn't exist.
So at the end of the day the question is whether you want to be able to (for example) update e2fsck to get a better ability to fix more file system corruptions, without needing to upgrade the kernel. If you want to be able to use a newer, better e2fsck with an older, enterprise kernel, then you have to use certain programming disciplines. That's where the work is, not in whether you maintain two git trees or a single git tree.
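The discipline I'm talking about looks roughly like the ext2/3/4 feature bitmask scheme: three masks in the superblock with well-defined semantics for unknown bits, so an older tool knows when it has to keep its hands off a newer filesystem, and vice versa. A trimmed-down sketch (the struct and constants are illustrative, not the real on-disk definitions):

/*
 * Simplified version of the ext2-style feature-flag discipline that
 * lets a new e2fsck safely handle filesystems touched by older or
 * newer kernels.
 */
#include <stdint.h>
#include <stdio.h>

struct superblock {
	uint32_t feature_compat;    /* unknown bits: safe to ignore */
	uint32_t feature_ro_compat; /* unknown bits: read-only at most */
	uint32_t feature_incompat;  /* unknown bits: refuse to touch */
};

#define SUPPORTED_INCOMPAT  0x0007 /* incompat bits this tool knows */
#define SUPPORTED_RO_COMPAT 0x0003 /* ro-compat bits this tool knows */

int check_features(const struct superblock *sb)
{
	if (sb->feature_incompat & ~SUPPORTED_INCOMPAT) {
		fprintf(stderr, "filesystem uses features this tool "
			"does not understand; refusing to run\n");
		return -1;
	}
	if (sb->feature_ro_compat & ~SUPPORTED_RO_COMPAT) {
		fprintf(stderr, "unknown ro-compat features; "
			"read-only access only\n");
		return 1;
	}
	return 0; /* fully supported */
}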
> - We are using and enforcing established quality control and coding
> principles of the kernel project. If we mess up then Linus pushes
> back on us at the last line of defense - and has pushed back on us
> in the past. I think many of the currently external kernel
> utilities could benefit from the resulting rise in quality.
> I've seen separate tool projects degrade into barely usable
> tinkerware - that I think cannot happen to perf, regardless of who
> maintains it in the future.
That's basically saying that if you don't have someone competent managing the git tree and providing quality assurance, life gets hard. Sure. But at the same time, does it scale to move all of userspace under one git tree and depend on Linus to push back?
I mean, it would have been nice to move all of GNOME 3 under the Linux kernel, so that Linus could have pushed back on behalf of all of us power users. But as much as many of us would have appreciated someone pushing back against the insanity that is the GNOME design process, is that really a good enough excuse to move all of GNOME 3 into the kernel source tree? :-)
> - Better debuggability: sometimes the combination of a perf
> change and a kernel change causes a breakage. I have bisected
> the shared tree a couple of times already, instead of having to
> bisect a (100,000 commits x 10,000 commits) combined space,
> which is much harder to debug …
What you are describing happens when someone hasn't been careful about their kernel/userspace interfaces.
If you have been rigorous with your interfaces, this isn't really an issue. When's the last time we had to do an NxM exhaustive test to find a broken syscall ABI between (for example) the kernel and MySQL?
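The usual discipline here is runtime probing with a clean fallback, so one tool binary works across kernel versions without any NxM testing matrix. A sketch, using accept4() (a real syscall, added in 2.6.28) as a stand-in for any newer kernel interface:

/*
 * Try the newer kernel interface first; fall back cleanly when the
 * running kernel predates it.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

int accept_cloexec(int sock)
{
	int fd = accept4(sock, NULL, NULL, SOCK_CLOEXEC);
	if (fd >= 0 || errno != ENOSYS)
		return fd;	/* new kernel: done (or a genuine error) */

	/*
	 * Old kernel: emulate with accept() + fcntl().  There is a
	 * small window before FD_CLOEXEC is set, which is exactly the
	 * race the new syscall exists to close --- the emulation is
	 * "good enough", not perfect.
	 */
	fd = accept(sock, NULL, NULL);
	if (fd < 0)
		return fd;
	if (fcntl(fd, F_SETFD, FD_CLOEXEC) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}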
> - Code reuse: we can and do share source code between the kernel and
> the tool where it makes sense. Both the tooling and the kernel
> side code improves from this. (Often explicit librarization makes
> little sense due to the additional maintenance overhead of a split
> library project, and the impossibly long latency before the kernel
> could rely on the ready existence of such a newly created library
> project.)
How much significant code can really be shared? Memory allocation is different between kernel and userspace code, how you do I/O is different, error reporting conventions are generally different, and so on. You might have some serialization and deserialization code in common, but (surprise!) that's generally part of your interface, which is hopefully relatively stable, especially once the tool and the interface have matured.
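What does get shared in practice is the description of the interface itself: on-disk or on-wire structure definitions with fixed-width, endian-explicit fields and no dependency on kernel or libc services, the way ext2_fs.h is shared between the kernel and e2fsprogs. A trimmed-down sketch (this struct is illustrative, not the real ext2 inode):

#include <stdint.h>

typedef uint16_t le16;	/* stored little-endian on disk */
typedef uint32_t le32;

struct sample_disk_inode {
	le16 i_mode;		/* file mode */
	le16 i_links_count;	/* hard link count */
	le32 i_size;		/* size in bytes */
	le32 i_blocks;		/* allocated block count */
	le32 i_mtime;		/* modification time */
};

/* Decode a little-endian field regardless of host byte order. */
static inline uint32_t le32_to_cpu(le32 v)
{
	const unsigned char *p = (const unsigned char *)&v;
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/*
 * Note what is absent: no kmalloc/malloc, no printk/printf, no I/O.
 * Each side supplies its own services around the shared definitions.
 */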
-- Ted