Re: [F.A.Q.] the advantages of a shared tool/kernel Git repository, tools/perf/ and tools/kvm/

From: Arnaldo Carvalho de Melo
Date: Tue Nov 08 2011 - 07:56:59 EST


On Tue, Nov 08, 2011 at 05:21:50AM -0500, Theodore Tso wrote:
>
> On Nov 8, 2011, at 4:32 AM, Ingo Molnar wrote:
> >
> > No ifs and whens about it, these are the plain facts:
> >
> > - Better features, better ABIs: perf maintainers can enforce clean,
> > functional and usable tooling support *before* committing to an
> > ABI on the kernel side.

> "We don't have to be careful about breaking interface compatibility
> while we are developing new features".

My normal working environment is an MRG PREEMPT_RT kernel (2.6.33.9,
test kernels based on 3.0+) running on enterprise distros while I
develop the userspace part.

So no, at least for me, I don't keep updating the kernel part while
developing userspace.

> The flip side of this is that it's not obvious when an interface is
> stable, and when it is still subject to change. It makes life much
> harder for any userspace code that doesn't live in the kernel. And I
> think we do agree that moving all of userspace into a single git tree
> makes no sense, right?

Right, but that is the extreme case as well, right?

> > - We have a shared Git tree with unified, visible version control. I
> > can see kernel feature commits followed by tooling support, in a
> > single flow of related commits:
> >
> > perf probe: Update perf-probe document
> > perf probe: Support --del option
> > trace-kprobe: Support delete probe syntax
> >
> > With two separate Git repositories this kind of connection between
> > the tool and the kernel is inevitably weakened or lost.

> "We don't have to clearly document new interfaces between kernel and
> userspace, and instead rely on git commit order for people to figure
> out what's going on with some new interface"

Indeed, documentation is lacking. Coming from a kernel standpoint, I
think I relied too much on the "documentation is source code" mantra
of the old days.

But I realize it's a necessity, and that regression testing is another
necessity as well.

I introduced 'perf test' for this latter need and rejoice every time
people submit new test cases, as Jiri and Han have done in the past;
it's just that we need more of both documentation and regression
testing.
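To make the 'perf test' point concrete, here is a rough, standalone
sketch (my own toy example, not code from the perf tree) of the kind of
sanity check it automates: open a software event via perf_event_open(),
burn some CPU time, and verify that the kernel actually counted
something:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

/* Thin wrapper, there is no glibc stub for this syscall. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	long long count = 0;
	volatile unsigned long i;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_SOFTWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_SW_TASK_CLOCK;
	attr.disabled = 1;

	fd = perf_event_open(&attr, 0, -1, -1, 0); /* this task, any CPU */
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (i = 0; i < 10000000; i++)	/* burn some CPU time */
		;
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	/* Regression check: the event must have counted something. */
	if (read(fd, &count, sizeof(count)) != sizeof(count) || count == 0) {
		fprintf(stderr, "FAILED: event did not count\n");
		return 1;
	}

	printf("Ok: counted %lld ns of task clock\n", count);
	close(fd);
	return 0;
}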

Unfortunately that work is not so sexy, and I have my hands full with
more than just perf :-\

> > - Easier development, easier testing: if you work on a kernel
> > feature and on matching tooling support then it's *much* easier to
> > work in a single tree than working in two or more trees in
> > parallel. I have worked on multi-tree features before, and except
> > special exceptions they are generally a big pain to develop.

> I've developed in the split tree systems, and it's really not that
> hard. It does mean you have to be explicit about designing interfaces
> up front, and then you have to have a good, robust way of negotiating
> what features are in the kernel, and what features are supported by the
> userspace --- but if you don't do that then having good backwards and
> forwards compatibility between different versions of the tool simply
> doesn't exist.

> So at the end of the day the question is whether you want to be able to
> (for example) update e2fsck to get better ability to fix more file
> system corruptions, without needing to upgrade the kernel. If you
> want to be able to use a newer, better e2fsck with an older,
> enterprise kernel, then you have use certain programming disciplines.
> That's where the work is, not in whether you have to maintain two git
> trees or a single git tree.

But that can be achieved with a single tree as well, or do you think
having a single tree makes it impossible to achieve? As I said, I
basically develop using the split model, at least when testing new
tools on older kernels.

People who use the tools while developing mostly the kernel, or both
the kernel and userspace perf components, test on the combined kernel +
perf sources.
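To illustrate what running newer tools on older kernels looks like in
practice, here is a hedged sketch of the feature negotiation pattern
(the precise_ip bit is just the example I picked): ask for the optional
feature first, fall back when the running kernel or PMU rejects it, and
rely on the attr.size / E2BIG handshake to tell the tool which struct
layout the kernel actually understands:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>

static int open_cycles_event(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);	/* the tool's idea of the ABI size */
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.precise_ip = 2;		/* optional: ask for precise samples */

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0 && (errno == EOPNOTSUPP || errno == EINVAL)) {
		/* Older kernel or PMU without the feature: retry without. */
		attr.precise_ip = 0;
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	}

	/*
	 * If the tool had set fields this kernel does not know about,
	 * the open would fail with E2BIG and the kernel would write the
	 * attr size it supports back into attr.size, so the tool could
	 * retry with the older layout.
	 */
	if (fd < 0 && errno == E2BIG)
		fprintf(stderr, "kernel supports attr size %u\n", attr.size);

	return fd;
}

int main(void)
{
	int fd = open_cycles_event();

	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	printf("opened cycles event, fd %d\n", fd);
	close(fd);
	return 0;
}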

> > - We are using and enforcing established quality control and coding
> > principles of the kernel project. If we mess up then Linus pushes
> > back on us at the last line of defense - and has pushed back on us
> > in the past. I think many of the currently external kernel
> > utilities could benefit from the resulting rise in quality.
> > I've seen separate tool projects degrade into barely usable
> > tinkerware - that i think cannot happen to perf, regardless of who
> > maintains it in the future.

> That's basically saying that if you don't have someone competent
> managing the git tree and providing quality assurance, life gets hard.
> Sure. But at the same time, does it scale to move all of userspace
> under one git tree and depending on Linus to push back?

8 or 80 again (all or nothing) :-\

> I mean, it would have been nice to move all of GNOME 3 under the Linux
> kernel, so Linus could have pushed back on behalf of all of us power

Sheesh, all of GNOME? How closely related to and used in kernel
development is GNOME? GNOME 3?

> users, but as much as many of us would have appreciated someone being
> able to push back against the insanity which is the GNOME design
> process, is that really a good enough excuse to move all of GNOME 3
> into the kernel source tree? :-)

No, but again, you're taking it to the extreme.

- Arnaldo