Re: [GIT PULL] fscache: I/O API modernisation and netfs helper library

From: David Howells
Date: Sun Feb 14 2021 - 19:32:44 EST


Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> But no, it's not a replacement for actual code review after the fact.
>
> If you think email has too long latency for review, and can't use
> public mailing lists and cc the people who are maintainers, then I
> simply don't want your patches.

I think we were talking at cross-purposes over the term "development" here. I
was referring to the discussion of how the implementation should be done and to
working closely with colleagues - both inside and outside Red Hat - to get
things working, not specifically to the public review side of things. It's just
that I don't have a complete record of the how-to-implement-it,
how-to-get-various-bits-working-together and why-is-it-not-working
discussions.

Anyway, I have posted my fscache modernisation patches multiple times for
public review, I have tried to involve the wider community in aspects of the
development on public mailing lists and I have been including the maintainers
on to/cc.

I've posted the fuller patchset for public review a number of times:

4th May 2020:
https://lore.kernel.org/linux-fsdevel/158861203563.340223.7585359869938129395.stgit@xxxxxxxxxxxxxxxxxxxxxx/

13th Jul (split into three subsets):
https://lore.kernel.org/linux-fsdevel/159465766378.1376105.11619976251039287525.stgit@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/159465784033.1376674.18106463693989811037.stgit@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/159465821598.1377938.2046362270225008168.stgit@xxxxxxxxxxxxxxxxxxxxxx/

20th Nov:
https://lore.kernel.org/linux-fsdevel/160588455242.3465195.3214733858273019178.stgit@xxxxxxxxxxxxxxxxxxxxxx/

I then cut it down and posted that publicly a couple of times:

20th Jan:
https://lore.kernel.org/linux-fsdevel/161118128472.1232039.11746799833066425131.stgit@xxxxxxxxxxxxxxxxxxxxxx/

25th Jan:
https://lore.kernel.org/linux-fsdevel/161161025063.2537118.2009249444682241405.stgit@xxxxxxxxxxxxxxxxxxxxxx/

I let you know what was coming here:
https://lore.kernel.org/linux-fsdevel/447452.1596109876@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/2522190.1612544534@xxxxxxxxxxxxxxxxxxxxxx/

to try to find out whether you were going to have any objections to the
design in advance, rather than at the last minute.

I've apprised people of what I was up to:
https://lore.kernel.org/lkml/24942.1573667720@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/2758811.1610621106@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/1441311.1598547738@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/160655.1611012999@xxxxxxxxxxxxxxxxxxxxxx/

Asked for consultation on parts of what I wanted to do:
https://lore.kernel.org/linux-fsdevel/3326.1579019665@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/4467.1579020509@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/3577430.1579705075@xxxxxxxxxxxxxxxxxxxxxx/

Asked someone who is actually using fscache in production to test the rewrite:
https://listman.redhat.com/archives/linux-cachefs/2020-December/msg00000.html

I've posted partial patches to try and help 9p and cifs along:
https://lore.kernel.org/linux-fsdevel/1514086.1605697347@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-cifs/1794123.1605713481@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/241017.1612263863@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-cifs/270998.1612265397@xxxxxxxxxxxxxxxxxxxxxx/

(Jeff has been handling Ceph and Dave NFS).

Proposed conference topics related to this:
https://lore.kernel.org/linux-fsdevel/9608.1575900019@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/14196.1575902815@xxxxxxxxxxxxxxxxxxxxxx/
https://lore.kernel.org/linux-fsdevel/364531.1579265357@xxxxxxxxxxxxxxxxxxxxxx/

though the lockdown put paid to that :-(

Willy has discussed it too:
https://lore.kernel.org/linux-fsdevel/20200826193116.GU17456@xxxxxxxxxxxxxxxxxxxx/

David