Re: Very slow clang kernel config ..

From: Theodore Ts'o
Date: Tue May 04 2021 - 20:55:54 EST


On Tue, May 04, 2021 at 07:04:56PM -0400, Greg Stark wrote:
> On Mon, 3 May 2021 at 10:39, Theodore Ts'o <tytso@xxxxxxx> wrote:
> >
> > That was because memory was *incredibly* restrictive in those days.
> > My first Linux server had one gig of memory, and so shared libraries
> > provided a huge performance boost --- because otherwise systems would
> > be swapping or paging their brains out.
>
> (I assume you mean 1 megabyte?)
> I have 16G and the way modern programs are written I'm still having
> trouble avoiding swap thrashing...

I corrected myself in a follow-on message; I had 16 megabytes of
memory, which was generous at the time. But it was still restrictive
enough that it made sense to have shared libraries for the C library, X
Windows, etc.

> This is always a foolish argument though. Regardless of the amount of
> resources available, we always want to use them as efficiently as
> possible. The question is not whether we have more memory today than
> before, but whether the time and power saved in reducing memory usage
> (and memory bandwidth usage) is more or less than other resource costs
> being traded off and whether that balance has changed.

It's always about engineering tradeoffs. We're always trading off
available CPU, memory, and storage device speeds --- and also
programmer time and complexity. For example, C++ and stable ABIs
really don't go well together, so if you are using a large number of
C++ libraries, maintaining stable ABIs becomes ***much*** more
difficult. This was well understood long ago --- an Ottawa Linux
Symposium presentation discussed it in the context of KDE some two
decades ago.
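
To make that concrete, here's a minimal sketch of the classic C++
trap (the Widget class, the V2 macro, and the file name are invented
for illustration, not taken from any real library). Compile an
application against the v1 layout, rebuild only the library with
-DV2=1, and every object size and member offset the application baked
in at compile time is now wrong:

// widget_abi.cpp: build and run twice, once with plain
// "g++ widget_abi.cpp" and once with "g++ -DV2=1 widget_abi.cpp",
// then compare the output.
#include <cstdio>

#ifndef V2
#define V2 0
#endif

class Widget {
public:
    // Inline accessors get compiled *into* the application, so the
    // application hard-wires these member offsets.
    int area() const { return width_ * height_; }
private:
#if V2
    int depth_ = 1;   // v2 adds a member: sizeof(Widget) grows and
                      // every member below it shifts
#endif
    int width_ = 3;
    int height_ = 4;
};

int main() {
    std::printf("sizeof(Widget) = %zu\n", sizeof(Widget));
    std::printf("area           = %d\n", Widget().area());
    return 0;
}

Hiding the members behind a pimpl pointer or keeping them out of
public headers avoids this, but that costs programmer time and an
extra indirection, which is exactly the tradeoff in question.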

I'll also note that technology can play a huge role here. Debian,
for example, is now much more capable of rebuilding all packages from
source with autobuilders. In addition, most desktops have easy
access to high speed network links, and are set up to auto-update
packages. In that case, the argument that distributions must use
shared libraries, because otherwise it's too hard to rebuild all of
the binaries statically linked against a library with a security fix,
becomes much less compelling. It should be pretty simple to set up a
system where, after a library gets a security update, the
distribution automatically figures out which packages need to be
rebuilt, and rebuilds them all.
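
As a rough sketch of the shape such a system could take (assuming a
Debian-style host where the build-rdeps tool from devscripts is
available to walk the Sources indexes; nothing here is existing
distro infrastructure, and libfoo-dev below is a made-up package
name):

// rebuild_scan.cpp: list the source packages that would need an
// automatic rebuild after a library's -dev package gets a security
// fix.  Usage: ./rebuild_scan libfoo-dev
#include <cstdio>
#include <string>

int main(int argc, char **argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <dev-package>\n", argv[0]);
        return 1;
    }
    // build-rdeps (from devscripts) prints the source packages whose
    // Build-Depends include the given package.
    std::string cmd = std::string("build-rdeps ") + argv[1];
    FILE *pipe = popen(cmd.c_str(), "r");
    if (!pipe) {
        std::perror("popen");
        return 1;
    }
    char line[512];
    while (std::fgets(line, sizeof line, pipe)) {
        // Each source package listed here would be queued for an
        // autobuilder rebuild once the fixed library hits the
        // archive.
        std::fputs(line, stdout);
    }
    return pclose(pipe) == 0 ? 0 : 1;
}

The rebuilds themselves would be handed to the existing autobuilder
network; the point is that the archive metadata already answers the
hard question of what needs to be rebuilt.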

> > However, these days, many if not most developers aren't capable of the
> > discipline needed to maintain the ABI stability required for shared
> > libraries to work well.
>
> I would argue you have cause and effect reversed here. The reason
> developers don't understand ABI (or even API) compatibility is
> *because* they're used to people just statically linking (or
> vendoring). If people pushed back, the world would be a better
> place.

I'd argue it's just that many upstream developers don't *care*.
The incentives of upstream developers and distribution maintainers
are quite different. ABI compatibility doesn't bring much benefit to
upstream developers, and when you have a separation of concerns
between package maintenance and upstream development, that's pretty
inevitable.

I wear both hats for e2fsprogs, as the upstream maintainer as well as
the Debian maintainer for that package, and I can definitely see the
difference between the points of view of those two roles.

Cheers,

- Ted