Re: [PATCH 00/14] alpha: cleanups for 6.10

From: Maciej W. Rozycki
Date: Mon Jun 03 2024 - 07:09:43 EST


On Thu, 30 May 2024, Linus Torvalds wrote:

> > > The 21064 actually did atomicity with an external pin on the bus, the
> > > same way people used to do before caches even existed.
> >
> > Umm, 8086's LOCK#, anyone?
>
> Well, yes and no.
>
> So yes, exactly like 8086 did before having caches.

Well, I wrote 8086 specifically, not x86.

> But no, not like the alpha contemporary PPro that did have caches. The
> PPro already did locked cycles in the caches.

But the 21064 does predate the PPro by several years: Feb 1992 vs Nov
1995, so surely the Intel folks had extra time to resolve this stuff
properly.

Conversely the R4000 came about in Oct 1991, so before the 21064, but
only slightly and not by as much as I remembered (I thought the 21064 was
more like 1993), so it seems DEC wouldn't have had enough time after all
to figure out what SGI did (patents notwithstanding). Surely the R4000MC
cache coherency protocol was complex for the silicon technology of the
time, but it's just MOESI in modern terms AFAICT, and LL/SC is handled
there (and is in fact undefined for uncached accesses).

I'm not sure what else was out there at the time, but going back to x86,
the contemporary part was the i486 in its original write-through cache
version, which, if memory serves, was no better in this respect (the
"write-back enhanced" DX2/DX4 models with a proper MESI cache protocol
only came out much later, after the Pentium, which they borrowed it
from).

> So I really feel the 21064 was broken.
>
> It's probably related to the whole cache coherency being designed to
> be external to the built-in caches - or even the Bcache. The caches
> basically are write-through, and the weak memory ordering was designed
> for allowing this horrible model.

In retrospect perhaps it wasn't the best design, but they did learn from
their mistakes.

> > > In fact, it's worse than "not thread safe". It's not even safe on UP
> > > with interrupts, or even signals in user space.
> >
> > Ouch, I find it a surprising oversight.
>
> The sad part is that it doesn't seem to have been an oversight. It
> really was broken-as-designed.
>
> Basically, the CPU was designed for single-threaded Spec benchmarks
> and absolutely nothing else. Classic RISC where you recompile to fix
> problems like the atomicity thing - "just use a 32-bit sig_atomic_t
> and you're fine")

It is not OK however, as you correctly point out, for plain ordinary
non-atomic stuff. Point me at any document claiming that a pair of
threads, each poking at the even and the odd byte elements respectively
of a shared vector, is not allowed. The caches may not enjoy it, but
AFAIK there's nothing saying this is UB or whatever.
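
To illustrate the hazard (a hedged sketch of my own; the function name is
made up, and this is merely a C rendition of the LDQ_U/MSKBL/INSBL/STQ_U
sequence a pre-BWX compiler has to emit for a plain byte store):

#include <stdint.h>

/* Illustrative only: what "*p = v" on byte data effectively
 * compiles to without BWX, rendered in C.  */
static void emulated_byte_store(uint8_t *p, uint8_t v)
{
	uint64_t *q = (uint64_t *)((uintptr_t)p & ~(uintptr_t)7);
	unsigned int shift = ((uintptr_t)p & 7) * 8;
	uint64_t word;

	word = *q;			/* LDQ_U: load aligned quadword */
	word &= ~((uint64_t)0xff << shift); /* MSKBL: clear byte lane */
	word |= (uint64_t)v << shift;	/* INSBL: insert the new byte */
	*q = word;			/* STQ_U: non-atomic write-back */
}

Have one thread do this for the even bytes and another for the odd bytes
of the same quadword and each can clobber the other's freshly stored byte
with the stale copy it loaded, even though neither thread ever touches
the other's data as far as the language is concerned.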

> The original alpha architecture handbook makes a big deal of how
> clever the lack of byte and word operations is. I also remember

I've seen that; it was dropped in v3 of the handbook with the addition of
the BWX extension.

> reading an article by Dick Sites - one of the main designers - talking
> a lot about how the lack of byte operations is great, and encourages
> vectorizing byte accesses and doing string operations in whole words.

Yeah, the software folks at DEC must have been delighted porting all the
VAX VMS software. But perhaps this was the last attempt to try something
different from the CPU architecture standards established back in the
1970s (by the VAX among others) that make current designs so similar to
one another.

Anyway, back to my point. A feasible solution has been found that is
non-intrusive for Linux and low-overhead for GCC. I can expedite the
implementation and I'll see if I can regression-test it too, but I may
have to rely on other people to complete it after all, as I wasn't
prepared for this effort in light of certain issues I have recently
suffered in my lab.

Is that going to be enough to bring the platform bits back?

FAOD, that is with all the hacks now so eagerly being removed happily
left in the dustbin where they belong, which I wholeheartedly agree with:
we shouldn't be suffering from the design mistakes of systems that are no
longer relevant, but I fail to see why we should disallow their use where
the burden is confined or lies plainly elsewhere.

For example, we continue supporting old UP MIPS platforms that predate
LL/SC by just trapping and emulating those instructions. Surely it sucks
performance-wise, possibly costing hundreds of cycles per operation, but
it works and the burden is confined to the exception handler, so it's not
a big deal.
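
For the record, a minimal sketch of what such emulation amounts to (my
own simplified rendition, loosely modeled on the Linux/MIPS
simulate_llsc() handler and assuming a UP system; the real handler also
has to decode the trapping instruction and write the result back to the
right register):

static unsigned long ll_addr;	/* address covered by the reservation */
static int ll_bit;		/* reservation still valid? */

/* Called from the reserved-instruction trap for an LL.  */
static unsigned long emulate_ll(unsigned long *addr)
{
	ll_addr = (unsigned long)addr;
	ll_bit = 1;			/* establish the reservation */
	return *addr;			/* result goes to the rt register */
}

/* Called from the reserved-instruction trap for an SC.  */
static int emulate_sc(unsigned long *addr, unsigned long val)
{
	if (!ll_bit || ll_addr != (unsigned long)addr)
		return 0;		/* reservation lost: SC fails */
	*addr = val;
	ll_bit = 0;
	return 1;			/* SC succeeds */
}

/* The context-switch path clears ll_bit, so any LL/SC pair that
 * gets preempted in the middle sees its SC fail and the sequence
 * retried, which is all the atomicity a UP system needs.  */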

Maciej