Re: C aggregate passing (Rust kernel policy)

From: H. Peter Anvin
Date: Sat Feb 22 2025 - 18:51:34 EST


On February 22, 2025 1:22:08 PM PST, Kent Overstreet <kent.overstreet@xxxxxxxxx> wrote:
>On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
>> VLIW and OoO might seem orthogonal, but they aren't, because they are
>> trying to solve the same problem. Combining them either means the OoO
>> engine can't do a very good job because of false dependencies (if you
>> are scheduling molecules), or you have to break the instructions down
>> into atoms, at which point it is just an (often quite inefficient) RISC
>> encoding. In short, VLIW *might* make sense when you are statically
>> scheduling a known pipeline, but it is basically a dead end for
>> evolution – so unless you can JIT your code for each new chip
>> generation...
>
>JITing for each chip generation would be a part of any serious new VLIW
>effort. It's plenty doable in the open source world and the gains are
>too big to ignore.
>
>> But OoO still is more powerful, because it can do *dynamic*
>> scheduling. A cache miss doesn't necessarily mean that you have to
>> stop the entire machine, for example.
>
>Power-hungry and prone to information leaks, though.
>

I think I know a thing or two about JITting for VLIW... and so does someone else in this thread ;)