Re: Should Linux set the new constant-time mode CPU flags?

From: Ard Biesheuvel
Date: Wed Oct 26 2022 - 13:01:30 EST


On Thu, 15 Sept 2022 at 19:52, Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> On Tue, Aug 30, 2022 at 07:25:29AM -0700, Dave Hansen wrote:
> > On 8/29/22 09:39, Jason A. Donenfeld wrote:
> > > On Thu, Aug 25, 2022 at 11:15:58PM +0000, Eric Biggers wrote:
> > >> I'm wondering if people are aware of this issue, and whether anyone has any
> > >> thoughts on whether/where the kernel should be setting these new CPU flags.
> > >> There don't appear to have been any prior discussions about this. (Thanks to
> > > Maybe it should be set unconditionally now, until we figure out how to
> > > make it more granular.
> >
> > Personally, I'm in this camp as well. Let's be safe and set it by
> > default. There's also this tidbit in the Intel docs (and chopping out a
> > bunch of the noise):
> >
> > (On) processors based on microarchitectures before Ice Lake ...
> > the instructions listed here operate as if DOITM is enabled.
> >
> > IOW, setting DOITM=1 isn't going back to the stone age. At worst, I'd
> > guess that you're giving up some optimization that only shows up in very
> > recent CPUs in the first place.
> >
> > If folks want DOITM=0 on their snazzy new CPUs, then they can come with
> > performance data to demonstrate the gain they'll get from adding kernel
> > code to get DOITM=0. There are a range of ways we could handle it, all
> > the way from adding a command-line parameter to per-task management.
> >
> > Anybody disagree?
>
> It's not my preferred option for arm64 but I admit the same reasoning
> could equally apply to us. If some existing crypto libraries rely on
> data-independent timing on current CPUs but newer ones (with the DIT
> feature) come up with more aggressive, data-dependent optimisations,
> they may be caught off guard. That said, the ARM architecture spec
> never promised any particular timing; that's a micro-architecture
> detail, and not all implementations are done by ARM Ltd., so I can't
> really tell what's out there.
>
> So I guess knobs for finer-grained control would do: at least a sysctl
> (or cmdline option) to turn it on/off globally, and maybe a prctl() for
> user space. We don't necessarily need this on arm64, but if x86 adds
> one, we might as well wire it up.
>
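On the x86 side, the global "set it everywhere" default that Dave
describes looks fairly trivial. To be clear about assumptions: the MSR
index (0x1b01) and the DOITM bit position (bit 0) below come from my
reading of the Intel documentation, not from anything already in the
tree, and the enumeration check is omitted. A rough sketch:

#include <linux/bits.h>
#include <linux/smp.h>
#include <linux/types.h>
#include <asm/msr.h>

/* Assumed values, taken from the Intel docs rather than existing headers. */
#define MSR_IA32_UARCH_MISC_CTL         0x00001b01
#define UARCH_MISC_CTL_DOITM            BIT(0)

static void doitm_enable(void *unused)
{
        u64 val;

        /* Set the DOITM bit, leaving the rest of the MSR untouched. */
        rdmsrl(MSR_IA32_UARCH_MISC_CTL, val);
        wrmsrl(MSR_IA32_UARCH_MISC_CTL, val | UARCH_MISC_CTL_DOITM);
}

/* e.g. on_each_cpu(doitm_enable, NULL, 1), plus the hotplug/resume paths */

So the interesting part is the policy (cmdline/sysctl/prctl), not the
MSR write itself.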

With all the effort spent on plugging timing leaks in the kernel over
the past couple of years, not enabling this at EL1 seems silly, no?
Why would we ever permit privileged code to exhibit data-dependent
timing variances?
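
To put it concretely, for the kernel itself this amounts to setting
PSTATE.DIT on the exception entry paths; the exception return restores
whatever user space had, since DIT is part of the SPSR. A rough sketch,
not a patch - the helper name is made up, and it would obviously have
to be gated on the CPU actually implementing FEAT_DIT:

/*
 * Sketch only: turn on PSTATE.DIT for the kernel. DIT sits in bit 24
 * of the DIT special register; assemblers without ARMv8.4 support may
 * need the raw s3_3_c4_c2_5 encoding instead of the "dit" name.
 */
static inline void kernel_enable_dit(void)
{
        asm volatile("msr dit, %0" :: "r" (1UL << 24));
}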

As for a prctl() for user space - wouldn't it make more sense to
enable this by default, and add a hwcap so user space can clear DIT
directly if it feels the need to do so?
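
From the user space side, that could look something like the sketch
below (assuming the hwcap in question is the existing HWCAP_DIT bit;
the set_dit() helper is made up):

#include <sys/auxv.h>

#ifndef HWCAP_DIT
#define HWCAP_DIT       (1UL << 24)
#endif

/*
 * Sketch only: a crypto library flipping PSTATE.DIT for itself.
 * The "dit" register name needs a reasonably recent assembler and
 * -march=armv8.4-a (otherwise spell it s3_3_c4_c2_5); DIT is bit 24
 * of the register view.
 */
static void set_dit(int on)
{
        if (!(getauxval(AT_HWCAP) & HWCAP_DIT))
                return;         /* FEAT_DIT not implemented */

        asm volatile("msr dit, %0" :: "r" (on ? (1UL << 24) : 0UL));
}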