Re: [PATCH v7 12/12] crypto: x86/aes-kl - Implement the AES-XTS algorithm

From: Chang S. Bae
Date: Tue May 30 2023 - 16:50:13 EST


On 5/26/2023 12:23 AM, Eric Biggers wrote:
> On Wed, May 24, 2023 at 09:57:17AM -0700, Chang S. Bae wrote:
>> == API Limitation ==
>>
>> The setkey() function transforms an AES key into a handle. But, an
>> extended key is a usual outcome of setkey() in other AES cipher
>> implementations. For this reason, a setkey() failure does not fall
>> back to the other. So, expose AES-KL methods via synchronous
>> interfaces only.
>
> I don't understand what this paragraph is trying to say.

Looking back, this text was added in response to this particular comment:

> This basically implies that we cannot expose the cipher interface at
> all, and so AES-KL can only be used by callers that use the
> asynchronous interface, which rules out 802.11, s/w kTLS, macsec and
> kerberos.

https://lore.kernel.org/lkml/CAMj1kXGa4f21eH0mdxd1pQsZMUjUr1Btq+Dgw-gC=O-yYft7xw@xxxxxxxxxxxxxx/

Then I realized that, at that point, the dm-crypt use case had not yet been clearly spelled out.

The text seems to have been carried over across versions. But the series now exposes XTS only, so the paragraph has become less relevant and, I guess, confusing.

I think it can go away, now that the intended usage is stated clearly.
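For the record, what the paragraph was getting at, sketched in code; __aeskl_setkey() here is an illustrative stand-in for the ENCODEKEY128/256 path, not the exact patch code:

	/*
	 * Sketch: unlike other AES implementations, where setkey()
	 * leaves an expanded key schedule in memory, AES-KL setkey()
	 * converts the key into a handle. If the wrapping key is gone,
	 * no handle can be created, and there is no other
	 * implementation to transparently fall back to at that point.
	 */
	static int aeskl_setkey(struct crypto_aes_ctx *ctx, const u8 *key,
				unsigned int keylen)
	{
		if (!valid_keylocker())
			return -ENODEV;	/* no fallback possible here */

		return __aeskl_setkey(ctx, key, keylen);
	}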


>> +/*
>> + * The below wrappers for the encryption/decryption functions
>> + * incorporate the feature availability check:
>> + *
>> + * In the rare event of hardware failure, the wrapping key can be lost
>> + * after wake-up from a deep sleep state. Then, this check helps to
>> + * avoid any subsequent misuse with populating a proper error code.
>> + */
>> +
>> +static inline int aeskl_enc(const void *ctx, u8 *out, const u8 *in)
>> +{
>> +	if (!valid_keylocker())
>> +		return -ENODEV;
>> +
>> +	return __aeskl_enc(ctx, out, in);
>> +}
>
> Is it not sufficient for the valid_keylocker() check to occur at the top level
> (in xts_encrypt() and xts_decrypt()), which would seem to be a better place to
> do it? Is this because valid_keylocker() needs to be checked in every
> kernel_fpu_begin() section separately, to avoid a race condition? If that's
> indeed the reason, can you explain that in the comment?

Maybe something like this:

/*
 * In the event of hardware failure, the wrapping key can be lost
 * from a sleep state. Then, the feature is not usable anymore. This
 * feature state can be checked via valid_keylocker().
 *
 * Such disabling can happen anywhere preemptible, outside of
 * kernel_fpu_begin()/end(). So, to avoid the race condition, check
 * the feature availability on every use in the wrappers below.
 */
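And here is a rough sketch of the call site this protects; the skcipher-walk details are simplified and illustrative, not the exact patch code:

	/*
	 * Sketch only: the wrapping key can be invalidated at any
	 * preemptible point. A check at the top of xts_encrypt() could
	 * pass and then go stale before the AES-KL instructions run.
	 * Preemption is off inside kernel_fpu_begin()/end(), so a check
	 * in the same section (as the aeskl_enc() wrapper does) has no
	 * such window.
	 */
	kernel_fpu_begin();
	err = aeskl_enc(crypt_ctx, walk.dst.virt.addr, walk.src.virt.addr);
	kernel_fpu_end();
	if (err)
		return err;	/* -ENODEV if the wrapping key was lost */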


>> +static inline int xts_keylen(struct skcipher_request *req, u32 *keylen)
>> +{
>> +	struct aes_xts_ctx *ctx = aes_xts_ctx(crypto_skcipher_reqtfm(req));
>> +
>> +	if (ctx->crypt_ctx.key_length != ctx->tweak_ctx.key_length)
>> +		return -EINVAL;
>> +
>> +	*keylen = ctx->crypt_ctx.key_length;
>> +	return 0;
>> +}
>
> This is odd. Why would the key lengths be different here?

I thought it was logical to do such a sanity check. But, in practice, the two lengths are always the same.

It also looks like this crypto path is treated as performance-critical, so the redundant check should just go away.
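For reference, the reason they can never differ: the XTS setkey path splits the single input key in half and expands both halves with the same length. A rough sketch, with aeskl_setkey() standing in for the real per-half key expansion:

	/* Sketch: both sub-keys are derived from one input key. */
	static int xts_setkey_sketch(struct crypto_skcipher *tfm,
				     const u8 *key, unsigned int keylen)
	{
		struct aes_xts_ctx *ctx = aes_xts_ctx(tfm);
		int err;

		err = xts_verify_key(tfm, key, keylen);
		if (err)
			return err;

		keylen /= 2;

		/* Both contexts get the same halved length ... */
		err = aeskl_setkey(&ctx->crypt_ctx, key, keylen);
		if (err)
			return err;

		/* ... so the two key_length fields always match. */
		return aeskl_setkey(&ctx->tweak_ctx, key + keylen, keylen);
	}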


>> +	err = simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
>> +					     aeskl_simd_skciphers);
>> +	if (err)
>> +		return err;
>> +
>> +	return 0;
>
> This can be simplified to:
>
>	return simd_register_skciphers_compat(aeskl_skciphers,
>					      ARRAY_SIZE(aeskl_skciphers),
>					      aeskl_simd_skciphers);

Oh, obviously!

Thanks,
Chang