[PATCH v2 4/4] crypto/pcrypt: Do not use isolated CPUs for callback
From: Leonardo Bras
Date: Thu Oct 13 2022 - 14:50:13 EST
Currently pcrypt_aead_init_tfm() picks the callback CPU (ctx->cb_cpu)
from any online CPU. Later, padata_reorder() will queue_work_on() the
chosen cb_cpu.
This is undesirable if the chosen cb_cpu is isolated (e.g. via the
isolcpus=... or nohz_full=... kernel parameters), since the queued work
will interfere with the workload running on the isolated CPU.
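For example (illustrative values, not part of this patch), a system that
reserves CPUs 2-5 for a latency-sensitive workload might boot with:

    isolcpus=2-5 nohz_full=2-5

With the current selection logic, pcrypt can still pick one of those CPUs
as cb_cpu and queue callback work there.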
Make sure isolated CPUs are not used for pcrypt callbacks.
Signed-off-by: Leonardo Bras <leobras@xxxxxxxxxx>
---
crypto/pcrypt.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
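For reviewers, the resulting selection logic in pcrypt_aead_init_tfm() with
this patch applied is sketched below (comments added for illustration;
surrounding context and error handling omitted):

	/* Consider only CPUs that are both online and kept for
	 * housekeeping (i.e. not isolated) when picking the callback CPU.
	 */
	const cpumask_t *hk_wq = housekeeping_cpumask(HK_TYPE_WQ);

	/* Round-robin transforms across the eligible CPUs. */
	cpu_index = (unsigned int)atomic_inc_return(&ictx->tfm_count) %
		    cpumask_weight_and(hk_wq, cpu_online_mask);

	/* Advance to the cpu_index-th CPU in the intersection of the
	 * housekeeping and online masks.
	 */
	ctx->cb_cpu = cpumask_first_and(hk_wq, cpu_online_mask);
	for (cpu = 0; cpu < cpu_index; cpu++)
		ctx->cb_cpu = cpumask_next_and(ctx->cb_cpu, hk_wq,
					       cpu_online_mask);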
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index 9d10b846ccf73..0162629a03957 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -16,6 +16,7 @@
 #include <linux/kobject.h>
 #include <linux/cpu.h>
 #include <crypto/pcrypt.h>
+#include <linux/sched/isolation.h>
 
 static struct padata_instance *pencrypt;
 static struct padata_instance *pdecrypt;
@@ -175,13 +176,15 @@ static int pcrypt_aead_init_tfm(struct crypto_aead *tfm)
 	struct pcrypt_instance_ctx *ictx = aead_instance_ctx(inst);
 	struct pcrypt_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct crypto_aead *cipher;
+	const cpumask_t *hk_wq = housekeeping_cpumask(HK_TYPE_WQ);
 
 	cpu_index = (unsigned int)atomic_inc_return(&ictx->tfm_count) %
-		    cpumask_weight(cpu_online_mask);
+		    cpumask_weight_and(hk_wq, cpu_online_mask);
 
-	ctx->cb_cpu = cpumask_first(cpu_online_mask);
+	ctx->cb_cpu = cpumask_first_and(hk_wq, cpu_online_mask);
 	for (cpu = 0; cpu < cpu_index; cpu++)
-		ctx->cb_cpu = cpumask_next(ctx->cb_cpu, cpu_online_mask);
+		ctx->cb_cpu = cpumask_next_and(ctx->cb_cpu, hk_wq,
+					       cpu_online_mask);
 
 	cipher = crypto_spawn_aead(&ictx->spawn);
 
--
2.38.0