Re: [PATCH net-next 11/12] net: mvpp2: handle cases where more CPUs are available than s/w threads
From: Antoine Tenart
Date: Tue Oct 30 2018 - 09:54:08 EST
Marc,
On Mon, Oct 29, 2018 at 05:35:52PM +0000, Marc Zyngier wrote:
> On 19/09/18 10:27, Antoine Tenart wrote:
>
> Really??? How on Earth are you testing this code?
Thank you for the feedback.
> I came up with the following fix, which fixes the issue for me.
I have not tested your fix, but it looks good to me and does address a real issue.
Please send it to netdev.
Antoine
> From ca25785bd1a679e72ed77e939b19360bfd0eecea Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <marc.zyngier@xxxxxxx>
> Date: Mon, 29 Oct 2018 17:07:25 +0000
> Subject: [PATCH] net: mvpp2: Fix affinity hint allocation
>
> The mvpp2 driver has the curious behaviour of passing a stack variable
> to irq_set_affinity_hint(), which results in the kernel exploding
> the first time anyone accesses this information. News flash: userspace
> does, and irqbalance will happily take the machine down. Great stuff.
>
> An easy fix is to track the mask within the queue_vector structure,
> and to make sure it has the same lifetime as the interrupt itself.
>
> Fixes: e531f76757eb ("net: mvpp2: handle cases where more CPUs are available than s/w threads")
> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
> ---
> drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 1 +
> .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 18 ++++++++++++++----
> 2 files changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> index 176c6b56fdcc..398328f10743 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> @@ -796,6 +796,7 @@ struct mvpp2_queue_vector {
> int nrxqs;
> u32 pending_cause_rx;
> struct mvpp2_port *port;
> + struct cpumask *mask;
> };
>
> struct mvpp2_port {
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index 14f9679c957c..7a37a37e3fb3 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -3298,24 +3298,30 @@ static int mvpp2_irqs_init(struct mvpp2_port *port)
> for (i = 0; i < port->nqvecs; i++) {
> struct mvpp2_queue_vector *qv = port->qvecs + i;
>
> - if (qv->type == MVPP2_QUEUE_VECTOR_PRIVATE)
> + if (qv->type == MVPP2_QUEUE_VECTOR_PRIVATE) {
> + qv->mask = kzalloc(cpumask_size(), GFP_KERNEL);
> + if (!qv->mask) {
> + err = -ENOMEM;
> + goto err;
> + }
> +
> irq_set_status_flags(qv->irq, IRQ_NO_BALANCING);
> + }
>
> err = request_irq(qv->irq, mvpp2_isr, 0, port->dev->name, qv);
> if (err)
> goto err;
>
> if (qv->type == MVPP2_QUEUE_VECTOR_PRIVATE) {
> - unsigned long mask = 0;
> unsigned int cpu;
>
> for_each_present_cpu(cpu) {
> if (mvpp2_cpu_to_thread(port->priv, cpu) ==
> qv->sw_thread_id)
> - mask |= BIT(cpu);
> + cpumask_set_cpu(cpu, qv->mask);
> }
>
> - irq_set_affinity_hint(qv->irq, to_cpumask(&mask));
> + irq_set_affinity_hint(qv->irq, qv->mask);
> }
> }
>
> @@ -3325,6 +3331,8 @@ static int mvpp2_irqs_init(struct mvpp2_port *port)
> struct mvpp2_queue_vector *qv = port->qvecs + i;
>
> irq_set_affinity_hint(qv->irq, NULL);
> + kfree(qv->mask);
> + qv->mask = NULL;
> free_irq(qv->irq, qv);
> }
>
> @@ -3339,6 +3347,8 @@ static void mvpp2_irqs_deinit(struct mvpp2_port *port)
> struct mvpp2_queue_vector *qv = port->qvecs + i;
>
> irq_set_affinity_hint(qv->irq, NULL);
> + kfree(qv->mask);
> + qv->mask = NULL;
> irq_clear_status_flags(qv->irq, IRQ_NO_BALANCING);
> free_irq(qv->irq, qv);
> }
> --
> 2.19.1
>
>
> --
> Jazz is not dead. It just smells funny...
--
Antoine Ténart, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com