Re: [PATCH v2 2/4] irqdomain: Fix mapping-creation race
From: Marc Zyngier
Date: Thu Sep 15 2022 - 04:54:39 EST
Johan,
On Thu, 01 Sep 2022 15:28:14 +0100,
Johan Hovold <johan+linaro@xxxxxxxxxx> wrote:
>
> Parallel probing (e.g. due to asynchronous probing) of devices that share
> interrupts can currently result in two mappings being created for the
> same hardware interrupt.
>
> Add a serialising mapping mutex so that looking for an existing mapping
> before creating a new one is done atomically.
>
> Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
> Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
> Cc: Dmitry Torokhov <dtor@xxxxxxxxxxxx>
> Cc: Jon Hunter <jonathanh@xxxxxxxxxx>
> Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@xxxxxxxxxxxxxxxxxxxx
> Signed-off-by: Johan Hovold <johan+linaro@xxxxxxxxxx>
> ---
> include/linux/irqdomain.h | 2 ++
> kernel/irq/irqdomain.c | 33 ++++++++++++++++++++++++---------
> 2 files changed, 26 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> index 00d577f90883..8df9b9586e29 100644
> --- a/include/linux/irqdomain.h
> +++ b/include/linux/irqdomain.h
> @@ -144,6 +144,7 @@ struct irq_domain_chip_generic;
> * core code.
> * @flags: host per irq_domain flags
> * @mapcount: The number of mapped interrupts
> + * @map_mutex: Mapping lock
> *
> * Optional elements
> * @fwnode: Pointer to firmware node associated with the irq_domain. Pretty easy
> @@ -168,6 +169,7 @@ struct irq_domain {
> void *host_data;
> unsigned int flags;
> unsigned int mapcount;
> + struct mutex map_mutex;
>
> /* Optional data */
> struct fwnode_handle *fwnode;
> diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
> index 24ddd8d9b597..1af1d141e165 100644
> --- a/kernel/irq/irqdomain.c
> +++ b/kernel/irq/irqdomain.c
> @@ -215,6 +215,7 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> /* Fill structure */
> INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
> mutex_init(&domain->revmap_mutex);
> + mutex_init(&domain->map_mutex);
> domain->ops = ops;
> domain->host_data = host_data;
> domain->hwirq_max = hwirq_max;
> @@ -721,14 +722,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
> return 0;
> }
>
> + mutex_lock(&domain->map_mutex);
> +
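If I'm reading the intent right, this turns the mapping path into the
usual lookup-or-create-under-a-single-lock pattern. For illustration
only, here is a standalone analogue (userspace C, all names invented
here, with a pthread_mutex_t standing in for the kernel mutex):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned int mapping;            /* 0 == no mapping yet */
static unsigned int next_virq = 16;     /* arbitrary starting virq */

/* Lookup and creation under one lock, so parallel callers share one virq. */
static unsigned int lookup_or_create(void)
{
        unsigned int virq;

        pthread_mutex_lock(&map_mutex);

        virq = mapping;                 /* stands in for irq_find_mapping() */
        if (!virq) {
                virq = next_virq++;     /* stands in for creating the mapping */
                mapping = virq;
        }

        pthread_mutex_unlock(&map_mutex);

        return virq;
}

static void *probe(void *unused)
{
        /* Both "probes" must end up with the same virq. */
        printf("probe: virq %u\n", lookup_or_create());
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, probe, NULL);
        pthread_create(&b, NULL, probe, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        return 0;
}

Without the lock spanning both the lookup and the creation, the two
probes can each see "no mapping" and each create one, which is the
duplicate-mapping race the changelog describes.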
I must confess I have a hard time figuring out the semantic difference
between map_mutex and revmap_mutex, or rather what purpose revmap_mutex
still serves once map_mutex is taken. They fundamentally overlap, and I
have the feeling one should eventually replace the other.

At the very least, you should clearly define and document how these two
locks interact.
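To illustrate what I mean by overlap: if every path that writes the
reverse map already runs under the new map_mutex, the inner revmap_mutex
no longer excludes any writer from any other writer. A deliberately
simplified sketch (again userspace C, all names invented here):

#include <pthread.h>

static pthread_mutex_t map_mutex    = PTHREAD_MUTEX_INITIALIZER; /* outer, new */
static pthread_mutex_t revmap_mutex = PTHREAD_MUTEX_INITIALIZER; /* inner, existing */

static unsigned int revmap[32];

static void create_mapping(unsigned int hwirq, unsigned int virq)
{
        pthread_mutex_lock(&map_mutex);         /* already serialises all writers */

        /* ... look up any existing mapping, allocate the virq, etc. ... */

        pthread_mutex_lock(&revmap_mutex);      /* what does this still buy us? */
        revmap[hwirq] = virq;
        pthread_mutex_unlock(&revmap_mutex);

        pthread_mutex_unlock(&map_mutex);
}

int main(void)
{
        create_mapping(3, 19);                  /* arbitrary example numbers */
        return 0;
}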
Thanks,
M.
--
Without deviation from the norm, progress is not possible.