Re: [RFC PATCH v2 3/4] hp: Implement Hazard Pointers
From: Joel Fernandes
Date: Fri Oct 04 2024 - 17:26:35 EST
On Fri, Oct 4, 2024 at 2:29 PM Mathieu Desnoyers
<mathieu.desnoyers@xxxxxxxxxxxx> wrote:
>
> This API provides existence guarantees of objects through Hazard
> Pointers (HP). This minimalist implementation is specific to use
> with preemption disabled, but can be extended further as needed.
>
> Each HP domain defines a fixed number of hazard pointer slots (nr_cpus)
> across the entire system.
>
> Its main benefit over RCU is that it allows fast reclaim of
> HP-protected pointers without needing to wait for a grace period.
>
> It also allows the hazard pointer scan to call a user-defined callback
> to retire a hazard pointer slot immediately if needed. This callback
> may, for instance, issue an IPI to the relevant CPU.
>
> There are a few possible use-cases for this in the Linux kernel:
>
> - Improve performance of mm_count by replacing lazy active mm by HP.
> - Guarantee object existence on pointer dereference to use refcount:
> - replace locking used for that purpose in some drivers,
> - replace RCU + inc_not_zero pattern,
> - rtmutex: Improve situations where locks need to be taken in
> reverse dependency chain order by guaranteeing existence of
> first and second locks in traversal order, allowing them to be
> locked in the correct order (which is reverse from traversal
> order) rather than try-lock+retry on nested lock.
>
> References:
>
> [1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
> lock-free objects," in IEEE Transactions on Parallel and
> Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004
[ ... ]
> ---
> Changes since v0:
> - Remove slot variable from hp_dereference_allocate().
> ---
> include/linux/hp.h | 158 +++++++++++++++++++++++++++++++++++++++++++++
> kernel/Makefile | 2 +-
> kernel/hp.c | 46 +++++++++++++
Just a housekeeping comment: ISTR Linus frowning on putting bodies of C
code into header files (like hp_dereference_allocate()). I understand
the rationale may be that these functions are meant to be inlined, but
do all of them have to be? Such headers also hurt code-browsing tools
like clangd: clangd cannot process a header on its own because a header
is not independently compilable -- it relies on the compiler to
generate and extract the AST for code browsing/completion.
Also, have you measured the benefit of inlining for hp.h?
hp_dereference_allocate() seems large enough that inlining may not buy
much, but I haven't compiled it and looked at the asm myself.
Will continue staring at the code.
thanks,
- Joel
> 3 files changed, 205 insertions(+), 1 deletion(-)
> create mode 100644 include/linux/hp.h
> create mode 100644 kernel/hp.c
>
> diff --git a/include/linux/hp.h b/include/linux/hp.h
> new file mode 100644
> index 000000000000..e85fc4365ea2
> --- /dev/null
> +++ b/include/linux/hp.h
> @@ -0,0 +1,158 @@
> +// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> +//
> +// SPDX-License-Identifier: LGPL-2.1-or-later
> +
> +#ifndef _LINUX_HP_H
> +#define _LINUX_HP_H
> +
> +/*
> + * HP: Hazard Pointers
> + *
> + * This API provides existence guarantees of objects through hazard
> + * pointers.
> + *
> + * It uses a fixed number of hazard pointer slots (nr_cpus) across the
> + * entire system for each HP domain.
> + *
> + * Its main benefit over RCU is that it allows fast reclaim of
> + * HP-protected pointers without needing to wait for a grace period.
> + *
> + * It also allows the hazard pointer scan to call a user-defined callback
> + * to retire a hazard pointer slot immediately if needed. This callback
> + * may, for instance, issue an IPI to the relevant CPU.
> + *
> + * References:
> + *
> + * [1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
> + * lock-free objects," in IEEE Transactions on Parallel and
> + * Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004
> + */
> +
> +#include <linux/rcupdate.h>
> +
> +/*
> + * Hazard pointer slot.
> + */
> +struct hp_slot {
> + void *addr;
> +};
> +
> +/*
> + * Hazard pointer context, returned by hp_use().
> + */
> +struct hp_ctx {
> + struct hp_slot *slot;
> + void *addr;
> +};
> +
> +/*
> + * hp_scan: Scan hazard pointer domain for @addr.
> + *
> + * Scan hazard pointer domain for @addr.
> + * If @retire_cb is non-NULL, invoke @retire_cb for each slot containing
> + * @addr.
> + * Wait to observe that each slot contains a value that differs from
> + * @addr before returning.
> + */
> +void hp_scan(struct hp_slot __percpu *percpu_slots, void *addr,
> + void (*retire_cb)(int cpu, struct hp_slot *slot, void *addr));
> +
> +/* Get the hazard pointer context address (may be NULL). */
> +static inline
> +void *hp_ctx_addr(struct hp_ctx ctx)
> +{
> + return ctx.addr;
> +}
> +
> +/*
> + * hp_allocate: Allocate a hazard pointer.
> + *
> + * Allocate a hazard pointer slot for @addr. The object existence should
> + * be guaranteed by the caller. Expects to be called from preempt
> + * disable context.
> + *
> + * Returns a hazard pointer context.
> + */
> +static inline
> +struct hp_ctx hp_allocate(struct hp_slot __percpu *percpu_slots, void *addr)
> +{
> + struct hp_slot *slot;
> + struct hp_ctx ctx;
> +
> + if (!addr)
> + goto fail;
> + slot = this_cpu_ptr(percpu_slots);
> + /*
> + * A single hazard pointer slot per CPU is available currently.
> + * Other hazard pointer domains can eventually have a different
> + * configuration.
> + */
> + if (READ_ONCE(slot->addr))
> + goto fail;
> + WRITE_ONCE(slot->addr, addr); /* Store B */
> + ctx.slot = slot;
> + ctx.addr = addr;
> + return ctx;
> +
> +fail:
> + ctx.slot = NULL;
> + ctx.addr = NULL;
> + return ctx;
> +}
> +
> +/*
> + * hp_dereference_allocate: Dereference and allocate a hazard pointer.
> + *
> + * Returns a hazard pointer context. Expects to be called from preempt
> + * disable context.
> + */
> +static inline
> +struct hp_ctx hp_dereference_allocate(struct hp_slot __percpu *percpu_slots, void * const * addr_p)
> +{
> + void *addr, *addr2;
> + struct hp_ctx ctx;
> +
> + addr = READ_ONCE(*addr_p);
> +retry:
> + ctx = hp_allocate(percpu_slots, addr);
> + if (!hp_ctx_addr(ctx))
> + goto fail;
> + /* Memory ordering: Store B before Load A. */
> + smp_mb();
> + /*
> + * Use RCU dereference without lockdep checks, because
> + * lockdep is not aware of HP guarantees.
> + */
> + addr2 = rcu_access_pointer(*addr_p); /* Load A */
> + /*
> + * If @addr_p content has changed since the first load,
> + * clear the hazard pointer and try again.
> + */
> + if (!ptr_eq(addr2, addr)) {
> + WRITE_ONCE(ctx.slot->addr, NULL);
> + if (!addr2)
> + goto fail;
> + addr = addr2;
> + goto retry;
> + }
> + /*
> + * Use addr2 loaded from rcu_access_pointer() to preserve
> + * address dependency ordering.
> + */
> + ctx.addr = addr2;
> + return ctx;
> +
> +fail:
> + ctx.slot = NULL;
> + ctx.addr = NULL;
> + return ctx;
> +}
> +
> +/* Retire the hazard pointer in @ctx. */
> +static inline
> +void hp_retire(const struct hp_ctx ctx)
> +{
> + smp_store_release(&ctx.slot->addr, NULL);
> +}
> +
> +#endif /* _LINUX_HP_H */
> diff --git a/kernel/Makefile b/kernel/Makefile
> index 3c13240dfc9f..ec16de96fa80 100644
> --- a/kernel/Makefile
> +++ b/kernel/Makefile
> @@ -7,7 +7,7 @@ obj-y = fork.o exec_domain.o panic.o \
> cpu.o exit.o softirq.o resource.o \
> sysctl.o capability.o ptrace.o user.o \
> signal.o sys.o umh.o workqueue.o pid.o task_work.o \
> - extable.o params.o \
> + extable.o params.o hp.o \
> kthread.o sys_ni.o nsproxy.o \
> notifier.o ksysfs.o cred.o reboot.o \
> async.o range.o smpboot.o ucount.o regset.o ksyms_common.o
> diff --git a/kernel/hp.c b/kernel/hp.c
> new file mode 100644
> index 000000000000..b2447bf15300
> --- /dev/null
> +++ b/kernel/hp.c
> @@ -0,0 +1,46 @@
> +// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> +//
> +// SPDX-License-Identifier: LGPL-2.1-or-later
> +
> +/*
> + * HP: Hazard Pointers
> + */
> +
> +#include <linux/hp.h>
> +#include <linux/percpu.h>
> +
> +/*
> + * hp_scan: Scan hazard pointer domain for @addr.
> + *
> + * Scan hazard pointer domain for @addr.
> + * If @retire_cb is non-NULL, invoke @retire_cb for each slot containing
> + * @addr.
> + * Wait to observe that each slot contains a value that differs from
> + * @addr before returning.
> + */
> +void hp_scan(struct hp_slot __percpu *percpu_slots, void *addr,
> + void (*retire_cb)(int cpu, struct hp_slot *slot, void *addr))
> +{
> + int cpu;
> +
> + /*
> + * Store A precedes hp_scan(): it unpublishes addr (sets it to
> + * NULL or to a different value), and thus hides it from hazard
> + * pointer readers.
> + */
> +
> + if (!addr)
> + return;
> + /* Memory ordering: Store A before Load B. */
> + smp_mb();
> + /* Scan all CPU slots. */
> + for_each_possible_cpu(cpu) {
> + struct hp_slot *slot = per_cpu_ptr(percpu_slots, cpu);
> +
> + if (retire_cb && smp_load_acquire(&slot->addr) == addr) /* Load B */
> + retire_cb(cpu, slot, addr);
> + /* Busy-wait if node is found. */
> + while ((smp_load_acquire(&slot->addr)) == addr) /* Load B */
> + cpu_relax();
> + }
> +}
> --
> 2.39.2
>