Re: [RESEND PATCH 3/6] perf/x86/intel/uncore: Extract codes of box ref/unref

From: Peter Zijlstra
Date: Wed May 08 2019 - 07:52:49 EST


On Tue, Apr 30, 2019 at 05:53:45PM -0700, kan.liang@xxxxxxxxxxxxxxx wrote:
> +static void uncore_box_unref(struct intel_uncore_type **types, int id)
> {
> +	struct intel_uncore_type *type;
> 	struct intel_uncore_pmu *pmu;
> 	struct intel_uncore_box *box;
> +	int i;
> +
> +	for (; *types; types++) {
> +		type = *types;
> +		pmu = type->pmus;
> +		for (i = 0; i < type->num_boxes; i++, pmu++) {
> +			box = pmu->boxes[id];
> +			if (box && atomic_dec_return(&box->refcnt) == 0)
> +				uncore_box_exit(box);
> +		}
> +	}
> +}

> +static int uncore_box_ref(struct intel_uncore_type **types,
> +			  int id, unsigned int cpu)
> {
> +	struct intel_uncore_type *type;
> 	struct intel_uncore_pmu *pmu;
> 	struct intel_uncore_box *box;
> +	int i, ret;
>
> +	ret = allocate_boxes(types, id, cpu);
> 	if (ret)
> 		return ret;
>
> @@ -1232,11 +1238,22 @@ static int uncore_event_cpu_online(unsigned int cpu)
> 		type = *types;
> 		pmu = type->pmus;
> 		for (i = 0; i < type->num_boxes; i++, pmu++) {
> +			box = pmu->boxes[id];
> 			if (box && atomic_inc_return(&box->refcnt) == 1)
> 				uncore_box_init(box);
> 		}
> 	}
> +	return 0;
> +}

This relies on all online/offline events to be globally serialized, and
they are. But that does make me wonder why we're using atomic_t.

Without the serialization there is an online-online race: the first
online sets the refcount to 1 and lets the second online continue
before the first has completed the init().

Anyway, the code isn't wrong, just weird.