Re: [PATCH v3 10/11] perf/x86/intel: Perform rotation on Intel CQM RMIDs

From: Peter Zijlstra
Date: Fri Nov 07 2014 - 07:06:27 EST


On Thu, Nov 06, 2014 at 12:23:21PM +0000, Matt Fleming wrote:
> +/*
> + * Exchange the RMID of a group of events.
> + */
> +static unsigned int
> +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
> +{
> +	struct perf_event *event;
> +	unsigned int old_rmid = group->hw.cqm_rmid;
> +	struct list_head *head = &group->hw.cqm_group_entry;
> +
> +	lockdep_assert_held(&cache_mutex);
> +
> +	/*
> +	 * If our RMID is being deallocated, perform a read now.
> +	 */
> +	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
> +		struct intel_cqm_count_info info;
> +
> +		local64_set(&group->count, 0);
> +		info.event = group;
> +
> +		preempt_disable();
> +		smp_call_function_many(&cqm_cpumask, __intel_cqm_event_count,
> +				       &info, 1);
> +		preempt_enable();
> +	}

This suffers from the same issue as before: why not call that one
function instead of reimplementing it here?

Also, I don't think we'd ever swap an RMID for another valid one, right?
So we could do this read/update unconditionally.
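
Something like the below, perhaps? Completely untested sketch;
cqm_read_group() is a name I just made up, and I'm assuming the
__intel_cqm_event_count() IPI handler from earlier in the series keeps
its current signature:

static void cqm_read_group(struct perf_event *group)
{
	struct intel_cqm_count_info info;

	/* Reset the count and broadcast the read to all CQM CPUs. */
	local64_set(&group->count, 0);
	info.event = group;

	preempt_disable();
	smp_call_function_many(&cqm_cpumask, __intel_cqm_event_count,
			       &info, 1);
	preempt_enable();
}

Then the head of intel_cqm_xchg_rmid() reduces to:

	lockdep_assert_held(&cache_mutex);

	/*
	 * We never exchange one valid RMID for another, so read out
	 * the old RMID unconditionally before it goes away.
	 */
	cqm_read_group(group);

and the existing ->count() path could call cqm_read_group() too,
instead of duplicating the IPI dance.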

> +
> +	raw_spin_lock_irq(&cache_lock);
> +
> +	group->hw.cqm_rmid = rmid;
> +	list_for_each_entry(event, head, hw.cqm_group_entry)
> +		event->hw.cqm_rmid = rmid;
> +
> +	raw_spin_unlock_irq(&cache_lock);
> +
> +	return old_rmid;
> +}