Re: [PATCH v3 10/11] perf/x86/intel: Perform rotation on Intel CQM RMIDs

From: Peter Zijlstra
Date: Mon Nov 10 2014 - 15:58:57 EST


On Mon, Nov 10, 2014 at 08:43:53PM +0000, Matt Fleming wrote:
> On Fri, 07 Nov, at 01:06:12PM, Peter Zijlstra wrote:
> > On Thu, Nov 06, 2014 at 12:23:21PM +0000, Matt Fleming wrote:
> > > +/*
> > > + * Exchange the RMID of a group of events.
> > > + */
> > > +static unsigned int
> > > +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
> > > +{
> > > +	struct perf_event *event;
> > > +	unsigned int old_rmid = group->hw.cqm_rmid;
> > > +	struct list_head *head = &group->hw.cqm_group_entry;
> > > +
> > > +	lockdep_assert_held(&cache_mutex);
> > > +
> > > +	/*
> > > +	 * If our RMID is being deallocated, perform a read now.
> > > +	 */
> > > +	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
> > > +		struct intel_cqm_count_info info;
> > > +
> > > +		local64_set(&group->count, 0);
> > > +		info.event = group;
> > > +
> > > +		preempt_disable();
> > > +		smp_call_function_many(&cqm_cpumask, __intel_cqm_event_count,
> > > +				       &info, 1);
> > > +		preempt_enable();
> > > +	}
> >
> > This suffers the same issue as before; why not call that one function
> > instead of reimplementing it?
> >
> > Also, I don't think we'd ever swap a valid RMID for another valid one,
> > right? So we could do this read/update unconditionally.
>
> No, we never swap a valid RMID for another valid one, but we do make
> an invalid -> valid transition, so doing the read wouldn't make sense
> in that situation.

Ah indeed.
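
To spell out the transition logic agreed on above, here is a minimal
userspace sketch; the INVALID_RMID sentinel and the helper names are
illustrative assumptions, not the kernel's actual definitions:

#include <stdbool.h>
#include <stdio.h>

#define INVALID_RMID 0	/* hypothetical sentinel value */

static bool rmid_valid(unsigned int rmid)
{
	return rmid != INVALID_RMID;
}

static void xchg_rmid(unsigned int old_rmid, unsigned int new_rmid)
{
	if (rmid_valid(old_rmid) && !rmid_valid(new_rmid)) {
		/* valid -> invalid: last chance to fold the old RMID's
		 * occupancy into the event count before it is recycled. */
		printf("read old RMID %u now\n", old_rmid);
	} else if (!rmid_valid(old_rmid) && rmid_valid(new_rmid)) {
		/* invalid -> valid: nothing to read; counting simply
		 * (re)starts with the fresh RMID. */
		printf("start counting on RMID %u\n", new_rmid);
	}
	/* valid -> valid never happens under rotation, which is why
	 * the read cannot just be made unconditional. */
}

int main(void)
{
	xchg_rmid(5, INVALID_RMID);	/* RMID being reclaimed */
	xchg_rmid(INVALID_RMID, 7);	/* fresh RMID handed out */
	return 0;
}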