Yeah; although I'm not sure if it's an implementation or specification
problem. But as it exists it is of very limited use.
Fundamentally our events (with the exception of event groups) are
independent. Events should always count, except when the PMI is running
-- so as to not include the measurement overhead in the measurement
itself. But this (mis)feature stops the entire PMU as soon as a single
counter overflows, inhibiting all other counters from running (as they
should) until the PMI has happened and reset the state.
(Note that, strictly speaking, we even expect the overflowing counter to
continue counting until the PMI happens. Having an overflow should not
mean we lose events. A sampling and !sampling event should produce the
same event count.)
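Something like the below should show it (untested; the event choice,
period and loop size are all arbitrary, and error handling is omitted):
count instructions over a fixed loop, once as a plain counting event and
once as a sampling event, and compare the totals. With the counter
stopping at overflow and only resuming after the PMI, the sampling total
comes up short.

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static void work(void)
{
        volatile unsigned long i;

        for (i = 0; i < 100000000UL; i++)
                ;
}

/*
 * Count PERF_COUNT_HW_INSTRUCTIONS over work(); period == 0 gives a
 * plain counting event, anything else a sampling event.
 */
static long long count_insns(unsigned long long period)
{
        struct perf_event_attr attr = { 0 };
        long long count = 0;
        int fd;

        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.sample_period = period;
        attr.disabled = 1;

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        work();
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        read(fd, &count, sizeof(count));
        close(fd);

        return count;
}

int main(void)
{
        printf("!sampling: %lld\n", count_insns(0));
        printf(" sampling: %lld\n", count_insns(100000));

        return 0;
}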
So even when there's only a single event (group) scheduled, it isn't
strictly right. And when there are multiple events scheduled it is
definitely wrong.
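The multi-event case can be poked at the same way (again untested, names
and numbers arbitrary, error handling omitted): read a plain counting
instructions event over the same fixed loop, once on its own and once
with an unrelated, frequently overflowing sampling event scheduled next
to it. If the whole PMU freezes on overflow, the second reading drops
even though neither the work nor the event being read changed.

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_open(struct perf_event_attr *attr)
{
        return syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
}

static void work(void)
{
        volatile unsigned long i;

        for (i = 0; i < 100000000UL; i++)
                ;
}

/*
 * Read an instructions counter over work(); optionally enable a second
 * (sampling) event for the duration.
 */
static long long measure(int sampling_fd)
{
        struct perf_event_attr insn = { 0 };
        long long count = 0;
        int fd;

        insn.type = PERF_TYPE_HARDWARE;
        insn.size = sizeof(insn);
        insn.config = PERF_COUNT_HW_INSTRUCTIONS;
        insn.disabled = 1;

        fd = perf_open(&insn);
        if (sampling_fd >= 0)
                ioctl(sampling_fd, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        work();
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        if (sampling_fd >= 0)
                ioctl(sampling_fd, PERF_EVENT_IOC_DISABLE, 0);
        read(fd, &count, sizeof(count));
        close(fd);

        return count;
}

int main(void)
{
        struct perf_event_attr cyc = { 0 };
        int sfd;

        cyc.type = PERF_TYPE_HARDWARE;
        cyc.size = sizeof(cyc);
        cyc.config = PERF_COUNT_HW_CPU_CYCLES;
        cyc.sample_period = 100000;     /* overflow often */
        cyc.disabled = 1;

        sfd = perf_open(&cyc);

        printf("alone:         %lld\n", measure(-1));
        printf("with sampling: %lld\n", measure(sfd));

        return 0;
}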
And while I understand the purpose of the current semantics (it makes a
single event group's sample count more coherent), the fact that it loses
events just bugs me something fierce -- and, as shown, it breaks tools.