Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu

From: Stephane Eranian
Date: Fri Nov 26 2010 - 05:06:55 EST


On Fri, Nov 26, 2010 at 10:00 AM, Lin Ming <lin@xxxxxxx> wrote:
> On Fri, Nov 26, 2010 at 4:33 PM, Stephane Eranian <eranian@xxxxxxxxxx> wrote:
>> Lin,
>>
>> Looked at the perfmon code, and it seems the mask is of actual
>> cores, not threads:
>>         rdmsrl(MSR_NHM_UNC_GLOBAL_CTRL, val);
>>         val |= 1ULL << (48 + cpu_data(smp_processor_id()).cpu_core_id);
>>         wrmsrl(MSR_NHM_UNC_GLOBAL_CTRL, val);
>>
>> That seems to imply both threads will get the interrupt.
>>
>> If the overflowed event was programmed from one of the two threads, that
>> means one thread will process the overflow and the other will get a
>> spurious interrupt.
>>
>> On the cores where no uncore event was programmed, both threads will get
>> a spurious interrupt.
>
> But in my test, if HT is on, only the 2 threads in one of the four cores
> will receive the interrupt. Even worse, we don't know which core will
> receive the interrupt when overflow happens.
>
MSR_NHM_UNC_GLOBAL_CTRL is per socket, not per core.

> I'll do more tests to verify this.

In your tests, are you programming the same uncore event
across all CPUs? If so, then you may have a race condition
setting the MSR, because it is a read-modify-write sequence.

What if you program only one uncore event from one CPU?

>
>>
>> That brings us back to the 'spurious interrupt' issue and the 'NMI Dazed'
>> message that Don tried to eliminate. Now we have a new situation where we
>> will get an interrupt with no work to do, so perf_event will pass the
>> interrupt on to the next subsystem and eventually we will get the 'dazed'
>> message. I am just guessing here....
>
> Add Don.
>
> Thanks,
> Lin Ming
>
>>
>>
>> On Fri, Nov 26, 2010 at 9:18 AM, Stephane Eranian <eranian@xxxxxxxxxx> wrote:
>>> On Fri, Nov 26, 2010 at 6:15 AM, Lin Ming <lin@xxxxxxx> wrote:
>>>> On Tue, Nov 23, 2010 at 6:17 PM, Stephane Eranian <eranian@xxxxxxxxxx> wrote:
>>>>> Lin,
>>>>>
>>>>> On Sun, Nov 21, 2010 at 1:01 PM, Lin Ming <ming.m.lin@xxxxxxxxx> wrote:
>>>>>> +static void uncore_pmu_enable_all(void)
>>>>>> +{
>>>>>> +       u64 ctrl;
>>>>>> +
>>>>>> +       /*
>>>>>> +        * (0xFULL << 48): 1 of the 4 cores can receive NMI each time,
>>>>>> +        * but we don't know which core will receive the NMI when overflow happens
>>>>>> +        */
>>>>>
>>>>> That does not sound right. If you set bits 48-51 to 1, then all 4 cores
>>>>> will receive EVERY interrupt, i.e., it's a broadcast. That seems to
>>>>> contradict your comment: 1 of the 4. Unless you meant they all get the
>>>>> interrupt, one will handle it, and the others will find nothing to
>>>>> process. But I don't see the atomic op that would make this true in
>>>>> uncore_handle_irq().
>>>>
>>>> Stephane,
>>>>
>>>> The interrupt model is strange, it behaves differently when HT on/off.
>>>>
>>>> If HT is off, all 4 cores will receive every interrupt, i.e., it's a broadcast.
>>>>
>>> That's if you set the mask to 0xf, right?
>>>
>>> In the perf_event model, given that any one of the 4 cores can be used
>>> to program uncore events, you have no choice but to broadcast to all
>>> 4 cores. Each has to demultiplex and figure out which of its counters
>>> have overflowed.
>>>
>>>> If HT is on, only 1 of the 4 cores will receive the interrupt (both
>>>> threads in that core receive the interrupt), and it can't be determined
>>>> which core will receive it.
>>>>
>>>> Did you ever observe this?
>>>>
>>> No because I never set more than one bit in the mask.
>>>
>>>> I tried to set the mask to 0xff when HT is on, but the kernel panics,
>>>> because reserved bits are set.
>>>
>>> Let me check on this. It would seem to imply that in HT mode, both threads
>>> necessarily receive the interrupts.
>>>
>>> Was that on Nehalem or Westmere?
>>>
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/