Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
From: Alex Shi
Date: Wed May 16 2012 - 04:55:15 EST
On 05/16/2012 04:04 PM, Peter Zijlstra wrote:
> On Wed, 2012-05-16 at 10:00 +0200, Peter Zijlstra wrote:
>> On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote:
>>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
>>> index 75e888b..ed6642a 100644
>>> --- a/include/asm-generic/tlb.h
>>> +++ b/include/asm-generic/tlb.h
>>> @@ -86,6 +86,8 @@ struct mmu_gather {
>>> #ifdef CONFIG_HAVE_RCU_TABLE_FREE
>>> struct mmu_table_batch *batch;
>>> #endif
>>> + unsigned long start;
>>> + unsigned long end;
>>> unsigned int need_flush : 1, /* Did free PTEs */
>>> fast_mode : 1; /* No batching */
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 6105f47..b176172 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
>>> tlb->mm = mm;
>>>
>>> tlb->fullmm = fullmm;
>>> + tlb->start = -1UL;
>>> + tlb->end = 0;
>>> tlb->need_flush = 0;
>>> tlb->fast_mode = (num_possible_cpus() == 1);
>>> tlb->local.next = NULL;
>
> Also, you just broke compilation on a bunch of archs.. again.
Sorry. Do you mean that not every arch uses 'include/asm-generic/tlb.h', so
the assignments to tlb->start and tlb->end in tlb_gather_mmu break the build there?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/