[PATCH] x86: Use clflush() instead of wbinvd() whenever possible when changing mapping

From: Thomas Hellstrom
Date: Fri Jul 24 2009 - 03:53:13 EST


The current code uses wbinvd() when the area to flush is larger than 4MB. Although
this may be faster than using clflush(), the effect of wbinvd() on interrupt
latencies can be catastrophic on systems with large caches. Therefore, use
clflush() whenever possible and accept the slight performance hit.

Signed-off-by: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
---
arch/x86/mm/pageattr.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1b734d7..d4327db 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -209,13 +209,12 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
int in_flags, struct page **pages)
{
unsigned int i, level;
- unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */

BUG_ON(irqs_disabled());

- on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);
+ on_each_cpu(__cpa_flush_all, (void *) 0UL, 1);

- if (!cache || do_wbinvd)
+ if (!cache)
return;

/*
--
1.6.1.3
