Re: [PATCH] x86: set_highmem_pages_init() cleanup

From: Ingo Molnar
Date: Wed Mar 04 2009 - 13:01:41 EST



* Ingo Molnar <mingo@xxxxxxx> wrote:

>
> * Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
>
> > On Tue, 2009-03-03 at 13:15 +0100, Ingo Molnar wrote:
> > > * Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
> > >
> > > > From: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
> > > >
> > > > Impact: cleanup
> > > >
> > > > This patch moves set_highmem_pages_init() to arch/x86/mm/highmem_32.c. The
> > > > declaration of the function is kept in asm/numa_32.h because asm/highmem.h is
> > > > included only when CONFIG_HIGHMEM is enabled, so we can't put the empty static inline
> > > > function there.
> > > >
> > > > Signed-off-by: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
> > > > ---
> > > > arch/x86/include/asm/numa_32.h | 6 +++++-
> > > > arch/x86/mm/highmem_32.c | 34 ++++++++++++++++++++++++++++++++++
> > > > arch/x86/mm/init_32.c | 12 ------------
> > > > arch/x86/mm/numa_32.c | 26 --------------------------
> > > > 4 files changed, 39 insertions(+), 39 deletions(-)
> > >
> > > Applied, thanks!
> > >
> > > One question:
> > >
> > > > diff --git a/arch/x86/include/asm/numa_32.h b/arch/x86/include/asm/numa_32.h
> > > > index e9f5db7..a372290 100644
> > > > --- a/arch/x86/include/asm/numa_32.h
> > > > +++ b/arch/x86/include/asm/numa_32.h
> > > > @@ -4,8 +4,12 @@
> > > > extern int pxm_to_nid(int pxm);
> > > > extern void numa_remove_cpu(int cpu);
> > > >
> > > > -#ifdef CONFIG_NUMA
> > > > +#ifdef CONFIG_HIGHMEM
> > > > extern void set_highmem_pages_init(void);
> > > > +#else
> > > > +static inline void set_highmem_pages_init(void)
> > > > +{
> > > > +}
> > > > #endif
> > > >
> > > > #endif /* _ASM_X86_NUMA_32_H */
> > > > diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
> > > > index bcc079c..13a823c 100644
> > > > --- a/arch/x86/mm/highmem_32.c
> > > > +++ b/arch/x86/mm/highmem_32.c
> > > > @@ -1,5 +1,6 @@
> > > > #include <linux/highmem.h>
> > > > #include <linux/module.h>
> > > > +#include <linux/swap.h> /* for totalram_pages */
> > > >
> > > > void *kmap(struct page *page)
> > > > {
> > > > @@ -156,3 +157,36 @@ EXPORT_SYMBOL(kmap);
> > > > EXPORT_SYMBOL(kunmap);
> > > > EXPORT_SYMBOL(kmap_atomic);
> > > > EXPORT_SYMBOL(kunmap_atomic);
> > > > +
> > > > +#ifdef CONFIG_NUMA
> > > > +void __init set_highmem_pages_init(void)
> > > > +{
> > > > + struct zone *zone;
> > > > + int nid;
> > > > +
> > > > + for_each_zone(zone) {
> > > > + unsigned long zone_start_pfn, zone_end_pfn;
> > > > +
> > > > + if (!is_highmem(zone))
> > > > + continue;
> > > > +
> > > > + zone_start_pfn = zone->zone_start_pfn;
> > > > + zone_end_pfn = zone_start_pfn + zone->spanned_pages;
> > > > +
> > > > + nid = zone_to_nid(zone);
> > > > + printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
> > > > + zone->name, nid, zone_start_pfn, zone_end_pfn);
> > > > +
> > > > + add_highpages_with_active_regions(nid, zone_start_pfn,
> > > > + zone_end_pfn);
> > > > + }
> > > > + totalram_pages += totalhigh_pages;
> > > > +}
> > > > +#else
> > > > +static void __init set_highmem_pages_init(void)
> > > > +{
> > > > + add_highpages_with_active_regions(0, highstart_pfn, highend_pfn);
> > > > +
> > > > + totalram_pages += totalhigh_pages;
> > > > +}
> > > > +#endif /* CONFIG_NUMA */
> > >
> > > Couldn't we just wrap the !NUMA case into the NUMA case and just
> > > have the NUMA function present, by making for_each_zone() a
> > > two-entry matter and making the second entry contain:
> > >
> > > zone->zone_start_pfn := highstart_pfn
> > > zone->spanned_pages := highend_pfn-highstart_pfn
> >
> > OK, I don't quite understand your suggestion here. The zones
> > are set up completely at this stage so AFAICT, even for the
> > UMA case, there should be a highmem zone there. So unless I am
> > missing something here, I think we can get away with something
> > as simple as the following.
>
> ok, good - it is an easier cleanup than i hoped it would be :-)

s/hoped/thought

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/