Re: [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}

From: Jianyu Zhan
Date: Thu Nov 28 2013 - 13:09:52 EST



Hi, Vladimir,

On Fri, Nov 29, 2013 at 1:41 AM, Vladimir Murzin <murzin.v@xxxxxxxxx> wrote:
>
> Any numbers for efficiency?
>

In the original implementation, vmalloc_to_pfn() wraps vmalloc_to_page(),
which means

pfn ------> struct page ------> pfn
       |                   |
 vmalloc_to_page()   vmalloc_to_pfn()

So this patch interchanges the implementations: the dirty page-table
walking code is done in vmalloc_to_pfn(), and vmalloc_to_page() now wraps
it. The graph becomes

pfn ------> struct page
 |                |
vmalloc_to_pfn()  vmalloc_to_page()


>> /*
>> - * Walk a vmap address to the struct page it maps.
>> + * Walk a vmap address to the physical pfn it maps to.
>> */
>> -struct page *vmalloc_to_page(const void *vmalloc_addr)
>> +unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
>> {
>> unsigned long addr = (unsigned long) vmalloc_addr;
>> - struct page *page = NULL;
>> + unsigned long pfn;
>
> uninitialized pfn will lead to a bug.
>

Why? Good coding practice mandates that we use a variable only after
initializing it. And if we do initialize it, to what value would it be
guaranteed bug-free? It is unlikely that a garbage initial value will
creep in.


>> /*
>> @@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>> ptep = pte_offset_map(pmd, addr);
>> pte = *ptep;
>> if (pte_present(pte))
>> - page = pte_page(pte);
>> + pfn = pte_page(pte);
>
> page_to_pfn is missed here.
>
> Have you ever tested there is no functional changes?

Oh, gods. My fault. There are indeed no functional changes intended.

I just sent the incorrect patch...

it should be
- page = pte_page(pte);
+ pfn = pte_pfn(pte);

Here is the resent patch:


---
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0fdf968..e4f0db2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -220,12 +220,12 @@ int is_vmalloc_or_module_addr(const void *x)
}

/*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the physical pfn it maps to.
*/
-struct page *vmalloc_to_page(const void *vmalloc_addr)
+unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
{
unsigned long addr = (unsigned long) vmalloc_addr;
- struct page *page = NULL;
+ unsigned long pfn = 0;
pgd_t *pgd = pgd_offset_k(addr);

/*
@@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
ptep = pte_offset_map(pmd, addr);
pte = *ptep;
if (pte_present(pte))
- page = pte_page(pte);
+ pfn = pte_pfn(pte);
pte_unmap(ptep);
}
}
}
- return page;
+ return pfn;
}
-EXPORT_SYMBOL(vmalloc_to_page);
+EXPORT_SYMBOL(vmalloc_to_pfn);

/*
- * Map a vmalloc()-space virtual address to the physical page frame number.
+ * Map a vmalloc()-space virtual address to the struct page.
*/
-unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
+struct page *vmalloc_to_page(const void *vmalloc_addr)
{
- return page_to_pfn(vmalloc_to_page(vmalloc_addr));
+ return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
}
-EXPORT_SYMBOL(vmalloc_to_pfn);
+EXPORT_SYMBOL(vmalloc_to_page);


/*** Global kva allocator ***/
--