Re: [PATCH 1/1] mm/vmalloc: convert vmap_lazy_nr to atomic_long_t

From: Uladzislau Rezki
Date: Mon Feb 04 2019 - 05:50:17 EST


Hello, Michal.

On Fri, Feb 01, 2019 at 01:45:28PM +0100, Michal Hocko wrote:
> On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
> > vmap_lazy_nr variable has atomic_t type that is 4 bytes integer
> > value on both 32 and 64 bit systems. lazy_max_pages() deals with
> > "unsigned long" that is 8 bytes on 64 bit system, thus vmap_lazy_nr
> > should be 8 bytes on 64 bit as well.
>
> But do we really need 64b number of _pages_? I have hard time imagine
> that we would have that many lazy pages to accumulate.
>
That is more about using the same variable type, and thus the same size,
on both 32 and 64 bit systems.

<snip>
static void free_vmap_area_noflush(struct vmap_area *va)
{
        int nr_lazy;

        nr_lazy = atomic_add_return((va->va_end - va->va_start) >> PAGE_SHIFT,
                                    &vmap_lazy_nr);
        ...
        if (unlikely(nr_lazy > lazy_max_pages()))
                try_purge_vmap_area_lazy();
<snip>

va_end/va_start are "unsigned long", whereas vmap_lazy_nr (atomic_t) is "int".
The same goes for lazy_max_pages(): it returns an "unsigned long" value.
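
As a minimal userspace sketch (not kernel code, the struct definitions below
only mimic the kernel's atomic counters), the size mismatch on an LP64 system
looks like this:

#include <stdio.h>

/* Userspace stand-ins that mimic the kernel's atomic counters. */
typedef struct { int counter; } atomic_t;        /* 4 bytes         */
typedef struct { long counter; } atomic_long_t;  /* 8 bytes on LP64 */

int main(void)
{
        printf("sizeof(atomic_t)      = %zu\n", sizeof(atomic_t));      /* 4 */
        printf("sizeof(atomic_long_t) = %zu\n", sizeof(atomic_long_t)); /* 8 */
        printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long)); /* 8 */
        return 0;
}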

Answering your question: on 64 bit, the "vmalloc" address space is ~8589719406
pages if PAGE_SIZE is 4096, i.e. a regular 4 byte integer is not enough to hold
that count. I agree it is hard to imagine accumulating so many lazy pages; it
also depends on how much physical memory a system has, which would have to be
terabytes. I am not sure such systems exist.
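
A rough userspace sketch of the arithmetic (the 32 TB vmalloc span and the
PAGE_SHIFT of 12 below are assumptions for x86_64 with 4 KB pages, not values
taken from the patch):

#include <limits.h>
#include <stdio.h>

#define PAGE_SHIFT      12              /* 4096-byte pages (assumed)         */
#define VMALLOC_SPAN    (32ULL << 40)   /* ~32 TB of vmalloc space (assumed) */

int main(void)
{
        unsigned long long pages = VMALLOC_SPAN >> PAGE_SHIFT;

        printf("pages   = %llu\n", pages);      /* 8589934592 */
        printf("INT_MAX = %d\n", INT_MAX);      /* 2147483647 */
        printf("fits in a 4 byte int? %s\n",
               pages <= (unsigned long long)INT_MAX ? "yes" : "no");
        return 0;
}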

Thank you.

--
Vlad Rezki

> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> > ---
> > mm/vmalloc.c | 20 ++++++++++----------
> > 1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index abe83f885069..755b02983d8d 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -632,7 +632,7 @@ static unsigned long lazy_max_pages(void)
> > return log * (32UL * 1024 * 1024 / PAGE_SIZE);
> > }
> >
> > -static atomic_t vmap_lazy_nr = ATOMIC_INIT(0);
> > +static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
> >
> > /*
> > * Serialize vmap purging. There is no actual criticial section protected
> > @@ -650,7 +650,7 @@ static void purge_fragmented_blocks_allcpus(void);
> > */
> > void set_iounmap_nonlazy(void)
> > {
> > - atomic_set(&vmap_lazy_nr, lazy_max_pages()+1);
> > + atomic_long_set(&vmap_lazy_nr, lazy_max_pages()+1);
> > }
> >
> > /*
> > @@ -658,10 +658,10 @@ void set_iounmap_nonlazy(void)
> > */
> > static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> > {
> > + unsigned long resched_threshold;
> > struct llist_node *valist;
> > struct vmap_area *va;
> > struct vmap_area *n_va;
> > - int resched_threshold;
> >
> > lockdep_assert_held(&vmap_purge_lock);
> >
> > @@ -681,16 +681,16 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> > }
> >
> > flush_tlb_kernel_range(start, end);
> > - resched_threshold = (int) lazy_max_pages() << 1;
> > + resched_threshold = lazy_max_pages() << 1;
> >
> > spin_lock(&vmap_area_lock);
> > llist_for_each_entry_safe(va, n_va, valist, purge_list) {
> > - int nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> > + unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> >
> > __free_vmap_area(va);
> > - atomic_sub(nr, &vmap_lazy_nr);
> > + atomic_long_sub(nr, &vmap_lazy_nr);
> >
> > - if (atomic_read(&vmap_lazy_nr) < resched_threshold)
> > + if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
> > cond_resched_lock(&vmap_area_lock);
> > }
> > spin_unlock(&vmap_area_lock);
> > @@ -727,10 +727,10 @@ static void purge_vmap_area_lazy(void)
> > */
> > static void free_vmap_area_noflush(struct vmap_area *va)
> > {
> > - int nr_lazy;
> > + unsigned long nr_lazy;
> >
> > - nr_lazy = atomic_add_return((va->va_end - va->va_start) >> PAGE_SHIFT,
> > - &vmap_lazy_nr);
> > + nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> > + PAGE_SHIFT, &vmap_lazy_nr);
> >
> > /* After this point, we may free va at any time */
> > llist_add(&va->purge_list, &vmap_purge_list);
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs