Re: [RFC][PATCHv4 6/7] zsmalloc: account the number of compacted pages

From: Sergey Senozhatsky
Date: Wed Jul 01 2015 - 22:13:52 EST


On (07/01/15 16:29), Sergey Senozhatsky wrote:
> if (putback_zspage(.. src_page))
> pool->num_migrated++;

That should, of course, be

	pool->num_migrated += class->pages_per_zspage;

since compacting away a zspage frees class->pages_per_zspage pages, not
just one object.
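
A minimal sketch of that call site (assuming putback_zspage() is also
taught to return the zspage's resulting fullness group, which it already
computes internally):

	if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
		pool->num_migrated += class->pages_per_zspage;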

> (c) or we can check src_page fullness (or simply if src_page->inuse == 0)
> in __zs_compact() and increment ->num_migrated for ZS_EMPTY page. But this
> is what free_zspage() already does.

In other words, something like the diff below; with that we no longer need
->nr_migrated in struct zs_compact_control. Not a real patch, just to
demonstrate the idea.

---

@@ -1596,8 +1596,6 @@ struct zs_compact_control {
 	/* Starting object index within @s_page which used for live object
 	 * in the subpage. */
 	int index;
-	/* How many of objects were migrated */
-	int nr_migrated;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1634,7 +1632,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
 		obj_free(pool, class, used_obj);
-		cc->nr_migrated++;
 	}
 
 	/* Remember last position in this iteration */
@@ -1720,7 +1717,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	struct page *src_page;
 	struct page *dst_page = NULL;
 
-	cc.nr_migrated = 0;
 	spin_lock(&class->lock);
 	while ((src_page = isolate_source_page(class))) {
 
@@ -1748,6 +1744,9 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		if (dst_page == NULL)
 			break;
 
+		if (!src_page->inuse)
+			pool->num_migrated += class->pages_per_zspage;
+
 		putback_zspage(pool, class, dst_page);
 		putback_zspage(pool, class, src_page);
 		spin_unlock(&class->lock);
@@ -1758,8 +1757,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	if (src_page)
 		putback_zspage(pool, class, src_page);
 
-	pool->num_migrated += cc.nr_migrated;
-
 	spin_unlock(&class->lock);
 }
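
For completeness: a zsmalloc user (zram, for instance) would then read the
counter through the zs_pool_stats() helper this series introduces. A rough
caller-side sketch (hypothetical, field names as used in this series):

	struct zs_pool_stats stats;

	zs_pool_stats(pool, &stats);
	/* cast in case the stat field doesn't end up unsigned long */
	pr_info("zsmalloc: compaction freed %lu pages\n",
		(unsigned long)stats.num_migrated);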
