Re: [PATCHv5 4/8] zswap: add to mm/
From: Seth Jennings
Date: Mon Feb 18 2013 - 14:24:50 EST
On 02/15/2013 10:04 PM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
<snip>
>> + * The statistics below are not protected from concurrent access for
>> + * performance reasons so they may not be a 100% accurate. However,
>> + * the do provide useful information on roughly how many times a
>
> s/the/they
Ah yes, thanks :)
>
>> + * certain event is occurring.
>> +*/
>> +static u64 zswap_pool_limit_hit;
>> +static u64 zswap_reject_compress_poor;
>> +static u64 zswap_reject_zsmalloc_fail;
>> +static u64 zswap_reject_kmemcache_fail;
>> +static u64 zswap_duplicate_entry;
>> +
>> +/*********************************
>> +* tunables
>> +**********************************/
>> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
>> +static bool zswap_enabled;
>> +module_param_named(enabled, zswap_enabled, bool, 0);
>
> please document in Documentation/kernel-parameters.txt.
Will do.
>
>> +
>> +/* Compressor to be used by zswap (fixed at boot for now) */
>> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
>> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
>> +module_param_named(compressor, zswap_compressor, charp, 0);
>
> ditto
ditto
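Roughly something like this for kernel-parameters.txt (just a sketch,
exact wording still to be worked out):

	zswap.enabled=	[KNL] Enable the compressed swap cache.
			Format: <bool>
			Default: 0 (disabled)

	zswap.compressor=
			[KNL] Compressor to be used by zswap.
			Format: <string>
			Default: "lzo"

Since both are module_param()s on zswap, they appear on the boot
command line with the zswap. prefix, e.g. zswap.enabled=1.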
>
>> +
<snip>
>> +/* invalidates all pages for the given swap type */
>> +static void zswap_frontswap_invalidate_area(unsigned type)
>> +{
>> + struct zswap_tree *tree = zswap_trees[type];
>> + struct rb_node *node, *next;
>> + struct zswap_entry *entry;
>> +
>> + if (!tree)
>> + return;
>> +
>> + /* walk the tree and free everything */
>> + spin_lock(&tree->lock);
>> + node = rb_first(&tree->rbroot);
>> + while (node) {
>> + entry = rb_entry(node, struct zswap_entry, rbnode);
>> + zs_free(tree->pool, entry->handle);
>> + next = rb_next(node);
>> + zswap_entry_cache_free(entry);
>> + node = next;
>> + }
>> + tree->rbroot = RB_ROOT;
>
> Why don't we need rb_erase() for every node?
We are freeing the entire tree here. try_to_unuse() in the swapoff
syscall should have already emptied the tree, but this is here for
completeness.
rb_erase() does extra work, like rebalancing the tree, which is just
wasted time since we are freeing the whole tree anyway. We hold the
tree lock here, so we can be sure no one else accesses the tree while
it is in this transient, broken state.
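For comparison, the per-node rb_erase() variant would look roughly
like this (just a sketch, same locals as in the function above):

	/*
	 * Erase each node before freeing it.  rb_erase() rebalances
	 * the tree on every iteration, which buys us nothing since
	 * the whole tree is going away.
	 */
	while ((node = rb_first(&tree->rbroot))) {
		entry = rb_entry(node, struct zswap_entry, rbnode);
		rb_erase(node, &tree->rbroot);
		zs_free(tree->pool, entry->handle);
		zswap_entry_cache_free(entry);
	}

The rb_first()/rb_next() walk avoids all that rebalancing and just
resets the root to RB_ROOT at the end, while the lock is still held.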
Thanks,
Seth