Re: [PATCH 2/2] zram: do not count duplicated pages as compressed

From: Sergey Senozhatsky
Date: Tue May 16 2017 - 01:45:39 EST


On (05/16/17 14:26), Minchan Kim wrote:
[..]
> > + /*
> > + * Free memory associated with this sector
> > + * before overwriting unused sectors.
> > + */
> > + zram_slot_lock(zram, index);
> > + zram_free_page(zram, index);
>
> Hmm, zram_free should happen only if the write is done successfully.
> Otherwise, we lose the valid data when the write IO fails.

but would this be correct? the data is not valid - we failed to store
the new data. yet we assure the application that the read()/swapin/etc.,
depending on the usage scenario, was successful, even though the data is
not what the application really expects to see. the application then tries
to use the data from that page and probably crashes (for example, the page
contained hash tables with pointers that are no longer valid, and so on).

I'm not optimistic about stale data reads; to the application they will
basically look like data corruption.
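
to illustrate what I mean, a userspace toy model of the two orderings
(this is NOT the actual zram write path, all names below are made up):

/* toy model, NOT zram code: a "slot" stands for a zram entry */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

struct slot {
	char data[16];
	bool used;
};

/* emulate a failing write (compression/allocation error, etc.) */
static int backend_write(struct slot *s, const char *buf)
{
	(void)s; (void)buf;
	return -1;
}

/* an unused slot reads back as zeroes */
static void slot_read(const struct slot *s, char *buf, size_t len)
{
	if (s->used)
		memcpy(buf, s->data, len);
	else
		memset(buf, 0, len);
}

int main(void)
{
	struct slot keep = { .data = "old-data", .used = true };
	struct slot freed = keep;
	char buf[16];

	/* ordering A: keep the old data if the write fails */
	backend_write(&keep, "new-data");
	slot_read(&keep, buf, sizeof(buf));
	printf("keep-on-failure: \"%s\"  <- stale, looks like corruption\n", buf);

	/* ordering B: free the slot first, then attempt the write */
	freed.used = false;
	backend_write(&freed, "new-data");
	slot_read(&freed, buf, sizeof(buf));
	printf("free-first:      \"%s\"  <- empty, at least not stale\n", buf);
	return 0;
}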

> > +
> > if (zram_same_page_write(zram, index, page))
> > - return 0;
> > + goto out_unlock;
> >
> > entry = zram_dedup_find(zram, page, &checksum);
> > if (entry) {
> > comp_len = entry->len;
> > + zram_set_flag(zram, index, ZRAM_DUP);
>
> In case of hitting dedup, we shouldn't increase compr_data_size.

no, we should not. you are right. my "... patch" is incomplete and
wrong. please don't pay too much attention to it.
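
just to spell out the point we agree on, a toy sketch of the accounting
(userspace, not the actual zram stats code; the dup_data_size counter is
purely illustrative):

/* toy model, NOT zram code: dedup hits reuse an existing entry, so
 * they should not be added to compr_data_size again */
#include <stdio.h>
#include <stdbool.h>

struct stats {
	unsigned long long compr_data_size;
	unsigned long long dup_data_size;	/* illustrative counter */
};

static void account_write(struct stats *st, unsigned int comp_len, bool dup_hit)
{
	if (dup_hit)
		st->dup_data_size += comp_len;	 /* no new backing memory */
	else
		st->compr_data_size += comp_len; /* only newly stored data */
}

int main(void)
{
	struct stats st = { 0, 0 };

	account_write(&st, 100, false);	/* regular compressed write */
	account_write(&st, 100, true);	/* dedup hit */

	printf("compr_data_size=%llu dup_data_size=%llu\n",
	       st.compr_data_size, st.dup_data_size);
	return 0;
}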


> If we fix the above two problems, do you think it's still cleaner?
> (I don't mean to be reluctant about your suggestion, just a
> real question to know your thoughts. :)

do you mean the code duplication and the stale data reads?

I'd probably prefer to address stale data reads separately,
but it seems that the stale reads fix will re-do parts of your
0002 patch and, at least potentially, reduce the code duplication.

so we can go with your 0002, and then the stale reads fix will try
to reduce the code duplication (unless we want to have 4 places doing
the same thing :) )

-ss