Re: [PATCH 3/3] zram: add mem_used_max via sysfs

From: Dan Streetman
Date: Thu Aug 14 2014 - 15:12:13 EST


On Thu, Aug 14, 2014 at 12:23 PM, David Horner <ds2horner@xxxxxxxxx> wrote:
> On Thu, Aug 14, 2014 at 11:32 AM, David Horner <ds2horner@xxxxxxxxx> wrote:
>> On Thu, Aug 14, 2014 at 11:09 AM, Dan Streetman <ddstreet@xxxxxxxx> wrote:
>>> On Wed, Aug 13, 2014 at 9:12 PM, Minchan Kim <minchan@xxxxxxxxxx> wrote:
>>>> - if (zram->limit_bytes &&
>>>> - zs_get_total_size_bytes(meta->mem_pool) > zram->limit_bytes) {
>>>> + total_bytes = zs_get_total_size_bytes(meta->mem_pool);
>>>> + if (zram->limit_bytes && total_bytes > zram->limit_bytes) {
>>>
>>> Do you need to take the init_lock to read limit_bytes here? It could
>>> be getting changed between these checks...
>>
>> There is no real danger in freeing the allocation and returning an error.
>> It is more a matter of timing than a race.
> I probably should explain my reasoning.
>
> Any changes between getting the total value and the limit test are not
> problematic (from a race perspective).
>
> 1) If the actual total increases and the value returned understates it, then
> a) if this.total exceeds the limit - no problem, it is rolled back just as
> it would be if the actual total were used.
> b) if this.total <= limit - OK, as the other process will be dinged (it
> will see its own allocation)
>
> 2) If the actual total decreases and the value returned overstates it, then
> a) if this.value <= limit, the allocation is fine (the actual total has
> even more room)
> b) if this.value > limit, it will be rolled back (as the other might be
> as well) and the process can compete again.

Actually, I wasn't thinking of total_bytes changing; I think it's OK to
check the total at that specific point in time, for the reasons you
point out above.

I was thinking about zram->limit_bytes changing, especially if it's
possible to disable the limit (i.e. set it to 0), e.g.:

assume total_bytes is currently 1G and limit_bytes is 2G, so there is
no immediate danger of going over the limit. Then:


thread 1 : if (zram->limit_bytes ...this is true

thread 2 : zram->limit_bytes = limit; ...where limit == 0

thread 1 : && total_bytes > zram->limit_bytes) { ...this is now also true

thread 1 : incorrectly returns an -ENOMEM failure

It's very unlikely, and a single spurious failure isn't a big deal here
since the caller must be prepared to handle failures anyway. And of
course the compiler might reorder those checks. If it's not possible to
disable the limit by setting it to 0 (short of a complete reset of the
zram device, which wouldn't happen while this function is running),
then there's no issue here, although I think being able to disable the
limit without having to reset the zram device is useful.
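
One way to sidestep that, if the limit can be cleared at runtime, is to
read the limit once into a local so both tests see the same value. A
rough sketch against the hunk quoted above (the ACCESS_ONCE() and the
"out" label are just illustrative, not taken from the patch):

        u64 limit = ACCESS_ONCE(zram->limit_bytes);

        total_bytes = zs_get_total_size_bytes(meta->mem_pool);
        /* both tests use the same snapshot, so a concurrent write to
         * zram->limit_bytes can't make the second test disagree with
         * the first */
        if (limit && total_bytes > limit) {
                /* roll back the allocation and fail, as in the patch */
                ret = -ENOMEM;
                goto out;
        }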


Also, for setting max_used_bytes, isn't a non-atomic store of a u64
value dangerous on 32-bit systems when it's not synchronized between
threads?

That is, unless the entire zram_bvec_write() function or this section
is already thread-safe and I missed it (which I may have :-)
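
If it does need to be lock-free, an atomic64_t plus a cmpxchg loop
would avoid a torn 64-bit store on 32-bit arches. A rough sketch (the
helper name and the max_used field are hypothetical, not from this
patch):

        /* lock-free update of a running maximum */
        static void update_max_used(atomic64_t *max_used, u64 cur)
        {
                u64 old = atomic64_read(max_used);

                while (cur > old) {
                        u64 prev = atomic64_cmpxchg(max_used, old, cur);

                        if (prev == old)
                                break;  /* we installed the new max */
                        old = prev;     /* lost the race; re-check against the winner */
                }
        }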


>
> Is a denial of service possible if 2.b repeats indefinitely? Yes, but
> how would you set that up reliably? And it is no different from a
> single user exhausting the limit before any other users. Yes, it is
> potentially a live false limit exhaustion, with one process repeatedly
> requesting an amount that exceeds the limit but could have been
> allocated. But this is no worse than the rollback load we already have
> at the limit.
>
> It would be better to check before the zs_malloc() if the concern is
> avoiding heavy processing in that function (as an optimization), as
> well as here, after the allocation.
>
> But I see no real race or danger doing this unlocked.