Re: About 'hash' event trigger patchset

From: Tom Zanussi
Date: Wed Apr 02 2014 - 10:52:15 EST


Hi Namhyung,

On Wed, 2014-04-02 at 08:31 +0000, Namhyung Kim wrote:
> Hi Tom,
>
> (Sorry for replying in another thread, I have a problem with my mail settings)
>
> I've just read your hash event trigger series, and want to give some feedback.
>
> At first, the change log of 5/5 is actually a better documentation
> than in patch 4/5 so I think it should be added to the doc also. But

Good point, I'll do that (after trimming it down a bit).

> the syntax of the hash trigger should look like:
>
> - # echo hash:key(s):value(s)[:sort_keys()][ if filter] > event/trigger
> + # echo hash:key(s):value(s)[:"sort="sort_key(s)][ if filter] > event/trigger
>
> Also on first example in the changelog of 5/5, key should be
> 'stacktrace' instead of 'call_site'.
>

Yeah, I realized that just after posting - will fix.

> As far as I see in the code, the sort key can receive an optional
> descending/ascending modifier, but it's not documented.
>

I knew I was forgetting something ;-) Thanks for pointing it out.
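Putting the amended syntax and the descending modifier together, a usage
sketch might look like the following (the kmem:kmalloc event, its field
names, and the threshold are illustrative here, loosely echoing the
stacktrace example from the 5/5 changelog, not taken from the patchset):

```
# echo 'hash:stacktrace:bytes_req,bytes_alloc:sort=bytes_alloc.descending if bytes_req > 256' > \
      /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
```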

> One thing I noticed in the main logic is that it seems there's no
> limit checking when adding/creating new entry. In
> hash_trigger_entry_create(), there's a check against max_entries but
> if it goes beyond the max, it'd just access a NULL pointer AFAICS. Am
> I missing something? Also I don't know what the difference is between
> ->n_entries and ->total_entries (in hash_data).
>
> I guess you wanted to set ->drops in that case, but I cannot find

Yes, the code is missing a very important snippet, which I realized
after hitting the problem. My current code has this:

	if (hash_data->drops)
		return NULL;
	else if (hash_data->n_entries == hash_data->max_entries) {
		hash_data->drops = 1;
		return NULL;
	}

n_entries is the current number of entries used up, and max_entries is
the total number of available entries (a cached value to avoid
calculating it every time).

> where it gets set. And I'm not sure it's good to check ->drops first,
> since an event can find an existing entry and be merged into it even
> if it reached the max already.
>

The assumption is that if you have any drops at all, you probably want
to redo the test with a bigger table, but regardless the data reflects
the situation up to the point the drops started happening. Letting
events that already have an entry merge while rejecting those that don't
would skew the data you already have.

Thanks for taking a look and for your helpful comments,

Tom

> Thanks,
> Namhyung

