Re: [ANNOUNCE] kmemcheck v7

From: Vegard Nossum
Date: Sat May 10 2008 - 08:02:33 EST


Hi!

On Sat, May 10, 2008 at 1:04 PM, Bart Van Assche
<bart.vanassche@xxxxxxxxx> wrote:
>> Bart Van Assche wrote:
>>>
>>> It's a bit late but I finally found out about your announcement of
>>> kmemcheck version 7. Are you familiar with the patch that adds support
>>> to Valgrind for User Mode Linux ? I'm not sure what the best approach
>>> is -- letting the kernel do its own checking like kmemcheck or extend
>>> Valgrind such that it supports UML. Anyway, the techniques applied in
>>> Valgrind may be useful for kmemcheck too, such as the algorithms used
>>> in Valgrind to compress the memory state information.

Yes, I learned of it fairly recently, around January or so. I wanted
to stop kmemcheck development back then, but Ingo and Pekka convinced
me that it could still be useful :-)

(The link is http://bitwagon.com/valgrind+uml/index.html)

I guess the main disadvantages of kmemcheck compared to valgrind-memcheck are:
- kmemcheck can only warn eagerly, whereas memcheck will wait until
the uninitialized bits are actually used. This means that kmemcheck
will report many false positives. (We have some workarounds, but this
is obviously not perfect. See the sketch just below this list.)
- kmemcheck can only warn for dynamic memory, whereas memcheck, I
believe, will also work for local variables, static variables, etc.
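
To make the eager-warning point concrete, here is a hypothetical
illustration (not taken from an actual kmemcheck report; the struct,
function and values are made up): copying a struct whose padding bytes
were never written reads those bytes, so an eager checker can warn
even though the program never uses their values.

#include <linux/slab.h>

struct example {
	char c;		/* one byte of payload ... */
			/* ... typically followed by padding before the int */
	int i;
};

static void padding_demo(void)
{
	struct example *src = kmalloc(sizeof(*src), GFP_KERNEL);
	struct example copy;

	if (!src)
		return;

	src->c = 'x';
	src->i = 42;

	/*
	 * The padding bytes in *src are still uninitialized.  A struct
	 * assignment (or memcpy) reads all sizeof(*src) bytes, so an
	 * eager checker can report the uninitialized padding here,
	 * while a propagation-based checker like memcheck would stay
	 * silent unless those bits later reached a branch or output.
	 */
	copy = *src;
	(void)copy;

	kfree(src);
}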

It would be interesting to compare the output of kmemcheck vs. the
output of memcheck, though.

> On Sat, May 10, 2008 at 11:06 AM, Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
>> It's better to do it with the native kernel so you can "valgrind" all the
>> interesting driver code.
>
> That's right. This is the paper I was referring to that details how to
> minimize the memory consumption when tracking state information:
> http://www.valgrind.org/docs/shadow-memory2007.pdf

Thanks. I have actually seen the paper before, but not read all of it.
From a quick glance, it seems that the optimizations described there
apply to the tracking of individual bits within a byte; since we track
at byte granularity (as opposed to bit granularity), they seem largely
irrelevant to kmemcheck. (I am not saying that the paper isn't
interesting, however.)

Currently, we use a full shadow byte for each tracked byte. Since we
actually only use two bits out of the eight, we could cut the shadow
memory to a quarter of what we use today; a rough sketch of such a
packing follows.
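
This is not kmemcheck code, just an illustration of the packed
encoding I have in mind; the state names are invented for the example:

#include <linux/types.h>

/* Two bits per tracked byte, four tracked bytes per shadow byte. */
enum shadow_state {
	SHADOW_UNALLOCATED	= 0,
	SHADOW_UNINITIALIZED	= 1,
	SHADOW_INITIALIZED	= 2,
	SHADOW_FREED		= 3,
};

static inline enum shadow_state shadow_get(const u8 *shadow,
					    unsigned long idx)
{
	return (shadow[idx / 4] >> (2 * (idx % 4))) & 0x3;
}

static inline void shadow_set(u8 *shadow, unsigned long idx,
			      enum shadow_state state)
{
	unsigned int shift = 2 * (idx % 4);

	shadow[idx / 4] &= ~(0x3 << shift);
	shadow[idx / 4] |= (u8)state << shift;
}

Note the extra shift/mask work on every access, which is exactly the
CPU-cycle cost I mention below.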

However, memory usage doesn't seem to be much of a problem. I actually
think it might be worth spending the memory to save the CPU cycles
needed for the extra lookups/bit operations (memory is cheap, cycles
aren't). How fast is Valgrind+UML? Does anybody know? Isn't there a
problem that Valgrind will have to emulate all the userspace programs
as well? That, I believe, would make the Valgrinded system painfully
slow to work with. I have no benchmarks or profiler results to refer
to, but kmemcheck at least boots to full userspace+X and remains quite
usable.


Vegard

--
"The animistic metaphor of the bug that maliciously sneaked in while
the programmer was not looking is intellectually dishonest as it
disguises that the error is the programmer's own creation."
-- E. W. Dijkstra, EWD1036