Re: use-after-free in __perf_install_in_context
From: Dmitry Vyukov
Date: Tue Dec 08 2015 - 12:56:49 EST
On Tue, Dec 8, 2015 at 6:54 PM, Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
> On Tue, Dec 08, 2015 at 05:12:04PM +0100, Dmitry Vyukov wrote:
>> On Tue, Dec 8, 2015 at 4:24 AM, Alexei Starovoitov
>> <alexei.starovoitov@xxxxxxxxx> wrote:
>> > On Mon, Dec 07, 2015 at 05:09:21PM +0100, Dmitry Vyukov wrote:
>> >> > So it would be _awesome_ if we could somehow extend this callchain to
>> >> > include the site that calls call_rcu().
>> >>
>> >> We have a patch for KASAN in the works that adds a so-called stack
>> >> depot, which allows mapping a stack trace onto a uint32 id. Then we
>> >> can plumb
>> >
>> > I was hacking on something similar to categorize stack traces with a u32 id.
>> > How are you planning to limit the number of such stack traces?
>> > And what is the interface for user space to get a stack trace from an id?
>>
>>
>> We don't limit the number of stack traces. The kernel does not seem to
>> use data-driven recursion extensively, so there is a limited number of
>> distinct stacks. Though we will probably need to strip the
>> non-interrupt part of interrupt stacks, otherwise they can produce an
>> unbounded number of different stacks.
>> There is no interface for user space; it is used only inside the
>> kernel to save stacks for memory blocks (rcu callbacks, and thread
>> pool items in the future).
>> The design is based on what we have successfully and extensively used
>> in the user-space sanitizers for years. The current code is here:
>> https://github.com/ramosian-glider/kasan/commit/fb0eefd212366401ed5ad244233ef379a27bfb46
>
> why did you pick the approach of never freeing accumulated stacks?
> That limits usability a lot, since once kasan starts using it, only a
> reboot will free the memory. ouch.
> what worked for user space doesn't work for the kernel.
Freeing and reusing entries would slow down and complicate the code
significantly, and it is not yet proven to be necessary.
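To illustrate the scheme under discussion: each stack trace (an array of
program counters) is hashed and looked up in a table of previously seen
traces; a match returns the existing u32 handle, otherwise a new entry is
appended and never freed, so the handle stays valid for the lifetime of
the system. Below is a minimal user-space sketch of that idea. The names
(depot_save, depot_fetch), the table sizes, and the lack of locking are
illustrative assumptions only, not taken from the actual KASAN patch.

/*
 * Minimal user-space sketch of a "stack depot": intern a stack trace
 * (an array of PC values) and hand back a small u32 handle.
 * Hypothetical names and sizes; entries are allocated once and never
 * freed, mirroring the design discussed above. A real in-kernel
 * version would also need locking or lock-free insertion.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define DEPOT_BUCKETS  4096          /* power of two for cheap masking */
#define DEPOT_MAX_IDS  (1u << 20)    /* arbitrary cap for this sketch */

struct depot_entry {
        struct depot_entry *next;    /* hash-bucket chain */
        uint32_t handle;             /* id handed out to callers */
        uint32_t nr_entries;         /* number of frames */
        uint64_t frames[];           /* the saved program counters */
};

static struct depot_entry *buckets[DEPOT_BUCKETS];
static struct depot_entry *by_handle[DEPOT_MAX_IDS]; /* handle -> entry */
static uint32_t next_handle = 1;     /* 0 is reserved for "no stack" */

/* FNV-1a over the raw frame array. */
static uint32_t hash_stack(const uint64_t *frames, uint32_t n)
{
        uint32_t h = 2166136261u;
        const unsigned char *p = (const unsigned char *)frames;
        size_t len = n * sizeof(*frames);

        while (len--) {
                h ^= *p++;
                h *= 16777619u;
        }
        return h;
}

/* Return an existing handle for this trace, or intern a new entry. */
static uint32_t depot_save(const uint64_t *frames, uint32_t n)
{
        uint32_t b = hash_stack(frames, n) & (DEPOT_BUCKETS - 1);
        struct depot_entry *e;

        for (e = buckets[b]; e; e = e->next)
                if (e->nr_entries == n &&
                    !memcmp(e->frames, frames, n * sizeof(*frames)))
                        return e->handle;       /* already interned */

        if (next_handle >= DEPOT_MAX_IDS)
                return 0;                       /* sketch: depot "full" */
        e = malloc(sizeof(*e) + n * sizeof(*frames));
        if (!e)
                return 0;                       /* out of memory: no id */
        e->handle = next_handle++;
        e->nr_entries = n;
        memcpy(e->frames, frames, n * sizeof(*frames));
        e->next = buckets[b];
        buckets[b] = e;
        by_handle[e->handle] = e;
        return e->handle;
}

/* Resolve a handle back to the saved frames (e.g. for report printing). */
static const struct depot_entry *depot_fetch(uint32_t handle)
{
        return handle ? by_handle[handle] : NULL;
}

int main(void)
{
        uint64_t trace[] = { 0xffffffff81123456ull, 0xffffffff81abcdefull };
        uint32_t id1 = depot_save(trace, 2);
        uint32_t id2 = depot_save(trace, 2);    /* same trace -> same id */

        printf("id1=%u id2=%u frames=%u\n",
               id1, id2, depot_fetch(id1)->nr_entries);
        return 0;
}

The point of the u32 handle is that it is cheap enough to store per
memory block (for example, alongside the allocation and free stacks, or
in an rcu callback), so a later use-after-free report can fetch and
print the full saved trace from the id.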