Re: [PATCH v6] kernel: add kcov code coverage

From: Alexander Shishkin
Date: Thu Jan 28 2016 - 07:31:12 EST


Dmitry Vyukov <dvyukov@xxxxxxxxxx> writes:

> + fd = open("/sys/kernel/debug/kcov", O_RDWR);
> + if (fd == -1)
> +         perror("open"), exit(1);
> + /* Setup trace mode and trace size. */
> + if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
> +         perror("ioctl"), exit(1);
> + /* Mmap buffer shared between kernel- and user-space. */
> + cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
> +                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> + if ((void*)cover == MAP_FAILED)
> +         perror("mmap"), exit(1);
> + /* Enable coverage collection on the current thread. */
> + if (ioctl(fd, KCOV_ENABLE, 0))
> +         perror("ioctl"), exit(1);
> + /* Reset coverage from the tail of the ioctl() call. */
> + __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
> + /* That's the target syscall call. */
> + read(-1, NULL, 0);
> + /* Read number of PCs collected. */
> + n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
> + for (i = 0; i < n; i++)
> +         printf("0x%lx\n", cover[i + 1]);
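
The quoted hunk stops before teardown; I assume the full example in the patch
also disables collection and releases the buffer along these lines (this is my
reading of the interface, not a quote from the patch):

        if (ioctl(fd, KCOV_DISABLE, 0))
                perror("ioctl"), exit(1);
        if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
                perror("munmap"), exit(1);
        if (close(fd))
                perror("close"), exit(1);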

Kirill is right: this does look a lot like a candidate for a perf PMU. Most of
the legwork you do in this patch is already taken care of by perf, afaict:
enabling/disabling, context tracking, and a ring buffer for exporting data to
userspace. You'll also get other things, such as privilege separation, for
free.
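
To make the comparison concrete, here is a rough, untested sketch of what the
userspace side looks like with perf. I'm using a plain hardware event only to
show the open/mmap/enable/disable plumbing perf already provides, not the
coverage-specific parts:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr attr;
        struct perf_event_mmap_page *mp;
        long page = sysconf(_SC_PAGESIZE);
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.sample_period = 100000;            /* sample every 100k instructions */
        attr.sample_type = PERF_SAMPLE_IP;      /* record instruction pointers */
        attr.exclude_user = 1;                  /* kernel side only */
        attr.disabled = 1;                      /* start disabled */

        /* Per-thread event for the calling thread (pid 0), any cpu. */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd == -1)
                perror("perf_event_open"), exit(1);

        /* perf supplies the shared ring buffer: 1 metadata page + 2^n data pages. */
        mp = mmap(NULL, (1 + 8) * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mp == MAP_FAILED)
                perror("mmap"), exit(1);

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);    /* cf. KCOV_ENABLE */
        read(-1, NULL, 0);                      /* the target syscall */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        /* Sample records (if any) sit in the data pages behind the metadata page. */
        printf("%llu bytes of records in the ring buffer\n",
               (unsigned long long)mp->data_head);
        return 0;
}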

Moreover, this can already be achieved by means of hardware-assisted
instruction tracing such as BTS or PT on Intel CPUs (BTS will literally output
instruction pointer addresses into a ring buffer), and ARM Coresight ETM/PTM
support is also on its way. That's not to say that this work isn't useful (it
has to be, or who will debug the debugger?), but rather to make a case for a
perf-based implementation.

Regards,
--
Alex