Re: [RFC PATCH 0/9] livepatch: consistency model

From: Jiri Kosina
Date: Mon Feb 09 2015 - 18:15:39 EST


On Mon, 9 Feb 2015, Josh Poimboeuf wrote:

> This patch set implements a livepatch consistency model, targeted for 3.21.
> Now that we have a solid livepatch code base, this is the biggest remaining
> missing piece.

Hi Josh,

first, thanks a lot for putting this together. From a cursory look it
certainly seems to be a very solid base for future steps.

I am afraid I won't get to a proper review before the merge window
concludes, though. But after that, it moves to the top of my TODO list.

> This code stems from the design proposal made by Vojtech [1] in November. It
> makes live patching safer in general. Specifically, it allows you to apply
> patches which change function prototypes. It also lays the groundwork for
> future code changes which will enable data and data semantic changes.
>
> It's basically a hybrid of kpatch and kGraft, combining kpatch's backtrace
> checking with kGraft's per-task consistency. When patching, tasks are
> carefully transitioned from the old universe to the new universe. A task can
> only be switched to the new universe if it's not using a function that is to be
> patched or unpatched. After all tasks have moved to the new universe, the
> patching process is complete.
>
> How it transitions various tasks to the new universe:
>
> - The stacks of all sleeping tasks are checked. Each task that is not sleeping
> on a to-be-patched function is switched.
>
> - Other user tasks are handled by do_notify_resume() (see patch 9/9). If a
> task is I/O bound, it switches universes when returning from a system call.
> If it's CPU bound, it switches when returning from an interrupt.
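
(For reference, my reading of the per-task switching described above is
roughly the sketch below. All identifiers -- klp_universe, klp_universe_goal,
klp_stack_is_clean -- are purely illustrative, not necessarily what the
patches actually use.)

/* Illustrative sketch only -- not the actual patch code. */
#include <linux/sched.h>
#include <linux/stacktrace.h>

static int klp_universe_goal;	/* the universe we are transitioning to */

/* Hypothetical: true if no to-be-(un)patched function is on the task's stack. */
static bool klp_stack_is_clean(struct task_struct *task);

static void klp_try_switch_sleeping_tasks(void)
{
	struct task_struct *g, *task;

	read_lock(&tasklist_lock);
	for_each_process_thread(g, task) {
		/*
		 * A sleeping task can be switched right away if none of the
		 * functions being patched or unpatched appears in its
		 * backtrace; everyone else is caught later in
		 * do_notify_resume() on the way back to userspace.
		 */
		if (klp_stack_is_clean(task))
			task->klp_universe = klp_universe_goal;
	}
	read_unlock(&tasklist_lock);
}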

Just one rather minor comment on the handling of CPU-bound tasks -- we can
actually switch CPU-bound processes "immediately" when we notice they are
running in userspace (assuming that we also migrate them when they enter
the kernel ... which doesn't seem to be implemented by this patchset, but
could easily be added at low cost). A rough sketch of the idea follows.
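
Roughly what I have in mind, building on the sketch above (both helpers
here are hypothetical, and getting at the task currently running on a
remote CPU needs scheduler help that I'm glossing over):

#include <linux/cpumask.h>
#include <linux/sched.h>

/* Hypothetical: is the given CPU currently executing in userspace? */
static bool klp_cpu_in_userspace(int cpu);

/* Hypothetical: the task currently running on @cpu. */
static struct task_struct *klp_task_on_cpu(int cpu);

static void klp_try_switch_running_tasks(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		/*
		 * A task executing in userspace cannot be inside any kernel
		 * function we are patching, so it can be flipped to the new
		 * universe now.  This relies on the task also being migrated
		 * on kernel entry, as mentioned above, so the flip cannot
		 * race with it calling a to-be-patched function.
		 */
		if (klp_cpu_in_userspace(cpu))
			klp_task_on_cpu(cpu)->klp_universe = klp_universe_goal;
	}
}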

Relying on IRQs is problematic, because a CPU can be completely isolated
from both the scheduler and IRQs (that's what the realtime folks routinely
do), so you might not see an IRQ on that particular CPU for ages.

Detecting whether a given CPU is running in userspace (without interfering
with it too much by, say, sending a costly IPI) is rather tricky, though.
On kernels with CONFIG_CONTEXT_TRACKING we could make use of that feature,
but my gut feeling is that most people keep it disabled.
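
Something along these lines, perhaps (field and enum names quoted from
memory for the current context tracking code, so take them with a grain
of salt):

#include <linux/context_tracking_state.h>
#include <linux/percpu.h>

/*
 * Possible implementation of the hypothetical helper from the sketch
 * above: peek at the remote CPU's context tracking state.  The read is
 * racy by nature; it only tells us the CPU was in userspace at some
 * recent point.
 */
static bool klp_cpu_in_userspace(int cpu)
{
#ifdef CONFIG_CONTEXT_TRACKING
	if (!per_cpu(context_tracking.active, cpu))
		return false;

	return per_cpu(context_tracking.state, cpu) == IN_USER;
#else
	return false;	/* no cheap way to tell without an IPI */
#endif
}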

Another alternative is what we do in kGraft with
kgr_needs_lazy_migration(), but admittedly that is far from pretty.
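
For completeness, the kGraft check is roughly the following (a sketch from
memory, details may differ from the actual kGraft tree): if a task's kernel
stack is essentially empty, the task is executing in userspace and can be
migrated right away.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>

static bool kgr_needs_lazy_migration(struct task_struct *p)
{
	unsigned long entries[3];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= ARRAY_SIZE(entries),
		.entries	= entries,
		.skip		= 0,
	};

	/*
	 * Grab just a few frames of the task's kernel stack.  A task that is
	 * running in (or returning to) userspace has next to nothing on it,
	 * so it does not need lazy migration.
	 */
	save_stack_trace_tsk(p, &trace);

	return trace.nr_entries > 2;
}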

--
Jiri Kosina
SUSE Labs