On Tue, Aug 30, 2016 at 12:37 PM, Chris Metcalf <cmetcalf@xxxxxxxxxxxx> wrote:
> On 8/30/2016 2:43 PM, Andy Lutomirski wrote:
>> What if we did it the other way around: set a percpu flag saying
>> "going quiescent; disallow new deferred work", then finish all
>> existing work and return to userspace. Then, on the next entry, clear
>> that flag. With the flag set, vmstat would just flush anything that
>> it accumulates immediately, nothing would be added to the LRU list,
>> etc.
>
> This is an interesting idea!
>
> However, there are a number of implementation details that make me
> worry that it might be a trickier approach overall.
>
> First, "on the next entry" hides a world of hurt in four simple words.
> Some platforms (arm64 and tile, that I'm familiar with) have a common
> chunk of code that always runs on every entry to the kernel. It would
> not be too hard to poke at the assembly and make those platforms
> always run some task-isolation specific code on entry. But x86 scares
> me - there seem to be a whole lot of ways to get into the kernel, and
> I'm not convinced there is a lot of shared macrology or whatever that
> would make it straightforward to intercept all of them.

Just use the context tracking entry hook. It's 100% reliable. The
relevant x86 function is enter_from_user_mode(), but I would just hook
into user_exit() in the common code. (This code had better be
reliable, because context tracking depends on it, and, if context
tracking doesn't work on a given arch, then isolation isn't going to
work regardless.)

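Concretely, something like the sketch below is what I'm picturing. The
flag and helper names are made up for illustration (they aren't from
any posted patch); the point is just that the "going quiescent" flag
from your proposal could be cleared in the common context-tracking
entry path instead of in per-arch assembly.

#include <linux/percpu.h>

/* Hypothetical per-cpu flag: "this CPU is trying to go quiescent". */
static DEFINE_PER_CPU(bool, quiescence_requested);

/* Set on the way out to userspace, after deferred work has been flushed. */
static inline void quiescence_start(void)
{
	this_cpu_write(quiescence_requested, true);
}

/*
 * Called from the common entry path -- e.g. from user_exit() /
 * enter_from_user_mode() -- so it runs on every kernel entry,
 * whatever the entry reason was.
 */
static inline void quiescence_end(void)
{
	if (this_cpu_read(quiescence_requested))
		this_cpu_write(quiescence_requested, false);
	/* deferred work (vmstat batching, LRU pagevecs, ...) is allowed again */
}
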
> But it does seem like we are adding noticeable maintenance cost on
> the mainline kernel to support task isolation by doing this. My guess
> is that it is easier to support the kind of "are you clean?" / "get clean"
> APIs for subsystems, rather than weaving a whole set of "stay clean"
> mechanisms into each subsystem.

My intuition is that it's the other way around. For the mainline
kernel, having a nice, clean, well-integrated implementation is nicer
than having a bolted-on implementation that interacts in potentially
complicated ways. Once quiescence support is in mainline, the size of
the diff or the degree to which it's scattered around is irrelevant
because it's not a diff any more.

> So to pop up a level, what is your actual concern about the existing
> "do it in a loop" model? The macrology currently in use means there
> is zero cost if you don't configure TASK_ISOLATION, and the software
> maintenance cost seems low since the idioms used for task isolation
> in the loop are generally familiar to people reading that code.

My concern is that it's not obvious to readers of the code that the
loop ever terminates. It really ought to, but it's doing something
very odd. Normally we can loop because we get scheduled out, but
actually blocking in the return-to-userspace path, especially blocking
on a condition that doesn't have a wakeup associated with it, is odd.

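For concreteness, the shape I'm reacting to is roughly the following
(the helper names are invented stand-ins, not necessarily what the
patches actually call things):

#include <linux/sched.h>

/* Stand-ins for the real per-subsystem "are you clean?" / "get clean" calls. */
extern bool deferred_work_pending(void);	/* vmstat dirty? LRU pagevecs queued? tick not stopped yet? */
extern void flush_deferred_work(void);		/* quiet_vmstat(), lru_add_drain(), ... */

/* Run in the return-to-userspace path for a task-isolation task. */
static void task_isolation_wait_clean(void)
{
	while (deferred_work_pending()) {
		flush_deferred_work();
		cond_resched();	/* may amount to waiting out the last dyntick */
	}
	/* only now does the task actually return to userspace */
}

There is no wakeup tied to deferred_work_pending() going false, which
is why the termination argument has to live in reviewers' heads rather
than in the code.
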
>> Also, this cond_resched stuff doesn't worry me too much at a
>> fundamental level -- if we're really going quiescent, shouldn't we be
>> able to arrange that there are no other schedulable tasks on the CPU
>> in question?
>
> We aren't currently planning to enforce things in the scheduler, so if
> the application affinitizes another task on top of an existing task
> isolation task, by default the task isolation task just dies. (Unless
> it's using NOSIG mode, in which case it just ends up stuck in the
> kernel trying to wait out the dyntick until you either kill it, or
> re-affinitize the offending task.) But I'm reluctant to guarantee
> every possible way that you might (perhaps briefly) have some
> schedulable task, and the current approach seems pretty robust if that
> sort of thing happens.

This kind of waiting out the dyntick scares me. Why is there ever a
dyntick that you're waiting out? If quiescence is to be a supported
mainline feature, shouldn't the scheduler be integrated well enough
with it that you don't need to wait like this?

Have you confirmed that this works correctly wrt PTRACE_SYSCALL? It
should result in an even number of events (like raise(2) or an async
signal) and that should have a test case.

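A rough starting point for that test is sketched below -- an ordinary
PTRACE_SYSCALL harness that counts syscall stops in a bracketed window
and checks that the count is even. Here getpid() stands in for
whatever syscall actually enters isolation in the real test.

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);		/* let the parent set options */
		getpid();		/* the syscall under test */
		raise(SIGSTOP);		/* marks the end of the test window */
		_exit(0);
	}

	int status, stops = 0;

	waitpid(child, &status, 0);	/* initial SIGSTOP */
	ptrace(PTRACE_SETOPTIONS, child, 0, PTRACE_O_TRACESYSGOOD);

	for (;;) {
		ptrace(PTRACE_SYSCALL, child, 0, 0);
		waitpid(child, &status, 0);
		if (WIFEXITED(status))
			break;
		if (WIFSTOPPED(status) && WSTOPSIG(status) == (SIGTRAP | 0x80))
			stops++;	/* syscall entry or exit stop */
		else if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGSTOP)
			break;		/* end of the window: stop counting */
	}

	printf("syscall stops in window: %d (%s)\n", stops,
	       stops % 2 == 0 ? "even, as expected" : "ODD");

	kill(child, SIGKILL);
	return stops % 2 == 0 ? 0 : 1;
}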