Re: [PATCH 1/3] coredump: flush the fpu exit state for proper multi-threaded core dump
From: Oleg Nesterov
Date: Thu May 10 2012 - 12:56:47 EST
On 05/09, Suresh Siddha wrote:
> On Wed, 2012-05-09 at 23:05 +0200, Oleg Nesterov wrote:
> > On 05/08, Suresh Siddha wrote:
> > >
> > > --- a/kernel/exit.c
> > > +++ b/kernel/exit.c
> > > @@ -656,6 +656,11 @@ static void exit_mm(struct task_struct * tsk)
> > > struct core_thread self;
> > > up_read(&mm->mmap_sem);
> > >
> > > + /*
> > > + * Flush the live extended register state to memory.
> > > + */
> > > + prepare_to_copy(tsk);
> > This doesn't look very nice imho, but I guess you understand this...
> > Perhaps we need an arch-dependent helper which saves the FPU regs
> > if needed.
> > I could easily be wrong, but I did a quick grep and I am not sure
> > we can rely on prepare_to_copy(). For example, it is a nop in
> > arch/sh/include/asm/processor_64.h. But at the same time that arch
> > has save_fpu().
> > OTOH, I am not sure it is safe to use prepare_to_copy() in exit_mm(),
> > at least in theory. God knows what it can do...
> There is an explicit schedule() just a few lines below, and the
> schedule() will also do the same thing. The point is that we want the
> user's extended register state to be flushed to memory (the same flush
> is used in the fork path) before we notify the core-dumping thread
> that we have reached the serialization point, so that the dumping
> thread can continue the dump.
My point was, there is no guarantee that prepare_to_copy() does the flush.
An architecture can do this in copy_thread() or arch_dup_task_struct(),
for example. In fact I do not understand why x86 doesn't do this.
prepare_to_copy() doesn't have any documented semantics; it looks "strange".
But let me repeat, I do not see a better solution for now.
Maybe we could add wait_task_inactive() in fill_thread_core_info(), though.
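For concreteness, that alternative might look roughly like the fragment below. This is a hypothetical, untested sketch, not a patch: fill_thread_core_info() and wait_task_inactive() are real kernel symbols, but this placement and the surrounding details are purely illustrative.

```c
/* HYPOTHETICAL sketch -- not compiled or tested */
static int fill_thread_core_info(struct elf_thread_core_info *t, ...)
{
	/*
	 * Make sure the target thread is fully off the CPU, so that
	 * its live register state has already been saved into its
	 * task_struct, before the regset ->get() methods read it.
	 */
	wait_task_inactive(t->task, 0);
	/* ... existing regset iteration ... */
}
```

This would move the serialization burden from the exiting threads to the dumping thread, at the cost of an extra wait per thread in the dump path.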