Re: [PATCH] x86/fpu: Introduce the x86_task_fpu() helper method

From: Brian Gerst
Date: Thu Jun 06 2024 - 11:36:21 EST


On Thu, Jun 6, 2024 at 5:06 AM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
>
> * Brian Gerst <brgerst@xxxxxxxxx> wrote:
>
> > > 17 files changed, 104 insertions(+), 107 deletions(-)
> >
> > This series would be better if you added the x86_task_fpu() helper in
> > an initial patch without any other changes. That would make the
> > actual changes more visible with less code churn.
>
> Makes sense - I've split out the patch below and adjusted the rest of the
> series. Is this what you had in mind?
>
> Note that I also robustified the macro a bit:
>
> -# define x86_task_fpu(task) ((struct fpu *)((void *)task + sizeof(*task)))
> +# define x86_task_fpu(task) ((struct fpu *)((void *)(task) + sizeof(*(task))))
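
(Side note: the extra parentheses matter if the argument is ever an
expression rather than a plain pointer variable. A purely hypothetical
caller such as:

	struct fpu *f = x86_task_fpu(p ? p : &init_task);

would, with the unparenthesized version, expand to

	((struct fpu *)((void *)p ? p : &init_task + sizeof(*p ? p : &init_task)))

where the (void *) cast and the dereference under the sizeof bind only
to 'p' rather than to the whole conditional expression, while the
(task) form groups the argument correctly.)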
>
> Thanks,
>
> Ingo
>
> ========================>
> From: Ingo Molnar <mingo@xxxxxxxxxx>
> Date: Thu, 6 Jun 2024 11:01:14 +0200
> Subject: [PATCH] x86/fpu: Introduce the x86_task_fpu() helper method
>
> The per-task FPU context/save area is allocated right
> next to the task_struct - introduce the x86_task_fpu()
> helper that calculates this explicitly from the
> task pointer.
>
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
> ---
> arch/x86/include/asm/processor.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index 920b0beebd11..fb6f030f0692 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -507,6 +507,8 @@ struct thread_struct {
> struct fpu *fpu;
> };
>
> +#define x86_task_fpu(task) ((struct fpu *)((void *)(task) + sizeof(*(task))))
> +
> /*
> * X86 doesn't need any embedded-FPU-struct quirks:
> */

Since this should be the first patch in the series, it would be:

#define x86_task_fpu(task) (&(task)->thread.fpu)

along with converting the existing task->thread.fpu accesses to the
helper in the same patch, with no other functional changes. Then you
could later change how the
fpu struct is allocated without touching every access site again.
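
For example, a conversion at a typical access site would then just be
(an illustrative sketch, not an actual hunk from the series; "tsk"
stands for any struct task_struct pointer):

	/* before: open-coded access to the FPU state embedded in the task */
	struct fpu *fpu = &tsk->thread.fpu;

	/* after: same pointer via the helper; when the allocation changes
	 * later, only the x86_task_fpu() definition has to follow */
	struct fpu *fpu = x86_task_fpu(tsk);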


Brian Gerst