[PATCH v2 0/4] arch/x86: Optionally flush L1D on context switch

From: Balbir Singh
Date: Sun Apr 05 2020 - 23:20:13 EST


Provide a mechanism to flush the L1D cache on context switch. The goal
is to allow tasks that are paranoid, due to the recent snoop-assisted data
sampling vulnerabilities, to flush their L1D on being switched out.
This protects their data from being snooped or leaked via side channels
after the task has been context switched out.
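
For illustration only, here is a minimal userspace sketch of how a task
might opt in, assuming the interface is exposed as an arch_prctl() code.
The ARCH_SET_L1D_FLUSH name and value below are placeholders for this
sketch, not necessarily what patch 3 implements; see the added
Documentation for the actual interface:

	#include <stdio.h>
	#include <errno.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Placeholder arch_prctl code, assumed for this sketch only. */
	#ifndef ARCH_SET_L1D_FLUSH
	#define ARCH_SET_L1D_FLUSH 0x1028
	#endif

	int main(void)
	{
		/* Ask the kernel to flush L1D when this task is switched out. */
		if (syscall(SYS_arch_prctl, ARCH_SET_L1D_FLUSH, 1UL) != 0) {
			fprintf(stderr, "arch_prctl: %s\n", strerror(errno));
			return 1;
		}

		/* ... do work whose L1D footprint should not leak ... */
		return 0;
	}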

The core of the series is patch 3; the first two patches refactor the code
so that common bits can be reused.

Changelog v2:
- Fix a missing mutex_unlock() (caught by Borislav Petkov <bp@xxxxxxxxx>)
- Add documentation for the changes (Josh Poimboeuf <jpoimboe@xxxxxxxxxx>)

Changelog:
- Refactor the code and reuse cond_ibpb() - code bits provided by tglx
- Merge mm state tracking for ibpb and l1d flush
- Rename TIF_L1D_FLUSH to TIF_SPEC_FLUSH_L1D

Changelog RFC:
- Reuse existing code for allocation and flush
- Simplify the goto logic in the actual l1d_flush function
- Optimize the code path with jump labels/static functions

The previous version of this series was posted at:

https://lore.kernel.org/lkml/20200402062401.29856-1-sblbir@xxxxxxxxxx/

Balbir Singh (4):
arch/x86/kvm: Refactor l1d flush lifecycle management
arch/x86: Refactor tlbflush and l1d flush
arch/x86: Optionally flush L1D on context switch
arch/x86: Add L1D flushing Documentation

Documentation/admin-guide/hw-vuln/index.rst | 1 +
.../admin-guide/hw-vuln/l1d_flush.rst | 40 ++++++++
arch/x86/include/asm/cacheflush.h | 6 ++
arch/x86/include/asm/thread_info.h | 6 +-
arch/x86/include/asm/tlbflush.h | 2 +-
arch/x86/include/uapi/asm/prctl.h | 3 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/l1d_flush.c | 85 +++++++++++++++++
arch/x86/kernel/process_64.c | 10 +-
arch/x86/kvm/vmx/vmx.c | 56 ++---------
arch/x86/mm/tlb.c | 94 ++++++++++++++-----
11 files changed, 232 insertions(+), 72 deletions(-)
create mode 100644 Documentation/admin-guide/hw-vuln/l1d_flush.rst
create mode 100644 arch/x86/kernel/l1d_flush.c

--
2.17.1