Re: [PATCH V3 19/30] x86/sgx: Free up EPC pages directly to support large page ranges
From: Jarkko Sakkinen
Date: Tue Apr 05 2022 - 03:10:12 EST
On Mon, Apr 04, 2022 at 09:49:27AM -0700, Reinette Chatre wrote:
> The page reclaimer ensures availability of EPC pages across all
> enclaves. In support of this it runs independently from the
> individual enclaves in order to take locks from the different
> enclaves as it writes pages to swap.
>
> When a page needs to be loaded from swap, a free EPC page must be
> available for its contents to be loaded into. Loading an existing
> enclave page from swap does not reclaim EPC pages directly if none
> are available; instead, the reclaimer is woken when the number of
> available EPC pages falls below a watermark.
>
> When iterating over a large number of pages in an oversubscribed
> environment there is a race between the reclaimer being woken up and
> EPC pages being reclaimed fast enough for the page operations to
> proceed.
>
> Ensure EPC pages are available before attempting to load a page
> that may need to be pulled from swap into a free EPC page.
>
> Signed-off-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
> ---
> No changes since V2
>
> Changes since v1:
> - Reword commit message.
>
> arch/x86/kernel/cpu/sgx/ioctl.c | 6 ++++++
> arch/x86/kernel/cpu/sgx/main.c | 6 ++++++
> arch/x86/kernel/cpu/sgx/sgx.h | 1 +
> 3 files changed, 13 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index 515e1961cc02..f88bc1236276 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -777,6 +777,8 @@ sgx_enclave_restrict_permissions(struct sgx_encl *encl,
> for (c = 0 ; c < modp->length; c += PAGE_SIZE) {
> addr = encl->base + modp->offset + c;
>
> + sgx_direct_reclaim();
> +
> mutex_lock(&encl->lock);
>
> entry = sgx_encl_load_page(encl, addr);
> @@ -934,6 +936,8 @@ static long sgx_enclave_modify_type(struct sgx_encl *encl,
> for (c = 0 ; c < modt->length; c += PAGE_SIZE) {
> addr = encl->base + modt->offset + c;
>
> + sgx_direct_reclaim();
> +
> mutex_lock(&encl->lock);
>
> entry = sgx_encl_load_page(encl, addr);
> @@ -1129,6 +1133,8 @@ static long sgx_encl_remove_pages(struct sgx_encl *encl,
> for (c = 0 ; c < params->length; c += PAGE_SIZE) {
> addr = encl->base + params->offset + c;
>
> + sgx_direct_reclaim();
> +
> mutex_lock(&encl->lock);
>
> entry = sgx_encl_load_page(encl, addr);
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 6e2cb7564080..545da16bb3ea 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -370,6 +370,12 @@ static bool sgx_should_reclaim(unsigned long watermark)
> !list_empty(&sgx_active_page_list);
> }
>
> +void sgx_direct_reclaim(void)
> +{
> + if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
> + sgx_reclaim_pages();
> +}
Please open code this in the call sites instead - there is not enough
redundancy to be worth a new function. It only causes unnecessary
cross-referencing when maintaining. Otherwise, I agree with the idea.
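
For illustration, the open-coded form at each call site would presumably
look something like this (a sketch reusing the body of sgx_direct_reclaim()
from the hunk above; it assumes sgx_should_reclaim() and sgx_reclaim_pages()
are made visible outside main.c):

	/*
	 * Reclaim EPC pages directly if free pages have fallen below
	 * the low watermark, so that the subsequent
	 * sgx_encl_load_page() can find a free EPC page to load into.
	 */
	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
		sgx_reclaim_pages();

	mutex_lock(&encl->lock);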
BR, Jarkko