Re: [PATCH] x86/sgx: fix a NULL pointer

From: Haitao Huang
Date: Tue Jul 18 2023 - 16:32:16 EST


On Tue, 18 Jul 2023 13:53:47 -0500, Dave Hansen <dave.hansen@xxxxxxxxx> wrote:

On 7/18/23 11:11, Haitao Huang wrote:
On Tue, 18 Jul 2023 09:27:49 -0500, Dave Hansen <dave.hansen@xxxxxxxxx>
wrote:

On 7/17/23 13:29, Haitao Huang wrote:
Under heavy load, the SGX EPC reclaimers (current ksgxd or future EPC
cgroup worker) may reclaim the SECS EPC page for an enclave and set
encl->secs.epc_page to NULL. But the SECS EPC page is used for EAUG in
the SGX #PF handler without checking for NULL and reloading.

Fix this by checking if SECS is loaded before EAUG and load it if it was
reclaimed.

It would be nice to see a _bit_ more theory of the bug in here.

What is an SECS page and why is it special in a reclaim context? Why is
this so hard to hit? What led you to discover this issue now? What is
EAUG?

Let me know if this clarifies things.

The SECS page holds global state of an enclave, and all reclaimable
pages tracked by the SGX EPC reclaimer (ksgxd) are considered 'child'
pages of the SECS page corresponding to that enclave. The reclaimer
only reclaims the SECS page once all of its children have been
reclaimed. That can happen on a system under high EPC pressure, where
multiple large enclaves demand much more EPC than is physically
available. In a rare case, the reclaimer may reclaim all EPC pages of
an enclave and its SECS page, setting encl->secs.epc_page to NULL,
right before the #PF handler gets the chance to handle a #PF for that
enclave. In that case, if the #PF happens to require the kernel to
invoke the EAUG instruction to add a new EPC page to the enclave, a
NULL pointer dereference results, because the current code does not
check whether encl->secs.epc_page is NULL before using it.

Better, but that's *REALLY* verbose and really imprecise. It doesn't
_require_ "high pressure". It could literally happen at very, very low
pressures over a long period of time.

I don't quite get this part. In a low-pressure scenario, the reclaimer never needs to reclaim all children of a SECS page. So it would not reclaim the SECS page no matter how long you run?

Ignore VA pages for now. Say we have a system with a 10-page EPC and 2 enclaves, each needing 5 non-SECS pages, so the total demand would be 12 pages. ksgxd would need to swap out at most 2 pages to have one enclave fully loaded with 6 pages and the other with 4 pages. There is no chance ksgxd would swap out either of the two SECS pages.

We would need at least one enclave A of 10 pages total to squeeze out the other enclave B completely. For that to happen, B pretty much has to be sleeping all the time, so that LRU-based reclaiming hits its pages but not those of A. So there is still no chance to hit a #PF on pages of B.

So some minimal pressure is needed to ensure the SECS page gets swapped out. The higher the pressure, the higher the chance of hitting a #PF while the SECS page is swapped out.

Please stick to the facts and
it'll actually simplify the description.

The SECS page holds global enclave metadata. It can only be
reclaimed when there are no other enclave pages remaining. At
that point, virtually nothing can be done with the enclave until
the SECS page is paged back in.

An enclave cannot run nor generate page faults without a
resident SECS page. But it is still possible for a #PF for a
non-SECS page to race with paging out the SECS page.

Hitting this bug requires triggering that race.

Thanks for the suggestion. I agree on those.

The bug is easier to reproduce with the EPC cgroup implementation when a
low EPC limit is set for a group of enclave-hosting processes. Without
the EPC cgroup, it is hard to get the reclaimer to reclaim all child
pages of a SECS page, and it would also require a machine with large
RAM relative to EPC so that the OOM killer is not triggered before this
happens.

Isn't this the _normal_ case? EPC is relatively tiny compared to RAM
normally.

I don't know what is perceived as normal here. But for this to happen, the backing store must be able to hold content much bigger than the EPC, if my reasoning above about the pressure required is correct. I tried 70 concurrent SGX selftest instances on a server with 4G of EPC, 512G of RAM and no disk swap, and hit the OOM killer first. Each selftest instance demands about 8G of EPC.

Thanks
Haitao