On Tue, Aug 17, 2021 at 05:00:55PM +0200, David Hildenbrand wrote:
> Not sure if already discussed, but what about making sure that free pages
> are not a mixture (partially unaccepted, partially accepted).
> You'd have to expose the pages in that granularity to the buddy
> (__free_pages_core), indicating the state. You'd have to reject merging
> pages of differing acceptance state.
> Accepting a page would then be handled outside of the zone lock, completely
> controlled by the state.
> So a page in the buddy would either be completely accepted or completely
> unaccepted, signaled e.g., by PageOffline().
> Consequently, when allocating a 4KiB page, you'd split an unaccepted 2MiB
> page into separate unaccepted pages. You'd grab one of the unaccepted 4KiB
> pages and accept it before initializing it and handing it out.
Yes, that is the alternative to over-accepting memory on allocation. But
the problem here is that accepting/validating memory is an expensive
operation which also requires a hypercall. The hypercalls on SNP and TDX
can accept/validate multiple pages in one call, so the recommendation is
to accept memory in bigger chunks, such as the 2MB chunks that have been
proposed.
Accepting memory only at allocation granularity might be too slow, as
there is a lot of code doing order-0 allocations. I think this approach
would also be more intrusive to the page allocator, as it needs more
changes on the free path to check acceptance state before pages can be
merged.