Re: [RFC 02/14] x86/apic: Initialize Secure AVIC APIC backing page
From: Dave Hansen
Date: Wed Oct 09 2024 - 13:03:39 EST
On 10/9/24 09:31, Neeraj Upadhyay wrote:
>> Second, this looks to be allocating a potentially large physically
>> contiguous chunk of memory, then handing it out 4k at a time. The loop is:
>>
>>         buf = alloc(NR_CPUS * PAGE_SIZE);
>>         for (i = 0; i < NR_CPUS; i++)
>>                 foo[i] = buf + i * PAGE_SIZE;
>>
>> but could be:
>>
>>         for (i = 0; i < NR_CPUS; i++)
>>                 foo[i] = alloc(PAGE_SIZE);
>>
>> right?
>
> The single contiguous allocation is done here to limit the TLB impact of
> backing page accesses (e.g. sending an IPI requires writing to the target
> CPU's backing page). I can change it to allocate in 2M chunks instead of
> one big allocation. Is that fine? Also, as described in the commit
> message, reserving an entire 2M chunk for backing pages also prevents the
> NPT entries from being split into individual 4K entries. That split can
> happen if part of a 2M page is not used for backing pages by the guest
> and a page state change (from private to shared) is done for that part.
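If I'm reading this right, the proposed scheme is essentially the
following. This is only a sketch of what's being described; the names
and the memblock-based allocation are made up for illustration, not
taken from the patch:

        /*
         * Carve the per-CPU APIC backing pages out of 2M-sized,
         * 2M-aligned chunks so each chunk can stay mapped by a
         * single 2M NPT entry. (Assumes <linux/memblock.h> and
         * <linux/sizes.h>; apic_backing_page[] is a made-up name.)
         */
        #define PAGES_PER_CHUNK (SZ_2M / PAGE_SIZE)

        static void *apic_backing_page[NR_CPUS];

        static int __init alloc_backing_pages(void)
        {
                void *chunk = NULL;
                int i;

                for (i = 0; i < nr_cpu_ids; i++) {
                        /* Start a new 2M chunk every 512 pages: */
                        if (!(i % PAGES_PER_CHUNK)) {
                                chunk = memblock_alloc(SZ_2M, SZ_2M);
                                if (!chunk)
                                        return -ENOMEM;
                        }
                        apic_backing_page[i] = chunk +
                                (i % PAGES_PER_CHUNK) * PAGE_SIZE;
                }
                return 0;
        }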
Ick.
First, this needs to be thoroughly commented, not in the changelogs.
Second, this is premature optimization at its finest. Just imagine if
_every_ site that needed 16k or 32k of shared memory decided to allocate
a 2M chunk for this _and_ used it sparsely. What's the average number
of vCPUs in a guest? 4? 8?
The absolute minimum that we can do here is some stupid common
infrastructure that you call to allocate shared pages, or pages that
_will_ be converted to shared, so that they get packed together.
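Something like this completely untested sketch. Every name in it is
made up; it's only meant to show the shape of the thing:

        /*
         * Dumb allocator that packs 4k shared-memory allocations into
         * 2M-aligned chunks, so the private->shared conversion happens
         * once per chunk and never splits a 2M NPT entry.
         */
        static void *shared_chunk;
        static unsigned long shared_chunk_used;

        void *snp_alloc_shared_page(void)
        {
                void *page;

                if (!shared_chunk || shared_chunk_used >= SZ_2M) {
                        shared_chunk = memblock_alloc(SZ_2M, SZ_2M);
                        if (!shared_chunk)
                                return NULL;
                        /* Convert the whole chunk to shared once: */
                        set_memory_decrypted((unsigned long)shared_chunk,
                                             SZ_2M / PAGE_SIZE);
                        shared_chunk_used = 0;
                }

                page = shared_chunk + shared_chunk_used;
                shared_chunk_used += PAGE_SIZE;
                return page;
        }

Then every caller that needs a handful of shared pages draws from the
same chunk instead of burning a sparsely-used 2M chunk of its own.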
But hacking uncommented 2M allocations into every site seems like
insanity to me.
IMNSHO, you can either invest the time to put the infrastructure in place
and get 2M pages, or you can live with the suboptimal performance of 4k.