Re: [RFC V1 5/5] x86: CVMs: Ensure that memory conversions happen at 2M alignment

From: Jeremi Piotrowski
Date: Thu Feb 01 2024 - 07:02:54 EST


On 01/02/2024 04:46, Vishal Annapurve wrote:
> On Wed, Jan 31, 2024 at 10:03 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
>>
>> On 1/11/24 21:52, Vishal Annapurve wrote:
>>> @@ -2133,8 +2133,10 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
>>> int ret;
>>>
>>> /* Should not be working on unaligned addresses */
>>> - if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
>>> - addr &= PAGE_MASK;
>>> + if (WARN_ONCE(addr & ~HPAGE_MASK, "misaligned address: %#lx\n", addr)
>>> + || WARN_ONCE((numpages << PAGE_SHIFT) & ~HPAGE_MASK,
>>> + "misaligned numpages: %#lx\n", numpages))
>>> + return -EINVAL;
>>
>> This series is talking about swiotlb and DMA, then this applies a
>> restriction to what I *thought* was a much more generic function:
>> __set_memory_enc_pgtable(). What prevents this function from getting
>> used on 4k mappings?
>>
>>
>
> The end goal here is to limit the conversion granularity to hugepage
> sizes. SWIOTLB allocations are the major source of unaligned
> allocations (and so the conversions) that need to be fixed before
> achieving this goal.
>
> This change will ensure that conversion fails for unaligned ranges, as
> I don't foresee the need for 4K aligned conversions apart from DMA
> allocations.

Hi Vishal,

This assumption is wrong. set_memory_decrypted() is called from various
parts of the kernel: kexec, sev-guest, kvmclock, and Hyper-V code. These
conversions are for non-DMA allocations and need to happen at 4KB
granularity because the data structures in question are page-sized.
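To illustrate the pattern, here is a minimal sketch (not copied from any
specific call site; the helper name is made up) of what those callers do:
allocate a single page-sized structure to share with the host and convert
just that one page. With the proposed check, numpages == 1 means
(1 << PAGE_SHIFT) & ~HPAGE_MASK is non-zero, so the conversion would now
WARN and fail with -EINVAL:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Hypothetical helper, for illustration only. */
static void *alloc_shared_page(void)
{
	struct page *page;
	void *vaddr;

	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page)
		return NULL;

	vaddr = page_address(page);

	/* Convert exactly one 4KB page to shared/decrypted. */
	if (set_memory_decrypted((unsigned long)vaddr, 1)) {
		__free_page(page);
		return NULL;
	}

	return vaddr;
}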

Thanks,
Jeremi