Re: Simplify load_unaligned_zeropad() (was Re: [GIT PULL] Ceph updates for 5.20-rc1)
From: Kirill A. Shutemov
Date: Mon Aug 15 2022 - 00:09:41 EST
On Sun, Aug 14, 2022 at 08:43:09PM -0700, Linus Torvalds wrote:
> On Sun, Aug 14, 2022 at 3:59 PM Linus Torvalds
> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > If TDX has problems with it, then TDX needs to be fixed. And it's
> > simple enough - just make sure you have a guard page between any
> > kernel RAM mapping and whatever odd crazy page.
>
> .. thinking about this more, I thought we had already done that in the
> memory initialization code - ie make sure that we always leave a gap
> between any page we mark and any IO memory after it.
ioremap()ed memory should not be a problem as it is not RAM from the
kernel's PoV and it is separated from memory allocated by the buddy
allocator.
But a DMA buffer can be allocated from the general pool of memory. In TDX
we share such memory with the host for I/O too. It does not cause problems
as long as the direct mapping is adjusted to map it as shared. The #VE
handler is already aware of load_unaligned_zeropad().
So far, so good.
But if somebody tried to be clever -- allocating memory and vmap()ing it as
shared (with proper VMM notification), but leaving the direct mapping
intact -- we would have a problem. load_unaligned_zeropad() can step onto
the private mapping of the shared memory in the direct mapping and crash
the whole TD guest.
The worst part is that for somebody who is not aware of
load_unaligned_zeropad(), the vmap() trick is a totally reasonable
approach: it helps avoid direct mapping fragmentation. We considered the
trick for one of the TDX-specific drivers.
--
Kiryl Shutsemau / Kirill A. Shutemov