Re: [PATCH] LDT improvements
From: Ingo Molnar
Date: Fri Dec 08 2017 - 04:46:04 EST
* Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> On Fri, 8 Dec 2017, Ingo Molnar wrote:
> > * Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
> > > I don't love mucking with user address space. I'm also quite nervous about
> > > putting it in or near anything that could pass an access_ok() check, since we're
> > > totally screwed if the bad guys can figure out how to write to it.
> >
> > Hm, robustness of the LDT address wrt. access_ok() is a valid concern.
> >
> > Can we have vmas with high addresses, in the vmalloc space for example?
> > IIRC the GPU code has precedents in that area.
> >
> > Since this is x86-64, the size of the vmalloc() space is not an issue.
> >
> > I like Thomas's solution:
> >
> > - have the LDT in a regular mmap-space vma (hence per-process ASLR-randomized),
> >   but with the system bit set.
> >
> > - That would be an advantage even for non-PTI kernels, because mmap() is probably
> >   more randomized than kmalloc().
>
> Randomization is pointless as long as you can get the LDT address in user
> space, i.e. w/o UMIP.
But with UMIP, unprivileged user-space won't be able to get the linear address of
the LDT. Right now it is written out in /proc/self/maps.
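
As for how the vma itself could be installed, something along the lines of the
vdso's special mappings might work. A rough sketch of what I mean - LDT_MAP_SIZE,
context.ldt_vma_addr and the fault handler are made-up names, just to illustrate
the idea:

/*
 * Sketch only: install the LDT as a per-mm special mapping, similar to
 * how the vdso/vvar areas are mapped.
 */
static const struct vm_special_mapping ldt_spec = {
	.name	= "[ldt]",
	/* .fault handler mapping in the mm's LDT pages omitted here */
};

static long map_ldt_vma(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	unsigned long addr;

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	/* Let the mmap code pick a (per-process randomized) address: */
	addr = get_unmapped_area(NULL, 0, LDT_MAP_SIZE, 0, 0);
	if (IS_ERR_VALUE(addr)) {
		up_write(&mm->mmap_sem);
		return (long)addr;
	}

	/* Read-only for user space, never expanded or dumped: */
	vma = _install_special_mapping(mm, addr, LDT_MAP_SIZE,
				       VM_READ | VM_MAYREAD |
				       VM_DONTEXPAND | VM_DONTDUMP,
				       &ldt_spec);
	if (IS_ERR(vma)) {
		up_write(&mm->mmap_sem);
		return PTR_ERR(vma);
	}

	mm->context.ldt_vma_addr = addr;	/* made-up field */
	up_write(&mm->mmap_sem);
	return 0;
}

(The "[ldt]" name is what would then show up in /proc/self/maps, like "[vdso]" does.)
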
> > - It would also be a cleaner approach all around, and would avoid the fixmap
> >   complications and the scheduler muckery.
>
> The error code of such an access is always 0x03. So I added a special
> handler, which checks whether the address is in the LDT map range and
> verifies that the accessed bit in the descriptor is 0. If that's the case it
> sets it and returns. If not, the thing dies. That works.
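
Just to make sure I am reading that right, the fixup would be something along these
lines? (Rough sketch only - ldt_map_contains() and ldt_desc_at() are made-up helpers:)

/*
 * On a #PF with error code 0x03 that hits the LDT mapping, set the
 * accessed bit of the faulting descriptor and return; in every other
 * case let the task die.
 */
static bool fixup_ldt_accessed_fault(unsigned long error_code, unsigned long address)
{
	struct desc_struct *desc;

	/* User-mode write fault, i.e. error code 0x03: */
	if (error_code != 0x03)
		return false;

	/* Is the faulting address inside this mm's LDT mapping? */
	if (!ldt_map_contains(current->mm, address))
		return false;

	desc = ldt_desc_at(current->mm, address);

	/* Only a clear accessed bit is a legitimate reason for this fault: */
	if (desc->type & 0x1)
		return false;			/* already set -> die */

	desc->type |= 0x1;			/* set the accessed bit */
	return true;				/* fixed up, just return */
}
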
Are SMP races possible? For example, two threads could both trigger the accessed-bit
fault, but only one of them would succeed in setting the bit. The other thread should
not die in this case, right?
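
I.e. to be robust the accessed bit should probably be set atomically, with an
already-set bit treated as a successful fixup - something like this (again just a
sketch):

/*
 * Race-tolerant variant: the accessed bit is bit 40 of the 8-byte
 * descriptor.  Set it atomically - a thread that loses the race simply
 * finds the bit already set, which is still a successful fixup.
 */
static bool ldt_set_accessed_bit(struct desc_struct *desc)
{
	if (test_and_set_bit(40, (unsigned long *)desc))
		pr_debug("LDT: accessed bit already set, lost the race\n");

	return true;	/* never die here, the fault is fixed up either way */
}

That way whichever thread loses the race just returns and retries the original
access.
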
Thanks,
Ingo