Re: [PATCH] mm/mincore: allow for making sys_mincore() privileged
From: Andy Lutomirski
Date: Thu Jan 10 2019 - 00:28:26 EST
On Wed, Jan 9, 2019 at 5:18 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, Jan 9, 2019 at 4:44 PM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >
> > I wouldn't look at ext4 as an example of a reliable, problem free
> > direct IO implementation because, historically speaking, it's been a
> > series of nasty hacks (*cough* mount -o dioread_nolock *cough*) and
> > been far worse than XFS from data integrity, performance and
> > reliability perspectives.
>
> That's some big words from somebody who just admitted to much worse hacks.
>
> Seriously. XFS is buggy in this regard, ext4 apparently isn't.
>
> Thinking that it's better to just invalidate the cache for direct IO
> reads is all kinds of odd.
>
This whole discussion seems to have gone a little bit off the rails...
Linus, I agree with Dave's overall sentiment, though, and I think you
should consider reverting your patch. Here's why. The
basic idea behind the attack is that the authors found efficient ways
to do two things: evict a page from page cache and detect, *without
filling the cache*, whether a page is cached. The combination lets an
attacker efficiently tell when another process reads a page. Keep in
mind that this is a sophisticated attack; anyone mounting it won't
have any trouble using a nontrivial method to detect whether a page
is in the page cache.
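For concreteness, the detection half of that pair is only a few lines
of userspace today. A minimal sketch of the mincore() probe (page size
hardcoded to 4 KiB for brevity; real code would use
sysconf(_SC_PAGESIZE)):

#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Probe whether the first page of 'path' is resident in the page
 * cache.  mmap() alone does not fault the page in, so the probe
 * does not perturb the state it is measuring. */
static int page_is_cached(const char *path)
{
	unsigned char vec;
	int ret = -1;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);
	if (p == MAP_FAILED)
		return -1;
	if (mincore(p, 4096, &vec) == 0)
		ret = vec & 1;
	munmap(p, 4096);
	return ret;
}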
So, unless we're going to try for real to make it hard to tell whether
a page is cached without causing that page to become cached, it's not
worth playing whack-a-mole. And, right now, the mincore() change is
whacking exactly one mole: RWF_NOWAIT appears to leak essentially the
same information at very little cost. I haven't dug in deeply, but I
assume that various prefaulting tricks combined with various
page-table probing tricks can do similar things, but those are at
least a *lot* more complicated.
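To make the comparison concrete, here's a minimal sketch of the
RWF_NOWAIT probe (assumes preadv2(), glibc >= 2.26 and a 4.14+
kernel):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Probe page-cache residency without mmap(): a buffered read issued
 * with RWF_NOWAIT returns EAGAIN instead of blocking on I/O, so
 * EAGAIN means "not cached".  Unlike mincore(), a successful probe
 * does read the byte, but it only succeeds when the page was
 * already resident. */
static int page_is_cached_nowait(int fd, off_t off)
{
	char c;
	struct iovec iov = { .iov_base = &c, .iov_len = 1 };
	ssize_t n = preadv2(fd, &iov, 1, off, RWF_NOWAIT);

	if (n >= 0)
		return 1;
	return errno == EAGAIN ? 0 : -1;
}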
So unless we're going to lock down RWF_NOWAIT as well, I see no reason
to lock down mincore(). Direct IO is a red herring here -- an
O_DIRECT read is destructive enough to the page cache that it seems
likely to make the attack a lot less efficient.
--- begin digression ---
Since direct IO has been brought up, I have a question. I've wondered
for years why direct IO works the way it does. If I were implementing
it from scratch, my first inclination would be to use the page cache
instead of fighting it. To do a single-page direct read, I would look
that page up in the page cache (i.e. i_pages these days). If the page
is there, I would do a normal buffered read. If the page is not
there, I would insert a record into i_pages indicating that direct IO
is in progress and then I would do the IO into the destination page.
Any other read, direct or otherwise, that saw a record saying "under
direct IO" would wait for it.
To do a single-page direct write, I would look it up in i_pages. If
it's there, I would do a buffered write followed by a sync (because
applications expect a sync). If it's not there, I would again add a
record saying "under direct IO" and do the IO.
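To make both cases concrete, here's a toy userspace model of the
protocol I have in mind -- emphatically a sketch, not kernel code:
the single slot stands in for an i_pages entry, and do_buffered_io()
and do_direct_io() are hypothetical stand-ins for the real IO paths.

#include <pthread.h>
#include <stdio.h>

/* EMPTY models a hole in i_pages, CACHED a normal page-cache page,
 * and UNDER_DIO the proposed "direct IO in progress" record. */
enum slot_state { EMPTY, CACHED, UNDER_DIO };

static enum slot_state slot = EMPTY;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;

static void do_buffered_io(const char *op) { printf("buffered %s\n", op); }
static void do_direct_io(const char *op)  { printf("direct %s\n", op); }

static void dio_rw_page(const char *op)
{
	pthread_mutex_lock(&lock);
	/* Anyone who finds the page under direct IO waits for it. */
	while (slot == UNDER_DIO)
		pthread_cond_wait(&done, &lock);
	if (slot == CACHED) {
		/* Resident page: fall back to buffered IO (a write
		 * would be followed by a sync). */
		pthread_mutex_unlock(&lock);
		do_buffered_io(op);
		return;
	}
	/* Not resident: publish the "under direct IO" record, drop
	 * the lock, and do the IO straight to/from user pages. */
	slot = UNDER_DIO;
	pthread_mutex_unlock(&lock);

	do_direct_io(op);

	pthread_mutex_lock(&lock);
	slot = EMPTY;
	pthread_cond_broadcast(&done);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	dio_rw_page("read");
	dio_rw_page("write");
	return 0;
}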
The idea is that this works as much like buffered IO as possible,
except that the pages backing the IO aren't normal sharable page cache
pages.
The multi-page case would be just an optimization on top of the
single-page case. The idea would be to somehow mark entire extents in
i_pages as being under direct IO. It's a radix tree -- this can, at
least in theory, be done efficiently. As long as all direct IO
operations tag their extents in increasing order of offset, there
shouldn't be lock-ordering problems.
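The ordering claim is easy to illustrate with a toy (the per-extent
lock array is hypothetical; real code would be locking radix-tree
slots or extent records):

#include <pthread.h>

#define NR_EXTENT_LOCKS 8

static pthread_mutex_t extent_lock[NR_EXTENT_LOCKS];

/* Take every per-extent lock covering [first, last] in ascending
 * order.  With a single global order, the ABBA deadlock pattern
 * between two overlapping operations is impossible. */
static void lock_extents(int first, int last)
{
	for (int i = first; i <= last; i++)
		pthread_mutex_lock(&extent_lock[i]);
}

static void unlock_extents(int first, int last)
{
	for (int i = last; i >= first; i--)
		pthread_mutex_unlock(&extent_lock[i]);
}

int main(void)
{
	for (int i = 0; i < NR_EXTENT_LOCKS; i++)
		pthread_mutex_init(&extent_lock[i], NULL);

	lock_extents(2, 5);	/* e.g. a direct write spanning extents 2-5 */
	unlock_extents(2, 5);
	return 0;
}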
Other than history and possibly performance, is there any reason that
direct IO doesn't work this way?
P.S. What, if anything, prevents direct writes from causing trouble
when the underlying FS or backing store needs stable pages?
Similarly, what, if anything, prevents direct reads from temporarily
exposing unintended data to user code if the fs or underlying device
transforms the data during the read process (e.g. by decrypting
something)?
--- end digression ---