Re: THP-enabled filesystem vs. FALLOC_FL_PUNCH_HOLE

From: Kirill A. Shutemov
Date: Sat Mar 05 2016 - 19:30:44 EST


On Sun, Mar 06, 2016 at 09:38:11AM +1100, Dave Chinner wrote:
> On Sat, Mar 05, 2016 at 02:24:12AM +0300, Kirill A. Shutemov wrote:
> > On Sat, Mar 05, 2016 at 10:05:48AM +1100, Dave Chinner wrote:
> > > On Fri, Mar 04, 2016 at 11:38:47AM -0800, Hugh Dickins wrote:
> > > > On Fri, 4 Mar 2016, Dave Hansen wrote:
> > > > > On 03/04/2016 03:26 AM, Kirill A. Shutemov wrote:
> > > > > > On Thu, Mar 03, 2016 at 07:51:50PM +0300, Kirill A. Shutemov wrote:
> > > > > >> Truncate and punch hole operations that cover only part of a THP
> > > > > >> range are implemented by zeroing out that part of the THP.
> > > > > >>
> > > > > >> This has a visible effect on fallocate(FALLOC_FL_PUNCH_HOLE)
> > > > > >> behaviour. As we don't really create a hole in this case,
> > > > > >> lseek(SEEK_HOLE) may give inconsistent results depending on which
> > > > > >> pages happened to be allocated. I'm not sure whether this should
> > > > > >> be considered an ABI break.
> > > > > >
> > > > > > Looks like this shouldn't be a problem. man 2 fallocate:
> > > > > >
> > > > > > Within the specified range, partial filesystem blocks are zeroed,
> > > > > > and whole filesystem blocks are removed from the file. After a
> > > > > > successful call, subsequent reads from this range will return
> > > > > > zeroes.
> > > > > >
> > > > > > It means we effectively have a 2M filesystem block size.
> > > > >
> > > > > The question is still whether this will cause problems for apps.
> > > > >
> > > > > Isn't 2MB quite an unusual block size? Wouldn't some files on a
> > > > > tmpfs filesystem act like they have a 2M block size and others like
> > > > > they have a 4k one? Would that confuse apps?
> > > >
> > > > At risk of addressing the tip of an iceberg, before diving down to
> > > > scope out the rest of the iceberg...
> > > ....
> > >
> > > > (Though in the case of my huge tmpfs, it's the reverse: the small hole
> > > > punch splits the hugepage; but it's natural that Kirill's way would try
> > > > to hold on to its compound pages for longer than I do, and that's fine
> > > > so long as it's all consistent.)
> > > ....
> > > > Ah, but suppose someone holepunches out most of each 2M page: they would
> > > > expect the memcg not to be charged for those holes (just as when they
> > > > munmap most of an anonymous THP) - that does suggest splitting is needed.
> > >
> > > I think filesystems will expect splitting to happen. They call
> > > truncate_pagecache_range() on the region that the hole is being
> > > punched out of, and they expect page cache pages over this range to
> > > be unmapped, invalidated and then removed from the mapping tree as a
> > > result. Also, most filesystems think the page cache only contains
> > > PAGE_CACHE_SIZE mappings, so they are completely unaware of the
> > > limitations THP might have when it comes to invalidation.
> > >
> > > IOWs, if this range is not aligned to huge page boundaries, then it
> > > implies the huge page is either split into PAGE_SIZE mappings and
> > > then the range is invalidated as expected, or it is completely
> > > invalidated and then refaulted on future accesses which determine if
> > > THP or normal pages are used for the page being faulted....
> >
> > The filesystem in question is tmpfs and complete invalidation is not
> > always an option.
>
> Then your two options are: splitting the page and rerunning the hole
> punch, or simply zeroing the sections of the THP rather than trying
> to punch out the backing store.

The second option is what's implemented at the moment, since splitting
can fail.
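
This is where the lseek(SEEK_HOLE) inconsistency mentioned above becomes
visible to user space. A minimal sketch of how an application could
observe it (the mount point, file name and sizes are made up for the
example; it assumes a tmpfs mounted with huge=always):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/tmpfs/testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return 1;

	/* Allocate 4M of backing pages: two potential 2M ranges. */
	if (fallocate(fd, 0, 0, 4 << 20))
		return 1;

	/* Punch 4k out of the middle of the first 2M range. */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		  1 << 20, 4096);

	/*
	 * With 4k pages the freed block shows up as a hole at 1M;
	 * if the range was backed by a huge page that was only
	 * zeroed, SEEK_HOLE reports the implicit hole at EOF (4M)
	 * instead.
	 */
	printf("first hole at %lld\n",
	       (long long)lseek(fd, 0, SEEK_HOLE));
	close(fd);
	return 0;
}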

> > For other filesystems it can also be unavailable right away if the
> > page is dirty (the dirty flag is tracked on a per-THP basis at the
> > moment).
>
> Filesystems with persistent storage flush the range being punched
> first to ensure that partial blocks are correctly written before we
> start freeing the backing store. This is needed on XFS to ensure
> hole punch plays nicely with delayed allocation and other extent
> based operations. Hence we know that we have clean pages over the
> hole we are about to punch and so there is no reason the
> invalidation should *ever* fail.

Okay. That means we have another option to consider when enabling THP
for a filesystem with persistent storage.

> tmpfs is a special snowflake when it comes to these fallocate based
> filesystem layout manipulation functions - it does not have
> persistent storage, so you have to do things very differently to
> ensure that data is not lost.
>
> > Would it be acceptable for fallocate(FALLOC_FL_PUNCH_HOLE) to return
> > -EBUSY (or another errno of your choice) if we cannot split the page
> > right away?
>
> Which means THP are not transparent any more. What does an
> application do when it gets an EBUSY, anyway?

I guess it's reasonable to expect an application to handle EOPNOTSUPP,
as FALLOC_FL_PUNCH_HOLE is not supported by some filesystems. Although
an inconsistent result from the same fd can be confusing.
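
For what it's worth, a portable application needs a fallback of this
kind anyway. A sketch of what it could look like (punch_or_zero() is a
helper name invented for the example), degrading to writing zeroes when
the punch is refused, and treating a hypothetical EBUSY the same way:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Punch a hole if the filesystem supports it, zero the range by
 * hand otherwise. */
static int punch_or_zero(int fd, off_t off, off_t len)
{
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      off, len) == 0)
		return 0;
	if (errno != EOPNOTSUPP && errno != EBUSY)
		return -1;

	/* Fallback: the man page only promises that subsequent
	 * reads return zeroes, so write zeroes explicitly. */
	char zeroes[4096];
	memset(zeroes, 0, sizeof(zeroes));
	while (len > 0) {
		size_t chunk = len < (off_t)sizeof(zeroes) ?
			       (size_t)len : sizeof(zeroes);
		ssize_t n = pwrite(fd, zeroes, chunk, off);
		if (n <= 0)
			return -1;
		off += n;
		len -= n;
	}
	return 0;
}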

For now, I would stick with zeroing out the relevant part of the THP on
a partial truncate or hole punch.

We can consider some other options, like deferred split, later.

> And it's not just hole punching that has this problem. Direct IO is
> going to have the same issue with invalidation of the mapped ranges
> over the IO being done. XFS already WARNs when page cache
> invalidation fails with EBUSY in direct IO, because that is
> indicative of an application with a potential data corruption vector
> and there's nothing we can do in the kernel code to prevent it.

My current understanding is that for filesystems with persistent
storage, in order to make THP useful at all, we would need to implement
writeback without splitting the huge page. At the moment, I have no
idea how hard that would be.

But it means we wouldn't be required to split a page in order to
invalidate the page cache, and therefore there would be no risk of
split failure.

> I think the same issues also exist with DAX using huge (and giant)
> pages. Hence it seems like we need to think about these interactions
> carefully, because they will no longer be isolated to tmpfs and
> THP...
>
> > > Just to complicate things, keep in mind that some filesystems may
> > > have a PAGE_SIZE block size, but can be convinced to only
> > > allocate/punch/truncate/etc extents on larger alignments on a
> > > per-inode basis. IOWs, THP vs hole punch behaviour is not actually
> > > a filesystem type specific behaviour - it's per-inode specific...
> >
> > There is also similar question about THP vs. i_size vs. SIGBUS.
> >
> > For small pages, an application will not get SIGBUS on an mmap()ed
> > file unless it tries to access beyond round_up(i_size, PAGE_CACHE_SIZE) - 1.
> >
> > For THP it would be round_up(i_size, HPAGE_PMD_SIZE) - 1.
> >
> > Is it a problem?
>
> No idea. I'm guessing that there may be significant stale data
> exposure issues here as filesystems do not guarantee that blocks
> completely beyond EOF contain zeros.

I don't know about blocks, but I think we can provide this guarantee at
the page cache level, as we do for small pages.
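
To make the difference concrete, a small sketch of the existing
small-page behaviour (file name and sizes are arbitrary): accesses
within the last page backing i_size read as zeroes, and only an access
past round_up(i_size, PAGE_SIZE) raises SIGBUS.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0 || ftruncate(fd, 100))	/* i_size = 100 */
		return 1;

	char *p = mmap(NULL, 8192, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* OK: still below round_up(i_size, PAGE_SIZE) - 1 = 4095;
	 * bytes past EOF within the page read as zero. */
	printf("%d\n", p[4095]);

	/*
	 * p[4096] raises SIGBUS with 4k pages. If the file were
	 * backed by a 2M THP, accesses up to
	 * round_up(i_size, HPAGE_PMD_SIZE) - 1 would read zeroes
	 * instead; that is the behaviour difference in question.
	 */
	munmap(p, 8192);
	close(fd);
	return 0;
}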

--
Kirill A. Shutemov