On Tue, Jul 09, 2024 at 09:28:48AM GMT, Ryan Roberts wrote:
On 07/07/2024 17:39, Daniel Gomez wrote:
On Fri, Jul 05, 2024 at 10:59:02AM GMT, David Hildenbrand wrote:
On 05.07.24 10:45, Ryan Roberts wrote:
On 05/07/2024 06:47, Baolin Wang wrote:
On 2024/7/5 03:49, Matthew Wilcox wrote:
On Thu, Jul 04, 2024 at 09:19:10PM +0200, David Hildenbrand wrote:
On 04.07.24 21:03, David Hildenbrand wrote:
shmem has two uses:
- MAP_ANONYMOUS | MAP_SHARED (this patch set)
- tmpfs
For the second use case we don't want controls *at all*; we want the
same heuristics used for all other filesystems to apply to tmpfs.
As discussed in the MM meeting, Hugh had a different opinion on that.
FWIW, I just recalled that I wrote a quick summary:
https://lkml.kernel.org/r/f1783ff0-65bd-4b2b-8952-52b6822a0835@xxxxxxxxxx
I believe the meetings are recorded as well, but never looked at recordings.
That's not what I understood Hugh to mean. To me, it seemed that Hugh
was expressing an opinion on using shmem as shmem, not on using it as
tmpfs.
If I misunderstood Hugh, well, I still disagree. We should not have
separate controls for this. tmpfs is just not that special.
I wasn't at the meeting that's being referred to, but I thought we previously
agreed that tmpfs *is* special because in some configurations it's not backed by
swap and so is locked in RAM?
There are multiple things to that, like:
* Machines only having limited/no swap configured
* tmpfs can be configured to never go to swap
* memfd/tmpfs files getting used purely for mmap(): there is no real
difference to MAP_ANON|MAP_SHARED besides the processes we share that
memory with.
Especially when it comes to memory-waste concerns and access behavior,
tmpfs in some cases behaves much more like anonymous memory. But there are
certainly other use cases where tmpfs is not that special.
Having controls to select the allowable folio order allocations for
tmpfs does not address any of these issues. The suggested filesystem
approach [1] involves allocating orders in larger chunks, but never
more in total than you would allocate when using order-0 folios.
Well you can't know that you will never allocate more. If you allocate a 2M
In the fs large folio approach implementation [1], the allocation of a 2M
folio (or any non-order-0 folio) occurs only when the size of the
write/fallocate is 2M (and the index is aligned).