Re: [PATCH V10 00/10] famfs: port into fuse
From: Darrick J. Wong
Date: Fri Apr 17 2026 - 12:00:46 EST
On Fri, Apr 17, 2026 at 01:17:11AM -0700, Christoph Hellwig wrote:
> On Thu, Apr 16, 2026 at 10:40:31PM -0700, Darrick J. Wong wrote:
> > > > ...the memory interleaving is a rather interesting quality of famfs.
> > > > There's no good way to express a formulaic meta-mapping in traditional
> > > > iomap parlance, and famfs needs that to interleave across memory
> > > > controllers/dimm boxen/whatever. Throwing individual iomaps at the
> > > > kernel is a very inefficient way to do that. So I don't think there's a
> > > > good reason to get rid of GET_FMAP at this time...
> > >
> > > So could we make the interleaving part generic then? Striped /
> > > interleaved layouts are used elsewhere (eg RAID-0, md-stripe, etc.) -
> > > could we add a generic interleave descriptor to the uapi and use that
> > > for what famfs needs?
> >
> > I doubt it. md-raid presents a unified LBA address space, which means
> > that the filesystem doesn't have to know anything about whatever
> > translations might happen underneath it.
>
> Unless that translation happens in the file system. It does for btrfs
> right now, and it does for pNFS blocklayout. The former is using iomap
> for direct I/O (and has old code and vague plans for using it for
> buffered I/O maybe eventually), the latter does not currently but would
> benefit a lot, although wiring it through the NFS code will be painful.
Not to mention a huge layering violation unless you're doing xraid. ;)
That said, the fuse-iomap patches have been waiting for a review since
October, and I'd really prefer to get the base enablement of iomap
merged before we start asking about things that existing fuse servers
and iomap client filesystems don't do, like in-filesystem raid.
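That said, for concreteness, here's a rough sketch of the kind of
formulaic mapping a shared interleave descriptor could express. This is
purely illustrative, md-raid0-style rotoring of fixed-size chunks across
N backing ranges; neither the struct nor the helper exists in any kernel
uapi today:

```c
#include <stdint.h>

/*
 * Hypothetical interleave descriptor (illustration only, not an
 * existing uapi): a file extent is striped across nr_devs backing
 * devices/dax ranges in chunk_size-byte units, md-raid0 style.
 */
struct interleave_desc {
	uint64_t chunk_size;	/* bytes per stripe unit */
	uint32_t nr_devs;	/* number of backing devices/ranges */
};

struct interleave_target {
	uint32_t dev;		/* which backing device */
	uint64_t dev_offset;	/* byte offset within that device */
};

/* Translate a file offset into a (device, device offset) pair. */
static struct interleave_target
interleave_map(const struct interleave_desc *d, uint64_t file_off)
{
	uint64_t chunk = file_off / d->chunk_size;
	struct interleave_target t = {
		/* chunks rotor across devices in order... */
		.dev = (uint32_t)(chunk % d->nr_devs),
		/* ...and pack densely on each device. */
		.dev_offset = (chunk / d->nr_devs) * d->chunk_size +
			      file_off % d->chunk_size,
	};
	return t;
}
```

The point being that one small descriptor captures an arbitrarily large
mapping, which is exactly why throwing individual iomaps at the kernel
is such an inefficient way to describe an interleaved famfs file.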
> > Most filesystems that implement striping don't restrict themselves
> > to monotonically increasing LBA ranges rotored across each device
> > the way md-raid0 does.
>
> Mappings can be more flexible, but they usually would not fit in a
> single iomap iteration.
>
> > But for whatever reason, pmem/dax don't have remapping layers like
> > md/dm so filesystems have to do that on their own if the hardware
> > doesn't do it for them.
>
> DM actually supports DAX. I don't think that's a very good approach,
> though, as it adds a lot of overhead for little gain when striping.
Aha, it has long been my suspicion that looping through mapping layers
is a real performance pit for memory-based file stores. Thanks for
saying that explicitly.
--D