Re: [ISSUE] Read performance regression when using RWF_DONTCACHE from commit 8026e49 "mm/filemap: add read support for RWF_DONTCACHE"

From: Andreas Dilger

Date: Wed Apr 15 2026 - 14:00:46 EST


On Apr 15, 2026, at 05:04, Mingyu He <mingyu.he@xxxxxxxxxx> wrote:
>
> Hi Kiryl,
>
> I will list my physical sector size and filesystem block size at the end
> of this email.
>
> I have two types of disk in my Linux machine: an NVMe SSD and an HDD.
> I tested buffer_size values of 1k, 4k, 16k, 64k, and 128k, both with and
> without a cgroup.
>
> On both types of disk I got the same result: RWF_DONTCACHE performance is
> very low.
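>
> A minimal sketch of the kind of read loop behind these numbers, assuming
> preadv2() with RWF_DONTCACHE (illustrative only; the actual test program
> may differ):
>
> /* gcc -O2 -o dontcache_read dontcache_read.c
>  * usage: ./dontcache_read <file> [bufsize]
>  */
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <stdlib.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <sys/uio.h>
>
> #ifndef RWF_DONTCACHE
> #define RWF_DONTCACHE 0x00000080  /* uapi value at the time of writing */
> #endif
>
> int main(int argc, char **argv)
> {
>     size_t bufsz = argc > 2 ? strtoul(argv[2], NULL, 0) : 4096;
>     char *buf = malloc(bufsz);
>     off_t off = 0;
>     ssize_t ret = -1;
>     int fd;
>
>     if (argc < 2 || !buf) {
>         fprintf(stderr, "usage: %s <file> [bufsize]\n", argv[0]);
>         return 1;
>     }
>     fd = open(argv[1], O_RDONLY);
>     if (fd < 0) {
>         perror("open");
>         return 1;
>     }
>     for (;;) {
>         struct iovec iov = { .iov_base = buf, .iov_len = bufsz };
>
>         /* Ask the kernel to drop the page cache pages after this read. */
>         ret = preadv2(fd, &iov, 1, off, RWF_DONTCACHE);
>         if (ret <= 0)
>             break;
>         off += ret;
>     }
>     if (ret < 0)
>         perror("preadv2");
>     close(fd);
>     free(buf);
>     return ret < 0;
> }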
>
> My strong guess is that this is due to readahead: pages are dropped after
> being read, so the system needs another I/O to fetch the next part of the
> data. However, I have not tested cases where kswapd is working hard (but
> that is not the core of the question).
>
>
> I think this case needs optimization, but I am not sure whether it truly
> does or whether I am simply using it wrong, as I am not a proficient kernel
> developer, so I would like advice from experts like you.
> If this is a case worth optimizing, I would like to do that optimization
> myself (although I suspect many people have already noticed this problem,
> so I am not sure I could finish it before more experienced developers do).
>
>
> RWF_DONTCACHE Performance Comparison (MiB/s)
>
> +-------------+-------------+-----------------------+----------------+
> | Device Type | Buffer Size | RWF_DONTCACHE (MiB/s) | Normal (MiB/s) |
> +-------------+-------------+-----------------------+----------------+
> | HDD         | 4K          |                 119.6 |         2268.1 |
> | HDD         | 16K         |                1568.6 |         3814.7 |
> | HDD         | 64K         |                2351.0 |         4161.8 |
> | HDD         | 128K        |                2951.4 |         4061.0 |
> +-------------+-------------+-----------------------+----------------+
> | NVMe        | 4K          |                 148.7 |         1556.1 |
> | NVMe        | 16K         |                 619.0 |         1601.5 |
> | NVMe        | 64K         |                1139.6 |         1618.6 |
> | NVMe        | 128K        |                1725.4 |         1579.2 |
> +-------------+-------------+-----------------------+----------------+
>
> (NVMe @ 128K is the only case where RWF_DONTCACHE > Normal.)

If the HDD performance is 4GB/s then it is almost certainly a RAID system
with multiple individual spindles. Reading at 4KB or even 128KB is likely
only reading data from 1-2 spindles at a time. The 4KiB read size shows
that a single spindle does about 120 IOPS, while modern HDDs have about
250MB/s of bandwidth, so you need to read about 2MB per I/O *per spindle*
to get peak performance. For an 8+2 RAID that means 16MB reads would be
best.
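
To make that arithmetic explicit, here is a small sketch using the
per-spindle figures assumed above (typical numbers, not measurements from
this system):

/* Rough sizing of a full-stripe read on an 8+2 RAID (assumed figures). */
#include <stdio.h>

int main(void)
{
    const double spindle_bw_mb = 250.0;  /* assumed streaming bandwidth per spindle, MB/s */
    const double spindle_iops  = 120.0;  /* assumed seeks per second per spindle */
    const int    data_spindles = 8;      /* 8+2 RAID -> 8 data spindles */

    /* I/O size each spindle needs per seek to stay bandwidth-bound: ~2MB */
    double per_spindle_io_mb = spindle_bw_mb / spindle_iops;
    /* Read size covering a full stripe of data spindles: ~16MB */
    double stripe_read_mb = per_spindle_io_mb * data_spindles;

    printf("per-spindle I/O: %.1f MB, full-stripe read: %.1f MB\n",
           per_spindle_io_mb, stripe_read_mb);
    return 0;
}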

Cheers, Andreas

> # lsblk -o NAME,FSTYPE,SIZE,FSUSED,FSUSE%,ROTA,MODEL,MOUNTPOINT
>
> NAME    FSTYPE  SIZE FSUSED FSUSE% ROTA MODEL                              MOUNTPOINT
> sda             1.1T                  1 PERC H750 Adp
> ├─sda1            4M                  1
> ├─sda2  vfat    110M   6.1M     6%    1                                    /boot/efi
> ├─sda3  ext4      2G 517.1M    27%    1                                    /boot
> └─sda4  xfs     1.1T  70.4G     6%    1                                    /
> nvme0n1 ext4    1.7T     5G     0%    0 Dell Ent NVMe v2 AGN RI U.2 1.92TB /data
>
>
> # lsblk -o NAME,PHY-SEC,LOG-SEC
> NAME PHY-SEC LOG-SEC
> sda 512 512
> ├─sda1 512 512
> ├─sda2 512 512
> ├─sda3 512 512
> └─sda4 512 512
> nvme0n1 512 512
>
> # dumpe2fs /dev/nvme0n1 | grep "Block size"
> dumpe2fs 1.47.0 (5-Feb-2023)
> Block size: 4096
>
> # xfs_info /
> meta-data=/dev/sda4 isize=512 agcount=566, agsize=516864 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=1, sparse=1, rmapbt=0
> = reflink=1 bigtime=1 inobtcount=1
> data = bsize=4096 blocks=292326651, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0, ftype=1
> log =internal log bsize=4096 blocks=16384, version=2
> = sectsz=512 sunit=0 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
>
>
> On Wed, Apr 15, 2026 at 6:05 PM Kiryl Shutsemau <kirill@xxxxxxxxxxxxx> wrote:
>>
>> On Wed, Apr 15, 2026 at 03:28:27PM +0800, Mingyu He wrote:
>>> The smaller the buffer_size in the test program, the more the
>>> performance dropped. Initially, I used a 4k buffer_size, and the
>>> performance decreased significantly. When the buffer_size was
>>> increased to 128K, the read performance with RWF_DONTCACHE actually
>>> surpassed the non-flagged version by about 10%.
>>
>> Maybe you have a block size larger than 4k? Core-mm will allocate larger
>> folios for the page cache if the filesystem asks it to, and if you then
>> access them with a 4k buffer you get multiple read-discard cycles for the
>> same block with RWF_DONTCACHE. Without RWF_DONTCACHE only the first access
>> to the block leads to I/O; subsequent accesses are served from the page
>> cache.
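>>
>> To check this from userspace, a minimal sketch (st_blksize reports the
>> preferred I/O size, which normally matches the filesystem block size):
>>
>> #include <stdio.h>
>> #include <sys/stat.h>
>>
>> int main(int argc, char **argv)
>> {
>>     struct stat st;
>>
>>     if (argc < 2) {
>>         fprintf(stderr, "usage: %s <file>\n", argv[0]);
>>         return 1;
>>     }
>>     if (stat(argv[1], &st) != 0) {
>>         perror("stat");
>>         return 1;
>>     }
>>     /* Reads smaller than this can hit the same block repeatedly, and
>>      * with RWF_DONTCACHE each hit may trigger fresh I/O. */
>>     printf("preferred I/O size: %ld bytes\n", (long)st.st_blksize);
>>     return 0;
>> }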
>>
>> --
>> Kiryl Shutsemau / Kirill A. Shutemov
>

