Re: [RFC PATCH 00/11] pcache: Persistent Memory Cache for Block Devices
From: Dan Williams
Date: Tue Apr 15 2025 - 14:01:04 EST
Dongsheng Yang wrote:
> Hi All,
>
> This patchset introduces a new Linux block layer module called
> **pcache**, which uses persistent memory (pmem) as a cache for block
> devices.
>
> Originally, this functionality was implemented as `cbd_cache` within
> CBD (CXL Block Device). However, after further consideration it became
> clear that the cache design is not limited to CBD's pmem device or
> infrastructure; it is broadly applicable to **any** persistent memory
> device that supports DAX. I have therefore split pcache out of CBD and
> refactored it into a standalone module.
>
> Although Intel's Optane product line has been discontinued, the Storage
> Class Memory (SCM) field continues to evolve. For instance, Numemory
> recently launched its Optane successor, the NM101 SCM:
> https://www.techpowerup.com/332914/numemory-releases-optane-successor-nm101-storage-class-memory
>
> ### About pcache
>
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Feature                       | pcache                       | bcache                       | dm-writecache                |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | pmem access method            | DAX                          | bio                          | DAX                          |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Write Latency (4K randwrite)  | ~7us                         | ~20us                        | ~7us                         |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Concurrency                   | Multi-tree per backend,      | Shared global index tree     | Single index tree and        |
> |                               | exploiting pmem parallelism  |                              | global wc_lock               |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | IOPS (4K randwrite 32 numjobs)| 2107K                        | 352K                         | 283K                         |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Read Cache Support            | YES                          | YES                          | NO                           |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Deployment Flexibility        | No reformat needed           | Requires formatting backend  | Depends on dm framework,     |
> |                               |                              | devices                      | less intuitive to deploy     |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Writeback Model               | log-structured writeback;    | no guarantee that flush      | no writeback ordering        |
> |                               | preserves crash consistency  | order matches application    | guarantee                    |
> |                               | of the backing device;       | IO order; may lose ordering  |                              |
> |                               | important for checkpoints    | on the backing device        |                              |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
> | Data Integrity                | CRC on both metadata and     | CRC on metadata only         | No CRC                       |
> |                               | data (data CRC optional)     |                              |                              |
> +-------------------------------+------------------------------+------------------------------+------------------------------+
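>
> To illustrate the access-method row above: a DAX cache copies data
> through a kernel virtual address rather than building a bio. Below is
> a minimal sketch, assuming the post-5.19 dax_direct_access()
> interface; the function and names are illustrative only, not pcache's
> actual code:
>
>     #include <linux/dax.h>
>     #include <linux/string.h>   /* memcpy_flushcache() */
>
>     /* Copy one page of cache data straight into pmem via DAX. */
>     static int cache_write_page(struct dax_device *dax_dev,
>                                 pgoff_t pgoff, const void *src)
>     {
>             void *kaddr;
>             long ret;
>
>             /* Map one pmem page into the kernel address space. */
>             ret = dax_direct_access(dax_dev, pgoff, 1, DAX_ACCESS,
>                                     &kaddr, NULL);
>             if (ret < 0)
>                     return ret;
>
>             /* Non-temporal copy; nothing is left dirty in CPU cache. */
>             memcpy_flushcache(kaddr, src, PAGE_SIZE);
>
>             /* Order the data flush before later metadata updates. */
>             pmem_wmb();
>             return 0;
>     }
>
> The bio-based path in bcache, by contrast, goes through the block
> layer's submission and completion machinery, which accounts for much
> of the ~7us vs ~20us gap shown above.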
Thanks for making the comparison chart. The immediate question this
raises is: why not add "multi-tree per backend", "log-structured
writeback", "read cache", and "CRC" support to dm-writecache?
device-mapper is everywhere, has a long track record, and enhancing it
immediately engages the community of folks already working in this space.
Reviewers could then spend their time purely on the enhancements rather
than on reviewing a new block device-management stacking ABI.
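
For context, activating dm-writecache today is a single table load
(device names below are illustrative):

    # cache writes to /dev/sdb on /dev/pmem0; 'p' selects pmem mode,
    # 4096 is the block size, trailing 0 means no optional arguments
    dmsetup create wc --table "0 $(blockdev --getsz /dev/sdb) \
        writecache p /dev/sdb /dev/pmem0 4096 0"

so enhancements of the kind listed above would land in a tool chain
that administrators already run.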