[PATCH v2 00/79] SSDFS: noGC ZNS/FDP-friendly LFS file system
From: Viacheslav Dubeyko
Date: Sun Mar 15 2026 - 22:19:34 EST
Hello,
Complete patchset is available here:
https://github.com/dubeyko/ssdfs-driver/tree/master/patchset/linux-kernel-6.18.0
[PROBLEM DECLARATION]
An SSD is a sophisticated device capable of managing in-place
updates. However, in-place updates generate significant FTL GC
responsibilities that increase the write amplification factor, require
substantial NAND flash overprovisioning, decrease SSD lifetime,
and introduce performance spikes. The Log-structured File System (LFS)
approach can introduce a more flash-friendly Copy-On-Write (COW) model.
However, F2FS and NILFS2 issue in-place updates anyway, even while
using the COW policy for the main volume area. Also, GC is an inevitable
subsystem of any LFS file system, and it introduces write amplification,
retention issues, excessive copy operations, and performance degradation
on aged volumes. Generally speaking, available file system technologies
have side effects: (1) write amplification, (2) significant FTL GC
responsibilities, (3) inevitable FS GC overhead, (4) read disturbance,
(5) retention issues. As a result, reduced SSD lifetime, performance
degradation, early SSD failure, and increased TCO are the reality of
data infrastructure.
[WHY YET ANOTHER FS?]
QLC NAND flash imposes really tough requirements on file systems
to be genuinely flash-friendly. ZNS SSD and FDP SSD technologies try
to help manage these strict requirements. But, anyway, file
systems need to play properly to exploit the benefits of ZNS/FDP SSDs.
Ideally, a file system needs to use only a Copy-On-Write (COW) or
append-only policy, to be a Log-structured (LFS) file system,
and to be capable of working without a GC subsystem. However, the F2FS
and NILFS2 file systems heavily rely on a GC subsystem that
inevitably increases write amplification, and they do not apply
the COW policy to the whole volume.
Generally speaking, it would be good to see an LFS file system
architecture that can:
(1) eliminate FS GC overhead,
(2) decrease/eliminate FTL GC responsibilities,
(3) decrease write amplification factor,
(4) introduce native architectural support of ZNS SSD + SMR HDD,
(5) increase compression ratio by using delta-encoding and deduplication,
(6) introduce smart management of "cold" data and efficient TRIM policy,
(7) employ parallelism of multiple NAND dies/channels,
(8) prolong SSD lifetime and decrease TCO cost,
(9) guarantee strong reliability and the ability to reconstruct
a heavily corrupted file system volume,
(10) guarantee stable performance.
SSDFS is an open-source, kernel-space LFS file system designed to:
(1) eliminate GC overhead, (2) prolong SSD lifetime, (3) natively support
a strict append-only mode (ZNS SSD + SMR HDD compatible), (4) guarantee
strong reliability, (5) guarantee stable performance.
[SSDFS ARCHITECTURE]
One of the key goals of SSDFS is to decrease the write amplification
factor. The logical extent concept is the fundamental technique to achieve
this goal. A logical extent describes any volume extent on the basis of
{segment ID, logical block ID, length}. A segment is a portion of the
file system volume that is aligned on the erase block size and
always located at the same offset. It is the basic unit for allocating
and managing the free space of the file system volume. Every segment can
include one or several Logical Erase Blocks (LEBs). A LEB can be mapped
to a "Physical" Erase Block (PEB). Generally speaking, a PEB is a
fixed-sized container that includes a number of logical blocks (physical
sectors or NAND flash pages). SSDFS is a pure Log-structured File System
(LFS). This means that any write operation into an erase block creates
a log, and the content of every erase block is a sequence of logs. A PEB
has a block bitmap that tracks the state (free, pre-allocated,
allocated, invalid) of logical blocks and accounts for the physical space
used for storing the log's metadata (segment header, partial log header,
footer). Also, every log contains an offset translation table that converts
a logical block ID into a particular offset inside the log's payload.
The log concept implements support for compression, delta-encoding,
and a compaction scheme. As a result, it provides a way to: (1) decrease
write amplification, (2) decrease FTL GC responsibilities, (3) improve
the compression ratio and decrease payload size. Finally, SSD lifetime
can be prolonged and write I/O performance can be improved.
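The addressing model above can be sketched in plain C. This is a minimal
illustration with hypothetical names (struct logical_extent,
blk2off_lookup, and the table layout are inventions for this sketch);
the real driver structures in fs/ssdfs/offset_translation_table.c differ:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: a logical extent addresses data by position
 * inside a segment, not by a physical device offset, so data can
 * migrate between PEBs without metadata updates. */
struct logical_extent {
	uint64_t seg_id;      /* segment ID */
	uint32_t logical_blk; /* logical block ID inside the segment */
	uint32_t len;         /* length in logical blocks */
};

/* Toy offset translation table: one entry per logical block,
 * mapping a logical block ID to a byte offset in the log's payload. */
struct blk2off_entry {
	uint32_t logical_blk;
	uint32_t payload_off;
};

/* Resolve the payload offset of a logical block; -1 if absent. */
static int32_t blk2off_lookup(const struct blk2off_entry *tbl, size_t count,
			      uint32_t logical_blk)
{
	for (size_t i = 0; i < count; i++) {
		if (tbl[i].logical_blk == logical_blk)
			return (int32_t)tbl[i].payload_off;
	}
	return -1;
}
```

The point of the indirection is that a stored logical extent stays valid
even when the log is rewritten: only the per-log translation table changes.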
The SSDFS file system is based on the concept of a logical segment that
is an aggregation of Logical Erase Blocks (LEBs). Moreover, initially
a LEB has no association with a particular "Physical" Erase Block (PEB).
This means that a segment could have associations for only some of its
LEBs or even no association with any PEB at all (for example, in the
case of a clean segment). Generally speaking, the SSDFS file system needs
a special metadata structure (the PEB mapping table) that is capable of
associating any LEB with any PEB. The PEB mapping table is a crucial
metadata structure that has several goals: (1) mapping LEBs to PEBs,
(2) implementation of the logical extent concept, (3) implementation of
the concept of PEB migration, (4) implementation of the delayed erase
operation by a specialized thread.
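The LEB-to-PEB association can be sketched as a simple lookup. The names
here (struct leb2peb_entry, PEB_UNMAPPED, leb2peb_resolve) are
hypothetical; the actual mapping table in fs/ssdfs/peb_mapping_table.c
is a far richer on-disk structure:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PEB_UNMAPPED UINT64_MAX

/* Hypothetical sketch of the LEB -> PEB association: any LEB can be
 * mapped to any PEB, and a clean segment may have no mappings at all. */
struct leb2peb_entry {
	uint64_t leb_id;
	uint64_t peb_id;
};

/* Resolve a LEB to its PEB; PEB_UNMAPPED means no association yet. */
static uint64_t leb2peb_resolve(const struct leb2peb_entry *map, size_t count,
				uint64_t leb_id)
{
	for (size_t i = 0; i < count; i++) {
		if (map[i].leb_id == leb_id)
			return map[i].peb_id;
	}
	return PEB_UNMAPPED;
}
```

An unmapped LEB is not an error: it is the normal state of a clean
segment that has not received data yet.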
SSDFS implements a migration scheme. The migration scheme is a fundamental
technique of GC overhead management. Its key responsibility is to
guarantee the presence of data in the same segment for any update
operations. Generally speaking, the migration scheme's model is
implemented by associating an exhausted "Physical" Erase Block (PEB)
with a clean one. The goal of associating the two PEBs is to implement
the gradual migration of data by means of the update operations in the
initial (exhausted) PEB. As a result, the old, exhausted PEB becomes
invalidated after complete data migration, and the erase operation can
then be applied to convert it to the clean state. The migration scheme
is capable of decreasing GC activity significantly by excluding the
necessity to update metadata and by self-migration of data between PEBs
that is triggered by regular update operations. Finally, the migration
scheme can: (1) eliminate GC overhead, (2) implement an efficient TRIM
policy, (3) prolong SSD lifetime, (4) guarantee stable performance.
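The essence of the migration pair can be shown in a few lines. This is
a deliberately simplified sketch (struct migration_pair and both helpers
are hypothetical, not driver API); the real logic lives in
fs/ssdfs/peb_migration_scheme.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the migration pair: an exhausted source PEB
 * associated with a clean destination PEB. Regular updates write new
 * data into the destination and invalidate blocks in the source, so
 * migration happens as a side effect of normal I/O, not of a GC pass. */
struct migration_pair {
	uint32_t src_valid_blks; /* valid blocks left in the exhausted PEB */
	uint32_t dst_used_blks;  /* blocks written into the clean PEB */
};

/* Updating one logical block migrates it as a side effect. */
static void update_block(struct migration_pair *pair)
{
	if (pair->src_valid_blks > 0) {
		pair->src_valid_blks--;
		pair->dst_used_blks++;
	}
}

/* The source PEB can be erased once it is fully invalidated. */
static bool src_erasable(const struct migration_pair *pair)
{
	return pair->src_valid_blks == 0;
}
```

Because the destination PEB inherits the source's LEB, stored logical
extents remain valid and no metadata update is needed after migration.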
Generally speaking, SSDFS doesn't need the classical model of garbage
collection that is used in NILFS2 or F2FS. However, SSDFS has several
global GC threads (for the dirty, pre-dirty, used, and using segment
states) and a segment bitmap. The main responsibilities of the global
GC threads are: (1) find a segment in a particular state, (2) check that
the segment object is constructed and initialized by the file system
driver logic, (3) check the necessity to stimulate or finish the
migration (if the segment is under update operations or has had update
operations recently, then migration stimulation is not necessary),
(4) define the valid blocks that require migration, (5) add a recommended
migration request to the PEB update queue, (6) destroy the in-core
segment object if no migration is necessary and no create/update requests
have been received by the segment object recently. Global GC threads are
used to recommend migration stimulation for particular PEBs and
to destroy in-core segment objects that have no requests for
processing. The segment bitmap is a critical metadata structure of the
SSDFS file system that serves several goals: (1) searching for
a candidate for a current segment capable of storing new data,
(2) searching by the GC subsystem for the most suitable segment (in the
dirty state, for example) with the goal of preparing it in the
background for storing new data (converting it to the clean state).
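The per-segment decision made by a global GC thread can be condensed into
one function. This is a hypothetical distillation (the enum values and
gc_decide are not driver identifiers) of the responsibilities listed above:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of a global GC thread's decision per segment. */
enum seg_state { SEG_CLEAN, SEG_USING, SEG_USED, SEG_PRE_DIRTY, SEG_DIRTY };
enum gc_action { GC_SKIP, GC_STIMULATE_MIGRATION, GC_DESTROY_OBJECT };

/* If the segment has seen recent create/update requests, regular I/O
 * already drives migration, so the thread skips it; otherwise it either
 * recommends stimulation (valid blocks remain to migrate) or destroys
 * the idle in-core segment object. */
static enum gc_action gc_decide(enum seg_state state, bool recently_active,
				unsigned int valid_blks)
{
	if (recently_active)
		return GC_SKIP;
	if ((state == SEG_USED || state == SEG_PRE_DIRTY) && valid_blks > 0)
		return GC_STIMULATE_MIGRATION;
	return GC_DESTROY_OBJECT;
}
```

The point is that these threads only recommend and clean up; actual data
movement is still carried by the migration scheme's update path.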
The SSDFS file system uses a b-tree architecture for metadata
representation (for example, the inodes tree, extents tree, dentries
tree, and xattr tree) because it provides a compact way of reserving
metadata space without the excessive overprovisioning of a plain table
or array. SSDFS uses a hybrid b-tree architecture with the goal of
eliminating the index nodes' side effect. The hybrid b-tree operates
with three node types: (1) index node, (2) hybrid node, (3) leaf node.
Generally speaking, the peculiarity of the hybrid node is the mixture
of both index and data records in one node. A hybrid b-tree starts with
a root node that is capable of keeping two index records or two data
records inline (if the size of a data record is equal to or less than
the size of an index record). If the b-tree needs to contain more than
two items, then the first hybrid node is added to the b-tree.
The root level of the b-tree can contain only two nodes because
the root node can store only two index records. Generally speaking,
the initial goal of the hybrid node is to store data records in
the presence of a reserved index area. The b-tree implements a compact
and flexible metadata structure that can decrease payload size and
isolate hot, warm, and cold metadata types in different erase blocks.
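The node taxonomy can be sketched as follows. All names here
(struct btree_node, ROOT_INDEX_CAPACITY, root_can_add_child) are
hypothetical simplifications of the structures in fs/ssdfs/btree_node.c:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the three hybrid b-tree node types. A hybrid
 * node keeps an index area and a data (records) area in the same node,
 * avoiding the overhead of pure index nodes in small trees. */
enum node_type { NODE_INDEX, NODE_HYBRID, NODE_LEAF };

struct btree_node {
	enum node_type type;
	uint16_t index_count; /* index records (index/hybrid nodes) */
	uint16_t data_count;  /* data records (hybrid/leaf nodes) */
};

/* The root can hold only two index records, so the root level
 * references at most two child nodes. */
#define ROOT_INDEX_CAPACITY 2

static int root_can_add_child(const struct btree_node *root)
{
	return root->type == NODE_INDEX &&
	       root->index_count < ROOT_INDEX_CAPACITY;
}
```

Growth beyond two inline items therefore always starts by adding a hybrid
node under the root, not a pure index node.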
The migration scheme is completely sufficient in the case of conventional
SSDs, for both metadata and user data. But a ZNS SSD has a huge zone size
and a limited number of active/open zones. As a result, a moving scheme
has to be introduced for user data in the ZNS SSD case. Finally, the
migration scheme works for metadata and the moving scheme works for user
data (ZNS SSD case). Initially, user data is stored into the current user
data segment/zone, and it can be updated in the same zone until
exhaustion. Next, the moving scheme starts to work. Updated user data is
moved into the current user data zone for updates. As a result, the
extents tree has to be updated, and the invalidated extents of the old
zone are stored into the invalidated extents tree. The invalidated
extents tree tracks the moment when the old zone is completely
invalidated and ready to be erased.
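The invalidation accounting behind the moving scheme can be sketched like
this. The names (struct zone_state, invalidate_extent, zone_erasable) are
hypothetical; the real tracking is done by the invalidated extents b-tree
in fs/ssdfs/invalidated_extents_tree.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of invalidated-extent accounting for a zone:
 * the moving scheme redirects updates to the current zone and records
 * the invalidated extents of the old zone until it is fully invalid. */
struct zone_state {
	uint64_t capacity_blks;    /* blocks the zone can hold */
	uint64_t invalidated_blks; /* blocks invalidated by moves */
};

/* Record an invalidated extent of the given length in the old zone. */
static void invalidate_extent(struct zone_state *zone, uint64_t len)
{
	zone->invalidated_blks += len;
}

/* The old zone may be reset/erased once it is fully invalidated. */
static bool zone_erasable(const struct zone_state *zone)
{
	return zone->invalidated_blks >= zone->capacity_blks;
}
```

This is what lets SSDFS reclaim whole zones without a GC pass: erase
readiness falls out of bookkeeping driven by regular updates.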
[BENCHMARKING]
Benchmarking results show that SSDFS can:
(1) generate a smaller number of write I/O requests compared with:
1.4x - 116x (ext4),
14x - 42x (xfs),
6.2x - 9.8x (btrfs),
1.5x - 41x (f2fs),
0.6x - 22x (nilfs2);
(2) create a smaller payload compared with:
0.3x - 300x (ext4),
0.3x - 190x (xfs),
0.7x - 400x (btrfs),
1.2x - 400x (f2fs),
0.9x - 190x (nilfs2);
(3) decrease the write amplification factor compared with:
1.3x - 116x (ext4),
14x - 42x (xfs),
6x - 9x (btrfs),
1.5x - 50x (f2fs),
1.2x - 20x (nilfs2);
(4) prolong SSD lifetime compared with:
1.4x - 7.8x (ext4),
15x - 60x (xfs),
6x - 12x (btrfs),
1.5x - 7x (f2fs),
1x - 4.6x (nilfs2).
v2:
(*) File system code has been completely switched to memory folios.
(*) PEB-based deduplication model has been introduced.
(*) 8K, 16K, 32K logical block size support has been stabilized.
(*) PEB inflation model has been implemented.
(*) Shared dictionary and b-tree subsystems have been reworked significantly.
[CURRENT ISSUES]
(*) FSCK tool is not fully implemented.
(*) Multiple issues during xfstests run.
(*) ZNS SSD + SMR HDD support is not stable.
(*) Multiple erase blocks in segment model functionality is not stable.
(*) Collaboration of PEB inflation model, migration scheme, and
moving scheme has issues.
[TODO]
(*) Multi-drive support.
[REFERENCES]
[1] SSDFS tools: https://github.com/dubeyko/ssdfs-tools.git
[2] SSDFS driver: https://github.com/dubeyko/ssdfs-driver.git
[3] Linux kernel with SSDFS support: https://github.com/dubeyko/linux.git
[4] SSDFS (paper): https://arxiv.org/abs/1907.11825
[5] Embedded Linux 2022: https://www.youtube.com/watch?v=x5gklnkvi_Q
[6] Linux Plumbers 2022: https://www.youtube.com/watch?v=sBGddJBHsIo
[7] Why do you need SSDFS?: https://www.youtube.com/watch?v=7b_vrtRvsGM
[8] Linux Plumbers 2024: https://www.youtube.com/watch?v=0_f1kD7fGnE
Viacheslav Dubeyko (79):
ssdfs: introduce SSDFS on-disk layout
ssdfs: add key file system declarations
ssdfs: add key file system's function declarations
ssdfs: implement raw device operations
ssdfs: implement basic read/write primitives
ssdfs: implement super operations
ssdfs: implement commit superblock logic
ssdfs: segment header + log footer operations
ssdfs: add declaration of functions for superblock search
ssdfs: basic mount logic implementation
ssdfs: implement folio vector
ssdfs: implement dynamic array
ssdfs: implement sequence array
ssdfs: implement folio array
ssdfs: introduce PEB's block bitmap
ssdfs: implement PEB's block bitmap functionality
ssdfs: implement support of migration scheme in PEB bitmap
ssdfs: implement functionality of migration scheme in PEB bitmap
ssdfs: introduce segment block bitmap
ssdfs: implement functionality of segment block bitmap
ssdfs: introduce segment request queue
ssdfs: introduce offset translation table
ssdfs: implement offsets translation table functionality
ssdfs: introduce PEB object
ssdfs: implement PEB object functionality
ssdfs: implement compression logic support
ssdfs: introduce PEB container
ssdfs: implement PEB container functionality
ssdfs: implement migration scheme
ssdfs: PEB read thread logic
ssdfs: PEB flush thread's finite state machine
ssdfs: auxiliary GC threads logic
ssdfs: introduce segment object
ssdfs: implement segment object's functionality
ssdfs: implement current segment functionality
ssdfs: implement segment tree functionality
ssdfs: introduce PEB mapping queue
ssdfs: introduce PEB mapping table
ssdfs: implement PEB mapping table functionality
ssdfs: introduce PEB mapping table cache
ssdfs: implement PEB mapping table cache logic
ssdfs: introduce segment bitmap
ssdfs: implement segment bitmap's functionality
ssdfs: introduce b-tree object
ssdfs: implement b-tree object's functionality
ssdfs: introduce b-tree node object
ssdfs: implement b-tree node's functionality
ssdfs: introduce b-tree hierarchy object
ssdfs: implement b-tree hierarchy logic
ssdfs: introduce inodes b-tree
ssdfs: implement inodes b-tree functionality
ssdfs: introduce dentries b-tree
ssdfs: implement dentries b-tree functionality
ssdfs: introduce extents queue object
ssdfs: introduce extents b-tree
ssdfs: implement extents b-tree functionality
ssdfs: introduce invalidated extents b-tree
ssdfs: implement invalidated extents b-tree functionality
ssdfs: introduce shared extents b-tree
ssdfs: implement shared extents b-tree functionality
ssdfs: introduce PEB-based deduplication technique
ssdfs: introduce shared dictionary b-tree
ssdfs: implement shared dictionary b-tree functionality
ssdfs: implement snapshot requests queue functionality
ssdfs: introduce snapshots b-tree
ssdfs: implement snapshots b-tree functionality
ssdfs: implement extended attributes support
ssdfs: implement extended attributes b-tree functionality
ssdfs: introduce Diff-On-Write approach
ssdfs: implement sysfs support
ssdfs: implement IOCTL operations
ssdfs: introduce online FSCK stub logic
ssdfs: introduce application-based unit-tests
ssdfs: introduce Kunit-based unit-tests
ssdfs: implement inode operations support
ssdfs: implement directory operations support
ssdfs: implement file operations support
ssdfs: implement initial support of tunefs operations
Introduce SSDFS file system
fs/Kconfig | 1 +
fs/Makefile | 1 +
fs/ssdfs/.kunitconfig | 9 +
fs/ssdfs/Kconfig | 408 +
fs/ssdfs/Makefile | 63 +
fs/ssdfs/acl.c | 260 +
fs/ssdfs/acl.h | 54 +
fs/ssdfs/block_bitmap.c | 6948 +++++++
fs/ssdfs/block_bitmap.h | 393 +
fs/ssdfs/block_bitmap_tables.c | 311 +
fs/ssdfs/block_bitmap_test.c | 2380 +++
fs/ssdfs/btree.c | 8506 +++++++++
fs/ssdfs/btree.h | 219 +
fs/ssdfs/btree_hierarchy.c | 11632 ++++++++++++
fs/ssdfs/btree_hierarchy.h | 336 +
fs/ssdfs/btree_node.c | 18780 ++++++++++++++++++
fs/ssdfs/btree_node.h | 891 +
fs/ssdfs/btree_search.c | 1114 ++
fs/ssdfs/btree_search.h | 424 +
fs/ssdfs/common_bitmap.h | 230 +
fs/ssdfs/compr_lzo.c | 268 +
fs/ssdfs/compr_lzo_test.c | 570 +
fs/ssdfs/compr_zlib.c | 374 +
fs/ssdfs/compr_zlib_test.c | 401 +
fs/ssdfs/compression.c | 569 +
fs/ssdfs/compression.h | 108 +
fs/ssdfs/compression_test.c | 310 +
fs/ssdfs/current_segment.c | 949 +
fs/ssdfs/current_segment.h | 116 +
fs/ssdfs/dentries_tree.c | 10485 ++++++++++
fs/ssdfs/dentries_tree.h | 158 +
fs/ssdfs/dev_bdev.c | 1065 ++
fs/ssdfs/dev_mtd.c | 650 +
fs/ssdfs/dev_zns.c | 1344 ++
fs/ssdfs/diff_on_write.c | 158 +
fs/ssdfs/diff_on_write.h | 106 +
fs/ssdfs/diff_on_write_metadata.c | 2969 +++
fs/ssdfs/diff_on_write_user_data.c | 851 +
fs/ssdfs/dir.c | 2197 +++
fs/ssdfs/dynamic_array.c | 1594 ++
fs/ssdfs/dynamic_array.h | 103 +
fs/ssdfs/dynamic_array_test.c | 660 +
fs/ssdfs/extents_queue.c | 2013 ++
fs/ssdfs/extents_queue.h | 110 +
fs/ssdfs/extents_tree.c | 15349 +++++++++++++++
fs/ssdfs/extents_tree.h | 188 +
fs/ssdfs/file.c | 4341 +++++
fs/ssdfs/fingerprint.h | 261 +
fs/ssdfs/fingerprint_array.c | 795 +
fs/ssdfs/fingerprint_array.h | 82 +
fs/ssdfs/folio_array.c | 1781 ++
fs/ssdfs/folio_array.h | 146 +
fs/ssdfs/folio_array_test.c | 1107 ++
fs/ssdfs/folio_vector.c | 523 +
fs/ssdfs/folio_vector.h | 70 +
fs/ssdfs/folio_vector_test.c | 495 +
fs/ssdfs/fs_error.c | 265 +
fs/ssdfs/global_fsck.c | 598 +
fs/ssdfs/inode.c | 1262 ++
fs/ssdfs/inodes_tree.c | 6261 ++++++
fs/ssdfs/inodes_tree.h | 181 +
fs/ssdfs/invalidated_extents_tree.c | 7128 +++++++
fs/ssdfs/invalidated_extents_tree.h | 96 +
fs/ssdfs/ioctl.c | 453 +
fs/ssdfs/ioctl.h | 58 +
fs/ssdfs/log_footer.c | 991 +
fs/ssdfs/offset_translation_table.c | 12175 ++++++++++++
fs/ssdfs/offset_translation_table.h | 459 +
fs/ssdfs/options.c | 170 +
fs/ssdfs/peb.c | 1120 ++
fs/ssdfs/peb.h | 600 +
fs/ssdfs/peb_block_bitmap.c | 5740 ++++++
fs/ssdfs/peb_block_bitmap.h | 179 +
fs/ssdfs/peb_container.c | 6605 +++++++
fs/ssdfs/peb_container.h | 631 +
fs/ssdfs/peb_deduplication.c | 483 +
fs/ssdfs/peb_flush_thread.c | 24221 ++++++++++++++++++++++++
fs/ssdfs/peb_fsck_thread.c | 242 +
fs/ssdfs/peb_gc_thread.c | 3734 ++++
fs/ssdfs/peb_init.c | 1338 ++
fs/ssdfs/peb_init.h | 364 +
fs/ssdfs/peb_mapping_queue.c | 340 +
fs/ssdfs/peb_mapping_queue.h | 68 +
fs/ssdfs/peb_mapping_table.c | 13868 ++++++++++++++
fs/ssdfs/peb_mapping_table.h | 784 +
fs/ssdfs/peb_mapping_table_cache.c | 4897 +++++
fs/ssdfs/peb_mapping_table_cache.h | 120 +
fs/ssdfs/peb_mapping_table_thread.c | 2959 +++
fs/ssdfs/peb_migration_scheme.c | 1445 ++
fs/ssdfs/peb_read_thread.c | 14978 +++++++++++++++
fs/ssdfs/readwrite.c | 973 +
fs/ssdfs/recovery.c | 3706 ++++
fs/ssdfs/recovery.h | 451 +
fs/ssdfs/recovery_fast_search.c | 1200 ++
fs/ssdfs/recovery_slow_search.c | 587 +
fs/ssdfs/recovery_thread.c | 1215 ++
fs/ssdfs/request_queue.c | 1726 ++
fs/ssdfs/request_queue.h | 818 +
fs/ssdfs/segment.c | 8525 +++++++++
fs/ssdfs/segment.h | 1367 ++
fs/ssdfs/segment_bitmap.c | 5157 +++++
fs/ssdfs/segment_bitmap.h | 482 +
fs/ssdfs/segment_bitmap_tables.c | 887 +
fs/ssdfs/segment_block_bitmap.c | 1929 ++
fs/ssdfs/segment_block_bitmap.h | 240 +
fs/ssdfs/segment_tree.c | 996 +
fs/ssdfs/segment_tree.h | 107 +
fs/ssdfs/sequence_array.c | 1160 ++
fs/ssdfs/sequence_array.h | 140 +
fs/ssdfs/shared_dictionary.c | 21342 +++++++++++++++++++++
fs/ssdfs/shared_dictionary.h | 204 +
fs/ssdfs/shared_dictionary_thread.c | 457 +
fs/ssdfs/shared_extents_tree.c | 6866 +++++++
fs/ssdfs/shared_extents_tree.h | 146 +
fs/ssdfs/shared_extents_tree_thread.c | 808 +
fs/ssdfs/snapshot.c | 99 +
fs/ssdfs/snapshot.h | 283 +
fs/ssdfs/snapshot_requests_queue.c | 1249 ++
fs/ssdfs/snapshot_requests_queue.h | 65 +
fs/ssdfs/snapshot_rules.c | 739 +
fs/ssdfs/snapshot_rules.h | 55 +
fs/ssdfs/snapshots_tree.c | 8917 +++++++++
fs/ssdfs/snapshots_tree.h | 248 +
fs/ssdfs/snapshots_tree_thread.c | 665 +
fs/ssdfs/ssdfs.h | 502 +
fs/ssdfs/ssdfs_constants.h | 214 +
fs/ssdfs/ssdfs_fs_info.h | 821 +
fs/ssdfs/ssdfs_inline.h | 3037 +++
fs/ssdfs/ssdfs_inode_info.h | 144 +
fs/ssdfs/ssdfs_thread_info.h | 43 +
fs/ssdfs/super.c | 4873 +++++
fs/ssdfs/sysfs.c | 6558 +++++++
fs/ssdfs/sysfs.h | 305 +
fs/ssdfs/testing.c | 5949 ++++++
fs/ssdfs/testing.h | 226 +
fs/ssdfs/tunefs.c | 487 +
fs/ssdfs/version.h | 9 +
fs/ssdfs/volume_header.c | 1431 ++
fs/ssdfs/xattr.c | 1700 ++
fs/ssdfs/xattr.h | 88 +
fs/ssdfs/xattr_security.c | 159 +
fs/ssdfs/xattr_tree.c | 10132 ++++++++++
fs/ssdfs/xattr_tree.h | 143 +
fs/ssdfs/xattr_trusted.c | 93 +
fs/ssdfs/xattr_user.c | 93 +
include/linux/ssdfs_fs.h | 3565 ++++
include/trace/events/ssdfs.h | 256 +
include/uapi/linux/magic.h | 1 +
include/uapi/linux/ssdfs_fs.h | 126 +
149 files changed, 335803 insertions(+)
create mode 100644 fs/ssdfs/.kunitconfig
create mode 100644 fs/ssdfs/Kconfig
create mode 100644 fs/ssdfs/Makefile
create mode 100644 fs/ssdfs/acl.c
create mode 100644 fs/ssdfs/acl.h
create mode 100644 fs/ssdfs/block_bitmap.c
create mode 100644 fs/ssdfs/block_bitmap.h
create mode 100644 fs/ssdfs/block_bitmap_tables.c
create mode 100644 fs/ssdfs/block_bitmap_test.c
create mode 100644 fs/ssdfs/btree.c
create mode 100644 fs/ssdfs/btree.h
create mode 100644 fs/ssdfs/btree_hierarchy.c
create mode 100644 fs/ssdfs/btree_hierarchy.h
create mode 100644 fs/ssdfs/btree_node.c
create mode 100644 fs/ssdfs/btree_node.h
create mode 100644 fs/ssdfs/btree_search.c
create mode 100644 fs/ssdfs/btree_search.h
create mode 100644 fs/ssdfs/common_bitmap.h
create mode 100644 fs/ssdfs/compr_lzo.c
create mode 100644 fs/ssdfs/compr_lzo_test.c
create mode 100644 fs/ssdfs/compr_zlib.c
create mode 100644 fs/ssdfs/compr_zlib_test.c
create mode 100644 fs/ssdfs/compression.c
create mode 100644 fs/ssdfs/compression.h
create mode 100644 fs/ssdfs/compression_test.c
create mode 100644 fs/ssdfs/current_segment.c
create mode 100644 fs/ssdfs/current_segment.h
create mode 100644 fs/ssdfs/dentries_tree.c
create mode 100644 fs/ssdfs/dentries_tree.h
create mode 100644 fs/ssdfs/dev_bdev.c
create mode 100644 fs/ssdfs/dev_mtd.c
create mode 100644 fs/ssdfs/dev_zns.c
create mode 100644 fs/ssdfs/diff_on_write.c
create mode 100644 fs/ssdfs/diff_on_write.h
create mode 100644 fs/ssdfs/diff_on_write_metadata.c
create mode 100644 fs/ssdfs/diff_on_write_user_data.c
create mode 100644 fs/ssdfs/dir.c
create mode 100644 fs/ssdfs/dynamic_array.c
create mode 100644 fs/ssdfs/dynamic_array.h
create mode 100644 fs/ssdfs/dynamic_array_test.c
create mode 100644 fs/ssdfs/extents_queue.c
create mode 100644 fs/ssdfs/extents_queue.h
create mode 100644 fs/ssdfs/extents_tree.c
create mode 100644 fs/ssdfs/extents_tree.h
create mode 100644 fs/ssdfs/file.c
create mode 100644 fs/ssdfs/fingerprint.h
create mode 100644 fs/ssdfs/fingerprint_array.c
create mode 100644 fs/ssdfs/fingerprint_array.h
create mode 100644 fs/ssdfs/folio_array.c
create mode 100644 fs/ssdfs/folio_array.h
create mode 100644 fs/ssdfs/folio_array_test.c
create mode 100644 fs/ssdfs/folio_vector.c
create mode 100644 fs/ssdfs/folio_vector.h
create mode 100644 fs/ssdfs/folio_vector_test.c
create mode 100644 fs/ssdfs/fs_error.c
create mode 100644 fs/ssdfs/global_fsck.c
create mode 100644 fs/ssdfs/inode.c
create mode 100644 fs/ssdfs/inodes_tree.c
create mode 100644 fs/ssdfs/inodes_tree.h
create mode 100644 fs/ssdfs/invalidated_extents_tree.c
create mode 100644 fs/ssdfs/invalidated_extents_tree.h
create mode 100644 fs/ssdfs/ioctl.c
create mode 100644 fs/ssdfs/ioctl.h
create mode 100644 fs/ssdfs/log_footer.c
create mode 100644 fs/ssdfs/offset_translation_table.c
create mode 100644 fs/ssdfs/offset_translation_table.h
create mode 100644 fs/ssdfs/options.c
create mode 100644 fs/ssdfs/peb.c
create mode 100644 fs/ssdfs/peb.h
create mode 100644 fs/ssdfs/peb_block_bitmap.c
create mode 100644 fs/ssdfs/peb_block_bitmap.h
create mode 100644 fs/ssdfs/peb_container.c
create mode 100644 fs/ssdfs/peb_container.h
create mode 100644 fs/ssdfs/peb_deduplication.c
create mode 100644 fs/ssdfs/peb_flush_thread.c
create mode 100644 fs/ssdfs/peb_fsck_thread.c
create mode 100644 fs/ssdfs/peb_gc_thread.c
create mode 100644 fs/ssdfs/peb_init.c
create mode 100644 fs/ssdfs/peb_init.h
create mode 100644 fs/ssdfs/peb_mapping_queue.c
create mode 100644 fs/ssdfs/peb_mapping_queue.h
create mode 100644 fs/ssdfs/peb_mapping_table.c
create mode 100644 fs/ssdfs/peb_mapping_table.h
create mode 100644 fs/ssdfs/peb_mapping_table_cache.c
create mode 100644 fs/ssdfs/peb_mapping_table_cache.h
create mode 100644 fs/ssdfs/peb_mapping_table_thread.c
create mode 100644 fs/ssdfs/peb_migration_scheme.c
create mode 100644 fs/ssdfs/peb_read_thread.c
create mode 100644 fs/ssdfs/readwrite.c
create mode 100644 fs/ssdfs/recovery.c
create mode 100644 fs/ssdfs/recovery.h
create mode 100644 fs/ssdfs/recovery_fast_search.c
create mode 100644 fs/ssdfs/recovery_slow_search.c
create mode 100644 fs/ssdfs/recovery_thread.c
create mode 100644 fs/ssdfs/request_queue.c
create mode 100644 fs/ssdfs/request_queue.h
create mode 100644 fs/ssdfs/segment.c
create mode 100644 fs/ssdfs/segment.h
create mode 100644 fs/ssdfs/segment_bitmap.c
create mode 100644 fs/ssdfs/segment_bitmap.h
create mode 100644 fs/ssdfs/segment_bitmap_tables.c
create mode 100644 fs/ssdfs/segment_block_bitmap.c
create mode 100644 fs/ssdfs/segment_block_bitmap.h
create mode 100644 fs/ssdfs/segment_tree.c
create mode 100644 fs/ssdfs/segment_tree.h
create mode 100644 fs/ssdfs/sequence_array.c
create mode 100644 fs/ssdfs/sequence_array.h
create mode 100644 fs/ssdfs/shared_dictionary.c
create mode 100644 fs/ssdfs/shared_dictionary.h
create mode 100644 fs/ssdfs/shared_dictionary_thread.c
create mode 100644 fs/ssdfs/shared_extents_tree.c
create mode 100644 fs/ssdfs/shared_extents_tree.h
create mode 100644 fs/ssdfs/shared_extents_tree_thread.c
create mode 100644 fs/ssdfs/snapshot.c
create mode 100644 fs/ssdfs/snapshot.h
create mode 100644 fs/ssdfs/snapshot_requests_queue.c
create mode 100644 fs/ssdfs/snapshot_requests_queue.h
create mode 100644 fs/ssdfs/snapshot_rules.c
create mode 100644 fs/ssdfs/snapshot_rules.h
create mode 100644 fs/ssdfs/snapshots_tree.c
create mode 100644 fs/ssdfs/snapshots_tree.h
create mode 100644 fs/ssdfs/snapshots_tree_thread.c
create mode 100644 fs/ssdfs/ssdfs.h
create mode 100644 fs/ssdfs/ssdfs_constants.h
create mode 100644 fs/ssdfs/ssdfs_fs_info.h
create mode 100644 fs/ssdfs/ssdfs_inline.h
create mode 100644 fs/ssdfs/ssdfs_inode_info.h
create mode 100644 fs/ssdfs/ssdfs_thread_info.h
create mode 100644 fs/ssdfs/super.c
create mode 100644 fs/ssdfs/sysfs.c
create mode 100644 fs/ssdfs/sysfs.h
create mode 100644 fs/ssdfs/testing.c
create mode 100644 fs/ssdfs/testing.h
create mode 100644 fs/ssdfs/tunefs.c
create mode 100644 fs/ssdfs/version.h
create mode 100644 fs/ssdfs/volume_header.c
create mode 100644 fs/ssdfs/xattr.c
create mode 100644 fs/ssdfs/xattr.h
create mode 100644 fs/ssdfs/xattr_security.c
create mode 100644 fs/ssdfs/xattr_tree.c
create mode 100644 fs/ssdfs/xattr_tree.h
create mode 100644 fs/ssdfs/xattr_trusted.c
create mode 100644 fs/ssdfs/xattr_user.c
create mode 100644 include/linux/ssdfs_fs.h
create mode 100644 include/trace/events/ssdfs.h
create mode 100644 include/uapi/linux/ssdfs_fs.h
--
2.34.1