Re: [PATCH v4 1/2] hfsplus: refactor b-tree map page access and add node-type validation
From: Shardul Bankar
Date: Fri Feb 27 2026 - 12:09:56 EST
On Thu, 2026-02-26 at 23:50 +0000, Viacheslav Dubeyko wrote:
> On Thu, 2026-02-26 at 14:42 +0530, Shardul Bankar wrote:
> >
> > +/*
> > + * Maps the page containing the b-tree map record and calculates offsets.
> > + * Automatically handles the difference between header and map nodes.
> > + * Returns the mapped data pointer, or an ERR_PTR on failure.
> > + * Note: The caller is responsible for calling kunmap_local(data).
> > + */
> > +static u8 *hfs_bmap_get_map_page(struct hfs_bnode *node, u16 *off, u16 *len,
> > +				 unsigned int *page_idx)
>
> I think we don't need the off, len, and page_idx arguments here. I
> suggest a slightly different interface:
>
> u8 hfs_bmap_get_map_byte(struct hfs_bnode *node, u32 bit_index);
> int hfs_bmap_set_map_byte(struct hfs_bnode *node, u32 bit_index, u8 byte);
>
> In this case the memory operations will be atomic and all the
> kmap_local()/kunmap_local() calls will be hidden inside these methods.
Hi Slava,
Regarding the get_map_byte/set_map_byte interface: forcing a
kmap_local_page()/kunmap_local() pair on every byte inside the
hfs_bmap_alloc() linear scan would cause a severe performance
regression. I provide a detailed breakdown of this overhead, and a
proposed alternative, in my reply to your review of patch 2/2.
> However, I am slightly worried that I don't see any locking mechanisms
> in hfs_bmap_alloc(). At minimum, I believe we can use
> lock_page()/unlock_page() here. However, it will not be enough; that is
> good for accessing only one page, but we need some lock for the whole
> bitmap. Maybe I am missing something, but if I am not, then we have
> huge room for race conditions in b-tree operations.
Regarding the locking, concurrent access to the allocator is already
prevented by the per-tree tree->tree_lock mutex. Operations that reach
hfs_bmap_alloc() (e.g., node splits via hfs_brec_insert) are executed
within a search context initialized by hfs_find_init(), which holds
mutex_lock(&tree->tree_lock). Therefore, the map nodes are safely
serialized without needing individual lock_page() calls during the
scan.
>
> > 	for (;;) {
> > 		while (len) {
> > 			byte = data[off];
> > 			if (byte != 0xff) {
> > -				for (m = 0x80, i = 0; i < 8; m >>= 1, i++) {
> > +				for (m = HFSPLUS_BTREE_NODE0_BIT, i = 0; i < 8; m >>= 1, i++) {
>
> You are not right here. The 0x80 is simply a bit pattern; it is not
> HFSPLUS_BTREE_NODE0_BIT, because it could be testing any byte of the map.
>
Acked, good catch; I will retain the original 0x80 pattern in the
allocation loop.
Thanks,
Shardul