Re: Linux 5.16-rc1

From: Anton Altaparmakov
Date: Wed Nov 17 2021 - 20:27:39 EST


Hi Linus,

> On 18 Nov 2021, at 00:28, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, Nov 17, 2021 at 3:29 PM Anton Altaparmakov <anton@xxxxxxxxxx> wrote:
>>
>> What we need here is an array to store pointers to in-memory inodes that correspond to inodes in the inode table page being written out.
>
> Do we actually need the array?
>
> The ntfs_inode pointers in that array are always locked (using
> 'mrec_lock'), so it could be just a linked list of entries.
>
> Yeah, that would require adding a 'next' pointer to 'struct
> _ntfs_inode', but maybe that would be the right thing to do?
>
> I don't know the code, but it looks to me like it's literally just a
> stack of locked ntfs_inode pointers - where the lock is taken before
> adding it to the stack, and released after taking it off the stack. So
> a singly-linked list would seem to be a very simple implementation.

Thanks for the idea. Yes, you are correct: that would be a viable alternative, at the cost of the extra 'next' pointer in each ntfs_inode structure.

My concern is that whilst this would fix the compiler warning in aops.c, we have other such arrays in fs/ntfs/mft.c::write_mft_record_nolock() and fs/ntfs/mft.c::ntfs_sync_mft_mirror(), where each of those functions declares:

struct buffer_head *bhs[MAX_BHS];

And at the top of mft.c we have:

#define MAX_BHS (PAGE_SIZE / NTFS_BLOCK_SIZE)

So those arrays are each the same size as the one the compiler warns about in fs/ntfs/aops.c::ntfs_write_mst_block() where we have:

ntfs_inode *locked_nis[PAGE_SIZE / NTFS_BLOCK_SIZE];

So is it worth doing the singly-linked list to fix one file, only for compilation to fail a few files later when it gets to mft.c?

Best regards,

Anton
--
Anton Altaparmakov <anton at tuxera.com> (replace at with @)
Lead in File System Development, Tuxera Inc., http://www.tuxera.com/
Linux NTFS maintainer