[PATCH] filelock: add stubs for new functions when CONFIG_FILE_LOCKING=n

From: Jeff Layton
Date: Sun Feb 04 2024 - 07:33:27 EST


We recently added several functions to the file locking API. Add stubs
for those functions so that builds with CONFIG_FILE_LOCKING=n continue
to work.

Fixes: 403594111407 ("filelock: add some new helper functions")
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-kbuild-all/202402041412.6YvtlflL-lkp@xxxxxxxxx/
Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
---
Just a small follow-on fix for the file_lease split to keep
CONFIG_FILE_LOCKING=n builds working. Christian, it might be best to
squash this into the patch it Fixes.
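
For context, the sort of caller these stubs keep building with
CONFIG_FILE_LOCKING=n looks roughly like this (a made-up example, not
code from the tree):

	/* hypothetical caller; filelock.h provides the helpers or stubs */
	static bool example_conflicts(struct file_lock *fl)
	{
		if (lock_is_unlock(fl))		/* stub: always false on =n */
			return false;
		return lock_is_write(fl);	/* stub: always false on =n */
	}

Since the stubs return constants, the compiler can fold such checks
away on =n builds, much like the existing fcntl_getlease() stub does.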

That said, I'm starting to wonder if we ought to just hardcode
CONFIG_FILE_LOCKING to y. Does anyone ship kernels with it disabled? I
guess maybe people with stripped-down embedded builds might?

Another thought: "locks_" as a prefix is awfully generic. Might it be
better to rename these new functions with a "filelock_" prefix instead?
That would make it clearer to the casual reader that they deal with a
file_lock object. I'm happy to respin the set if that's the consensus.
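
For illustration, the renamed =n stubs would look something like this
(just a naming sketch to show the idea, not part of this patch):

	/* hypothetical rename of lock_is_unlock() */
	static inline bool filelock_is_unlock(struct file_lock *fl)
	{
		return false;
	}

	/* hypothetical rename of lock_is_write() */
	static inline bool filelock_is_write(struct file_lock *fl)
	{
		return false;
	}

...with the read/wake_up/iterator helpers renamed the same way.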
---
include/linux/filelock.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 4a5ad26962c1..553d65a88048 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -263,6 +263,27 @@ static inline int fcntl_getlease(struct file *filp)
 	return F_UNLCK;
 }
 
+static inline bool lock_is_unlock(struct file_lock *fl)
+{
+	return false;
+}
+
+static inline bool lock_is_read(struct file_lock *fl)
+{
+	return false;
+}
+
+static inline bool lock_is_write(struct file_lock *fl)
+{
+	return false;
+}
+
+static inline void locks_wake_up(struct file_lock *fl)
+{
+}
+
+#define for_each_file_lock(_fl, _head) while(false)
+
 static inline void
 locks_free_lock_context(struct inode *inode)
 {

---
base-commit: 1499e59af376949b062cdc039257f811f6c1697f
change-id: 20240204-flsplit3-da666d82b7b4

Best regards,
--
Jeff Layton <jlayton@xxxxxxxxxx>