Re: [RFC v2 0/4] fs/locks: Use plain percpu spinlocks instead of lglock to protect file_lock
From: Jeff Layton
Date: Mon Mar 02 2015 - 10:24:01 EST
On Mon, 2 Mar 2015 15:25:09 +0100
Daniel Wagner <daniel.wagner@xxxxxxxxxxxx> wrote:
> Hi Jeff,
>
> I've dropped the spinlock conversion for the time being. Maybe the
> last patch, which changes the usage of blocked_lock_lock, is still
> useful. And in case I can convince you of the spinlock conversion, it
> can easily be done on top of it. I think doing it this way also makes
> the series simpler to review.
>
> cheers,
> daniel
>
> v2:
> - added a few lockdep assertions
> - dropped spinlock conversion
>
> v1:
> - rebased on v3.19-8975-g3d88348
> - split into smaller pieces
> - fixed a wrong usage of __locks_insert/delete_block() and its posix version
> - added seqfile helpers to avoid ugly open coded version
>
>
> Original cover letter:
>
> I am looking at how to get rid of lglock. The reason is that -rt is
> not too happy with that lock, especially since it uses arch_spinlock_t
> and therefore is not converted into a mutex on -rt. I know that no
> change is accepted which only fixes something for -rt alone. So here
> is my attempt to make things faster for mainline while also fixing -rt.
>
> There are two users of lglock at this point: fs/locks.c and
> kernel/stop_machine.c.
>
> I presume fs/locks is the more interesting one with respect to
> performance. Let's have a look at that one first.
>
> The lglock version of file_lock_lock is used in combination with
> blocked_lock_lock to protect file_lock's fl_link, fl_block, fl_next,
> blocked_hash and the percpu file_lock_list.
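>
> To make the names above concrete, here is roughly what is in play
> (simplified from include/linux/fs.h and fs/locks.c; the percpu
> spinlock is what this series proposes, not what mainline has today,
> so don't take the details literally):
>
>         /* global data in fs/locks.c, simplified */
>         static DEFINE_PER_CPU(struct hlist_head, file_lock_list);
>         static DEFINE_PER_CPU(spinlock_t, file_lock_lock); /* proposed; needs per-cpu init */
>         static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);
>         static DEFINE_SPINLOCK(blocked_lock_lock);
>
>         /* the struct file_lock fields involved, simplified */
>         struct file_lock {
>                 struct hlist_node fl_link;   /* node on a percpu file_lock_list */
>                 int fl_link_cpu;             /* which CPU's list fl_link is on */
>                 struct list_head fl_block;   /* locks blocked on this lock */
>                 struct file_lock *fl_next;   /* for a waiter: the blocking lock */
>                 /* ... */
>         };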
>
> The plan is to reorganize the usage of the locks and what they protect
> so that the usage of the global blocked_lock_lock is reduced.
>
> Whenever we insert a new lock, we grab the corresponding percpu
> file_lock_lock in addition to the i_lock. The global blocked_lock_lock
> is only taken when blocked_hash is involved.
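>
> As a rough sketch (not the actual patch, and assuming file_lock_lock
> becomes a plain DEFINE_PER_CPU spinlock as in the sketch above),
> putting a lock onto the global list would then look something like:
>
>         static void locks_insert_global_locks(struct file_lock *fl)
>         {
>                 int cpu = get_cpu();    /* stay on this CPU while touching its list */
>
>                 spin_lock(per_cpu_ptr(&file_lock_lock, cpu));
>                 fl->fl_link_cpu = cpu;
>                 hlist_add_head(&fl->fl_link, per_cpu_ptr(&file_lock_list, cpu));
>                 spin_unlock(per_cpu_ptr(&file_lock_lock, cpu));
>                 put_cpu();
>         }
>
> The caller would already hold the i_lock; only the blocked_hash paths
> would additionally take blocked_lock_lock.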
>
> file_lock_list exists to be able to produce the contents of
> /proc/locks. For listing all the locks it seems a bit excessive to
> grab all the locks at once. We should be okay just grabbing the
> corresponding lock when iterating over the percpu file_lock_list.
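>
> In other words, instead of lg_global_lock()'ing everything for the
> whole dump, the walk could take one CPU's lock at a time, roughly
> like this (illustration only, ignoring the seq_file start/next/stop
> split for the moment):
>
>         struct file_lock *fl;
>         int cpu;
>
>         for_each_possible_cpu(cpu) {
>                 spin_lock(per_cpu_ptr(&file_lock_lock, cpu));
>                 hlist_for_each_entry(fl, per_cpu_ptr(&file_lock_list, cpu), fl_link)
>                         /* emit one /proc/locks line for fl */;
>                 spin_unlock(per_cpu_ptr(&file_lock_lock, cpu));
>         }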
>
> file_lock_lock now protects file_lock_list, fl_link, fl_block and
> fl_next alone. That means we need to define which file_lock_lock is
> used for all waiters. Luckily, fl_link_cpu can be reused for fl_block
> and fl_next.
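>
> One way that could look (purely illustrative, the function name is
> made up, and this is only the flock/non-hashed case; the posix
> variant would additionally take blocked_lock_lock for blocked_hash):
>
>         static void locks_insert_block_flock(struct file_lock *blocker,
>                                              struct file_lock *waiter)
>         {
>                 /* the waiter lists hang off the blocker, so the blocker's
>                  * fl_link_cpu decides which percpu file_lock_lock covers
>                  * fl_block and fl_next */
>                 spin_lock(per_cpu_ptr(&file_lock_lock, blocker->fl_link_cpu));
>                 waiter->fl_next = blocker;
>                 list_add_tail(&waiter->fl_block, &blocker->fl_block);
>                 spin_unlock(per_cpu_ptr(&file_lock_lock, blocker->fl_link_cpu));
>         }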
>
> I haven't found a good way around the open coded seq_ops
> (locks_start, locks_next, locks_stop). Maybe someone has a good idea
> how to handle the locking there.
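>
> For reference, a hand-wavy and completely untested version of what a
> per-cpu-locked locks_next() would have to do; locks_start() and
> locks_stop() would need matching changes, and stop would have to know
> whether a per-cpu lock is still held:
>
>         /* crossing from one CPU's hlist to the next means dropping that
>          * CPU's file_lock_lock and taking the next one in mid-walk, which
>          * the generic seq_hlist_*_percpu() helpers cannot do for us */
>         static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
>         {
>                 struct locks_iterator *iter = f->private;
>                 struct hlist_node *node = ((struct hlist_node *)v)->next;
>
>                 ++*pos;
>                 while (!node) {
>                         spin_unlock(per_cpu_ptr(&file_lock_lock, iter->li_cpu));
>                         iter->li_cpu = cpumask_next(iter->li_cpu, cpu_possible_mask);
>                         if (iter->li_cpu >= nr_cpu_ids)
>                                 return NULL;
>                         spin_lock(per_cpu_ptr(&file_lock_lock, iter->li_cpu));
>                         node = per_cpu_ptr(&file_lock_list, iter->li_cpu)->first;
>                 }
>                 return node;
>         }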
>
> For performance testing I used
> git://git.samba.org/jlayton/lockperf.git and for correctness
> https://github.com/linux-test-project/ltp/tree/master/testcases/network/nfsv4/locks
> In case you are missing the posix03 results: my machine doesn't like
> that test too much and the very high load brings it to its knees.
> Probably I need different parameters.
>
> I haven't run extensive tests so far, because I am waiting to get
> access to a bigger box compared to my small i7-4850HQ system. I hope
> to see larger improvements when more cores are involved.
>
> [...]
>
> Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
> Cc: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
> Cc: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
> Cc: linux-fsdevel@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
>
> Daniel Wagner (4):
> locks: Remove unnecessary IS_POSIX test
> locks: Add lockdep assertion for blocked_lock_lock
> locks: Split insert/delete block functions into flock/posix parts
> locks: Use blocked_lock_lock only to protect blocked_hash
>
> fs/locks.c | 124 +++++++++++++++++++++++++++++++++++++++++--------------------
> 1 file changed, 84 insertions(+), 40 deletions(-)
>
These look good at first glance, but I do need to go over patches 3 and
4 in more detail.
FWIW, usually when I see "RFC" in the subject, I take it as a hint that
this is still a work in progress and that you're looking for early
feedback on it, and hence that it shouldn't be merged yet. Is that the
case here, or would I be OK to merge these?
--
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>