Re: [RFC patch 0/5] futex: Allow lockless empty check of hash bucket plist in futex_wake()
From: Ingo Molnar
Date: Sun Dec 01 2013 - 11:56:12 EST
* Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Sun, Dec 01, 2013 at 01:10:22PM +0100, Ingo Molnar wrote:
>
> > But more importantly, since these are all NUMA systems, would it
> > make sense to create per node hashes on NUMA? Each futex would be
> > enqueued into the hash belonging to its own page's node.
>
> Can't do that; we hash on vaddr, the actual page can move between
> nodes while a futex is queued.
Hm, indeed. We used to hash on the physical address - the very first
futex version from Rusty did:
+static inline struct list_head *hash_futex(struct page *page,
+					   unsigned long offset)
+{
+	unsigned long h;
+
+	/* struct page is shared, so we can hash on its address */
+	h = (unsigned long)page + offset;
+	return &futex_queues[hash_long(h, FUTEX_HASHBITS)];
+}
But this was changed to uaddr keying in:
69e9c9b518fc [PATCH] Unpinned futexes v2: indexing changes
(commit from the linux historic git tree.)
I think this design aspect could perhaps be revisited/corrected - in
what situations can a page move from under a futex? Only via the
memory migration system calls, or are there other channels as well?
Swapping should not affect the address, as the pages are pinned,
right?
Keeping the page invariant would bring significant performance
advantages to hashing.
> This would mean that the waiting futex is queued on another node
> than the waker is looking.
Yeah, that cannot work.
Thanks,
Ingo