Re: [PATCH 02/20] STAGING/lustre: limit follow_link recursion using stack space.
From: Andreas Dilger
Date: Sun Apr 19 2015 - 16:57:28 EST
On Apr 17, 2015, at 9:01 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
>
> On Mon, Mar 23, 2015 at 01:37:38PM +1100, NeilBrown wrote:
>> lustre's ->follow_link() uses a lot of stack space and so
>> needs to limit symlink recursion based on stack size.
>>
>> It currently tests current->link_count, but that will soon
>> become private to fs/namei.c.
>> So instead, base it on the actual available stack space.
>> This patch aborts recursive symlinks if less than 2K of space
>> is available. This seems consistent with current code, but
>> hasn't been tested.
>
> BTW, in the best case that logic is fishy. We have "up to 5 levels with
> 4Kb stack and up to 7 with 8Kb one". Could somebody manage to dig out
> the reasons for such limits? Preferably along with the kernel versions
> where the overflows had been observed, for both the 4K and 8K cases.
Hi Al,
I checked in our bug history, and the 8KB stack limit was hit with
older clients running racer or our recursive-symlink regression test:
2.6.18: https://bugzilla.lustre.org/show_bug.cgi?id=18533#c0
2.6.16: https://bugzilla.lustre.org/show_bug.cgi?id=19380#c11
The 4KB stack limit for clients has existed a lot longer than that,
but CONFIG_4KSTACKS was not the default on all kernels for a while.
The following bug showed a stack overflow with 2.6.22 kernels:
https://bugzilla.lustre.org/show_bug.cgi?id=17379#c0
Prior to 2.6.16, when we needed client-side kernel patches and a custom
kernel build, we always forced CONFIG_4KSTACKS off in the config.
In general, Lustre is a heavy stack user because it is a network
filesystem, and doubly so if the Lustre client is re-exporting the
filesystem to NFS clients.
I'd be happy if symlink recursion were removed completely, but so far the
added symlink recursion limit hasn't been a problem for Lustre users.
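
Roughly speaking, the stack-based check being discussed boils down to
comparing the current stack position against the bottom of the task's
stack. Here is an untested sketch of the idea (not the actual patch code:
it assumes a downward-growing stack, reuses the ~2K threshold from the
patch description, and the helper name is made up):

	/*
	 * Illustrative sketch only, not the Lustre patch itself: report
	 * whether less than ~2K of kernel stack remains, assuming the
	 * stack grows down within the THREAD_SIZE region that starts at
	 * task_stack_page(current).
	 */
	#include <linux/sched.h>

	static bool stack_headroom_low(void)
	{
		unsigned long marker;
		/* the address of a local variable approximates the stack pointer */
		unsigned long sp = (unsigned long)&marker;
		unsigned long base = (unsigned long)task_stack_page(current);

		return sp - base < 2048;
	}

A ->follow_link() implementation would then return -ELOOP instead of
recursing whenever that check fires, which is what the patch does in spirit.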
Cheers, Andreas