Re: (2) [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
From: OGAWA Hirofumi
Date: Tue Jun 30 2020 - 12:26:15 EST
AMIT SAHRAWAT <a.sahrawat@xxxxxxxxxxx> writes:
> There are many implementations that don't follow the spec strictly. And
> when I tested in the past, Windows also allowed reading a directory beyond
> that limit. I can't recall, though, whether that was a real-world case or
> just a test case.
>>> Thanks Ogawa, yes, there are many implementations, presumably each deviating from the spec in its own way.
> But using a standard Linux kernel on these systems, with such a USB disk connected, is introducing issues (importantly, because users also use these disks on Windows).
> I am not sure if this is something new on Windows' part.
> But extending a directory beyond the limit is surely causing a regression in FAT usage on Linux.
A regression from what?
> It makes FAT-backed storage virtually unresponsive for minutes in these cases,
> and, importantly, it keeps putting pressure on memory due to the growing number of buffer heads (already a known issue with the FAT fs).
I'm confused. What actually happens? Now it looks like you are saying the
issue is the size extending beyond the limit. But previously you said corruption.
Are you saying "beyond that limit" is the fs corruption?
I.e. did you hit real directory corruption? Or are you trying to add the limit
because of slowness on big directories?
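For reference, the limit in question is the FAT spec's cap of 65536
directory entries; at 32 bytes per entry, that is 2 MiB per directory.
A minimal sketch of the kind of check the subject line names, in the
context of fs/fat/inode.c, where the constant names and the -EIO choice
are illustrative assumptions rather than the submitted patch:

/* Sketch only; the constants and the error value are assumptions
 * for illustration, not the actual patch.  The FAT spec allows at
 * most 65536 directory entries of 32 bytes each, i.e. 2 MiB per
 * directory.
 */
#define FAT_MAX_DIR_ENTRIES	65536
#define FAT_MAX_DIR_SIZE	(FAT_MAX_DIR_ENTRIES * 32)	/* 2 MiB */

static int fat_calc_dir_size(struct inode *inode)
{
	struct msdos_sb_info *sbi = MSDOS_SB(inode->i_sb);
	int ret, fclus, dclus;

	inode->i_size = 0;
	if (MSDOS_I(inode)->i_start == 0)
		return 0;

	/* Walk the cluster chain to compute the directory's size. */
	ret = fat_get_cluster(inode, FAT_ENT_EOF, &fclus, &dclus);
	if (ret < 0)
		return ret;
	inode->i_size = (fclus + 1) << sbi->cluster_bits;

	/* Hypothetical check: reject a directory beyond the spec limit. */
	if (inode->i_size > FAT_MAX_DIR_SIZE)
		return -EIO;

	return 0;
}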
> So if there is no strong reason to apply the limit, I don't think it is
> good to limit it.
>>> The reason we are sharing this is the unresponsive behaviour we observed with the FAT fs on our systems.
> This is not a new issue; we have been observing it for quite some time (maybe around a year or more).
> Finally, we got hold of a disk that makes this 100% reproducible.
> We thought of applying this to mainline, as our FAT code is aligned with the mainline kernel.
So what was the root cause of the slowness on the big directory?
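For scale (illustrative numbers only, not taken from the report): a
directory cluster chain inflated to 1 GiB, read through 512-byte blocks,
touches about 2 million blocks; at roughly 100 bytes per struct
buffer_head, that alone pins on the order of 200 MB of slab memory
before the data pages themselves are counted. That would explain the
stalls and the memory pressure either way, so it still doesn't tell us
whether the on-disk chain was actually corrupt or merely huge.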
Thanks.
--
OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx>