Re: [f2fs-dev] [PATCH v4] f2fs: compress: add compress_inode to cache compressed blocks

From: Jaegeuk Kim
Date: Wed Apr 21 2021 - 23:59:49 EST


On 04/21, Chao Yu wrote:
> On 2021/3/11 4:52, Jaegeuk Kim wrote:
> > On 03/09, Chao Yu wrote:
> > > On 2021/3/9 8:01, Jaegeuk Kim wrote:
> > > > On 03/05, Chao Yu wrote:
> > > > > On 2021/3/5 4:20, Jaegeuk Kim wrote:
> > > > > > On 02/27, Jaegeuk Kim wrote:
> > > > > > > On 02/04, Chao Yu wrote:
> > > > > > > > Jaegeuk,
> > > > > > > >
> > > > > > > > On 2021/2/2 16:00, Chao Yu wrote:
> > > > > > > > > - for (i = 0; i < dic->nr_cpages; i++) {
> > > > > > > > > + for (i = 0; i < cc->nr_cpages; i++) {
> > > > > > > > > struct page *page = dic->cpages[i];
> > > > > > > >
> > > > > > > > por_fsstress still hang in this line?
> > > > > > >
> > > > > > > I'm stuck on testing the patches, since the latest kernel is panicking somehow.
> > > > > > > Let me update later, once I can test a bit. :(
> > > > > >
> > > > > > It seems this works without error.
> > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=4e6e1364dccba80ed44925870b97fbcf989b96c9
> > > > >
> > > > > Ah, good news.
> > > > >
> > > > > Thanks for helping to test the patch. :)
> > > >
> > > > Hmm, I hit this again. Let me check w/o compress_cache back. :(
> > >
> > > Oops :(
> >
> > Ok, apparently that panic is caused by compress_cache. The test has been running for
> > over 24 hours w/o it.
>
> Jaegeuk,
>
> I'm still struggling to troubleshoot this issue.
>
> However, I failed again to reproduce this bug; I suspect the reason may be that
> my test script and environment (device type/size) differ from yours.
> (btw, I used pmem as the back-end device, and tested w/ all fault injection
> points except the write_io/checkpoint fault injection points)
>
> Could you please share your run.sh script and test command with me?
>
> Also, I'd like to ask: what are your device type and size?

I'm using qemu with a 16GB device and this script:
https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh

./run.sh por_fsstress
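
For reference, a rough sketch of this kind of reproduction setup (not the actual
run.sh above; the device path, mount point, <rate> and <mask> values are
placeholders, and the exact options depend on your mkfs.f2fs/kernel versions):

# assumes a 16GB scratch disk exposed to the qemu guest as /dev/vdb, and a
# kernel built with CONFIG_F2FS_FS_COMPRESSION and CONFIG_F2FS_FAULT_INJECTION
mkfs.f2fs -f -O extra_attr,compression /dev/vdb

# fault_injection=<rate> and fault_type=<bitmask> are the f2fs mount options
# that control fault injection; choose a bitmask that excludes the
# write_io/checkpoint fault points to match the test described above
# (see Documentation/filesystems/f2fs.rst for the bit assignments)
mount -t f2fs \
      -o compress_algorithm=lz4,compress_cache,fault_injection=<rate>,fault_type=<mask> \
      /dev/vdb /mnt/f2fs

# the por_fsstress case is a power-off-recovery stress test: fsstress on the
# mount plus sudden resets and remounts (see run.sh above for the real sequence)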

>
> Thanks,
>