Re: [PATCH -v5 0/9] migrate_pages(): batch TLB flushing
From: Huang, Ying
Date: Mon Feb 20 2023 - 04:29:09 EST
Hi, Hugh,
Hugh Dickins <hughd@xxxxxxxxxx> writes:
> On Mon, 13 Feb 2023, Huang Ying wrote:
>
>> From: "Huang, Ying" <ying.huang@xxxxxxxxx>
>>
>> Currently, migrate_pages() migrates folios one by one, as in the
>> following pseudo-code:
>>
>>   for each folio
>>     unmap
>>     flush TLB
>>     copy
>>     restore map
>>
>> If multiple folios are passed to migrate_pages(), there are
>> opportunities to batch the TLB flushing and copying. That is, we can
>> change the code to something like the following:
>>
>>   for each folio
>>     unmap
>>   for each folio
>>     flush TLB
>>   for each folio
>>     copy
>>   for each folio
>>     restore map
>>
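>> As a very rough C sketch (for illustration only: unmap_one(),
>> flush_tlb_batch(), copy_one() and remap_one() are hypothetical
>> stand-ins for the real steps inside the patchset, not actual kernel
>> APIs), the batched flow looks like:
>>
>>   #include <linux/list.h>
>>   #include <linux/mm.h>
>>
>>   /* Hypothetical per-step helpers, standing in for the real work. */
>>   static void unmap_one(struct folio *folio);
>>   static void flush_tlb_batch(struct list_head *folios);
>>   static void copy_one(struct folio *folio);
>>   static void remap_one(struct folio *folio);
>>
>>   static void migrate_folios_batched(struct list_head *folios)
>>   {
>>   	struct folio *folio;
>>
>>   	/* Unmap all folios first, deferring their TLB flushes. */
>>   	list_for_each_entry(folio, folios, lru)
>>   		unmap_one(folio);
>>
>>   	/* One batched TLB flush (and IPI) covers the whole batch. */
>>   	flush_tlb_batch(folios);
>>
>>   	/* Copy contents; this phase could be offloaded, e.g. to DSA. */
>>   	list_for_each_entry(folio, folios, lru)
>>   		copy_one(folio);
>>
>>   	/* Finally, restore the page table mappings. */
>>   	list_for_each_entry(folio, folios, lru)
>>   		remap_one(folio);
>>   }
>>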
>> The total number of TLB flush IPIs can be reduced considerably, and a
>> hardware accelerator such as DSA may be used to accelerate the folio
>> copying.
>>
>> So in this patchset, we refactor the migrate_pages() implementation
>> and implement TLB flush batching. Based on this, hardware-accelerated
>> folio copying can be implemented.
>>
>> If too many folios are passed to migrate_pages(), the naive batched
>> implementation may unmap too many folios at the same time, which
>> increases the chance that a task has to wait for the migrated folios
>> to be mapped again, hurting latency. To deal with this issue, the
>> maximum number of folios unmapped in one batch is restricted to
>> HPAGE_PMD_NR pages, so that the impact is at the same level as that
>> of THP migration.
>>
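>> As a rough sketch of such a cap (folio_nr_pages(), HPAGE_PMD_NR and
>> the list helpers are real kernel symbols; the function itself and its
>> placement are assumptions for illustration, not necessarily how the
>> patchset structures it):
>>
>>   #include <linux/huge_mm.h>	/* HPAGE_PMD_NR */
>>   #include <linux/mm.h>		/* struct folio, folio_nr_pages() */
>>
>>   /*
>>    * Move folios from 'from' to 'batch' until the batch would grow
>>    * beyond HPAGE_PMD_NR pages, so that no more pages are left
>>    * unmapped at once than a single THP migration would leave.
>>    */
>>   static long fill_migration_batch(struct list_head *from,
>>   				 struct list_head *batch)
>>   {
>>   	struct folio *folio, *next;
>>   	long nr_pages = 0;
>>
>>   	list_for_each_entry_safe(folio, next, from, lru) {
>>   		/* Always take at least one folio to guarantee progress. */
>>   		if (nr_pages &&
>>   		    nr_pages + folio_nr_pages(folio) > HPAGE_PMD_NR)
>>   			break;	/* the rest go into the next batch */
>>   		list_move_tail(&folio->lru, batch);
>>   		nr_pages += folio_nr_pages(folio);
>>   	}
>>   	return nr_pages;
>>   }
>>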
>> We use the following test to measure the performance impact of the
>> patchset:
>>
>> On a 2-socket Intel server,
>>
>> - Run the pmbench memory-access benchmark
>>
>> - Run `migratepages` to migrate the pages of pmbench back and forth
>>   between node 0 and node 1.
>>
>> With the patchset, the number of TLB flush IPIs is reduced by 99.1%
>> during the test, and the number of pages migrated successfully per
>> second increases by 291.7%.
>>
>> Xin Hao helped to test the patchset on an ARM64 server with 128 cores
>> and 2 NUMA nodes. The test results show that page migration
>> performance increases by up to 78%.
>>
>> This patchset is based on mm-unstable 2023-02-10.
>
> And back in linux-next this week: I tried next-20230217 overnight.
>
> There is a deadlock in this patchset (and in previous versions: sorry
> it's taken me so long to report), but I think one that's easily solved.
>
> I've not bisected to precisely which patch (load can take several hours
> to hit the deadlock), but it doesn't really matter, and I expect that
> you can guess.
>
> My root and home filesystems are ext4 (4kB blocks with 4kB PAGE_SIZE),
> and so is the filesystem I'm testing, ext4 on /dev/loop0 on tmpfs.
> So, plenty of ext4 page cache and buffer_heads.
>
> Again and again, the deadlock is seen with buffer_migrate_folio_norefs(),
> either in kcompactd0 or in khugepaged trying to compact, or in both:
> it ends up calling __lock_buffer(), and that schedules away, waiting
> forever to get BH_lock. I have not identified who is holding BH_lock,
> but I imagine a jbd2 journalling thread, and presume that it wants one
> of the folio locks which migrate_pages_batch() is already holding; or
> maybe it's all more convoluted than that. Other tasks then back up
> waiting on those folio locks held in the batch.
>
> Never a problem with buffer_migrate_folio(), always with the "more
> careful" buffer_migrate_folio_norefs(). And the patch below fixes
> it for me: I've had enough hours with it now, on enough occasions,
> to be confident of that.
>
> Cc'ing Jan Kara, who knows buffer_migrate_folio_norefs() and jbd2
> very well, and I hope can assure us that there is an understandable
> deadlock here, from holding several random folio locks, then trying
> to lock buffers. Cc'ing fsdevel, because there's a risk that mm
> folk think something is safe, when it's not sufficient to cope with
> the diversity of filesystems. I hope nothing more than the below is
> needed (and I've had no other problems with the patchset: good job),
> but cannot be sure.
>
> [PATCH next] migrate_pages: fix deadlock on buffer heads
>
> When __buffer_migrate_folio() is called from buffer_migrate_folio_norefs(),
> force MIGRATE_ASYNC mode so that buffer_migrate_lock_buffers() will only
> trylock_buffer(), failing with -EAGAIN as usual if that does not succeed.
>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
>
> --- next-20230217/mm/migrate.c
> +++ fixed/mm/migrate.c
> @@ -748,7 +748,8 @@ static int __buffer_migrate_folio(struct
>  	if (folio_ref_count(src) != expected_count)
>  		return -EAGAIN;
> 
> -	if (!buffer_migrate_lock_buffers(head, mode))
> +	if (!buffer_migrate_lock_buffers(head,
> +			check_refs ? MIGRATE_ASYNC : mode))
>  		return -EAGAIN;
> 
>  	if (check_refs) {
Thank you very much for pointing this out, and for the fix patch.
Today, my colleague Pengfei reported a deadlock bug to me. It seems
that we cannot wait for writeback to complete while we hold the locks
of some folios. The patch below can fix that deadlock. I don't know
whether it is related to the deadlock you ran into. It appears that
we should avoid locking or waiting synchronously once we have locked
more than one folio.
Best Regards,
Huang, Ying
------------------------------------8<------------------------------------