Re: [PATCH 2/2] mm, memory_hotplug: remove timeout from __offline_memory

From: Michal Hocko
Date: Tue Sep 05 2017 - 03:23:19 EST


On Tue 05-09-17 11:16:57, Anshuman Khandual wrote:
> On 09/04/2017 02:45 PM, Michal Hocko wrote:
> > On Mon 04-09-17 17:05:15, Xishi Qiu wrote:
> >> On 2017/9/4 17:01, Michal Hocko wrote:
> >>
> >>> On Mon 04-09-17 16:58:30, Xishi Qiu wrote:
> >>>> On 2017/9/4 16:21, Michal Hocko wrote:
> >>>>
> >>>>> From: Michal Hocko <mhocko@xxxxxxxx>
> >>>>>
> > >>>>> We have had a hardcoded 120s timeout, after which the memory offline
> > >>>>> fails, essentially since hot remove was introduced. This is a policy
> > >>>>> implemented in the kernel. Moreover, there is no way to adjust the
> > >>>>> timeout, so we sometimes face memory offline failures when the system
> > >>>>> is under heavy memory pressure or a very CPU-intensive workload on
> > >>>>> large machines.
> >>>>>
> >>>>> It is not very clear what purpose the timeout actually serves. The
> >>>>> offline operation is interruptible by a signal so if userspace wants
> >>>> Hi Michal,
> >>>>
> >>>> If the user knows what to do when a migration runs for a long time,
> >>>> that is OK, but I don't think all users know about this operation
> >>>> (e.g. Ctrl+C) and its effects.
> >>> How is this operation any different from other potentially long
> >>> interruptible syscalls?
> >>>
> >> Hi Michal,
> >>
> >> I mean the user should stop it himself if the migration keeps
> >> retrying endlessly.
> > If the memory is migratable then the migration should finish
> > eventually. It can take some time but it shouldn't be an endless loop.
>
> But what if somehow the temporary condition (a page removed from the
> PCP/LRU lists but not yet freed back to the buddy allocator) happens
> again and again?

How would that happen? We have all pages in the range MIGRATE_ISOLATE, so
no pages will get reallocated, and we know that there are no unmigratable
pages in the range. So we should only see temporary migration failures.
If that is not the case then we have a bug somewhere.

> I understand we have schedule() and yield() to make sure that the
> context does not hold the CPU forever, but it can theoretically take a
> very long, if not endless, time to finish. In that case sending a
> signal to the user

I guess you meant to say a signal from userspace...

> space process that initiated the offline request is the only way to
> stop this retry loop. I think this is still a better approach than the
> 120-second timeout, which was kind of arbitrary.

Yeah, the context is interruptible, so if the operation takes unbearably
long then a watchdog can be set up trivially, with a user-defined
timeout. There is a good reason we do not add hardcoded timeouts to the
kernel.
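
For illustration only, a minimal userspace watchdog could look like the
sketch below (the memory block path and the 120s value are placeholders
for whatever the admin chooses):

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_alarm(int sig)
{
	(void)sig;	/* nothing to do; just interrupt the write() */
}

int main(void)
{
	struct sigaction sa = { 0 };
	int fd;

	sa.sa_handler = on_alarm;
	/* no SA_RESTART: the blocked write() must fail with EINTR */
	sigaction(SIGALRM, &sa, NULL);

	/* placeholder block; pick the memory block to offline */
	fd = open("/sys/devices/system/memory/memory32/state", O_WRONLY);
	if (fd < 0)
		return 1;

	alarm(120);	/* the timeout is user policy, not kernel policy */

	if (write(fd, "offline", 7) < 0 && errno == EINTR)
		fprintf(stderr, "offline aborted by watchdog\n");

	close(fd);
	return 0;
}

The write() blocks in the kernel's offline loop, which checks
signal_pending(), so the SIGALRM aborts the operation cleanly.
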
--
Michal Hocko
SUSE Labs