Re: [PATCH V2 4/4] cpuset,mm: update task's mems_allowed lazily
From: Nick Piggin
Date: Thu Mar 11 2010 - 06:03:30 EST
On Thu, Mar 11, 2010 at 06:33:02PM +0800, Miao Xie wrote:
> on 2010-3-11 16:15, Nick Piggin wrote:
> > On Tue, Mar 09, 2010 at 03:25:54PM +0800, Miao Xie wrote:
> >> on 2010-3-9 5:46, David Rientjes wrote:
> >> [snip]
> >>>> Since changes to task->mems_allowed are not frequent, in this patch
> >>>> I use two variables as a tag to indicate whether task->mems_allowed
> >>>> needs to be updated or not. And before setting the tag, cpuset caches
> >>>> the new mask of every task in its task_struct.
> >>>>
> >>>
> >>> So what exactly is the benefit of 58568d2 from last June that caused this
> >>> issue to begin with? It seems like this entire patchset is a revert of
> >>> that commit. So why shouldn't we just revert that one commit and then add
> >>> the locking and updating necessary for configs where
> >>> MAX_NUMNODES > BITS_PER_LONG on top?
> >>
> >> I was worried about the consistency of task->mempolicy with
> >> task->mems_allowed for configs where MAX_NUMNODES <= BITS_PER_LONG.
> >>
> >> The problem I was worried about is the following:
> >> When the kernel allocator allocates pages for a task, it accesses
> >> task->mempolicy first to get an allowed node, then checks whether that
> >> node is allowed by task->mems_allowed.
> >>
> >> But without this patch, ->mempolicy and ->mems_allowed are not updated
> >> at the same time, so the kernel allocator may see inconsistent
> >> ->mempolicy and ->mems_allowed information: for example, the allocator
> >> gets the allowed node from the old mempolicy, but checks whether that
> >> node is allowed against the new mems_allowed, which doesn't intersect
> >> the old mempolicy.
> >>
> >> So I made this patchset.
> >
> > I like your focus on keeping the hotpath light, but it is getting a bit
> > crazy. I wonder if it wouldn't be better just to teach those places that
> > matter to retry on finding an inconsistent nodemask? The only failure
> > case to worry about is getting an empty nodemask, isn't it?
> >
>
> Ok, I will try to make a new patch using seqlocks.
Well... I do think seqlocks would be a bit simpler because they don't
require the checking and synchronizing that this patch does.
But you are right: on non-x86 architectures seqlocks would probably be
more costly than your patch in the fastpaths. Unless you can avoid
using the seqlock in fastpaths and just have callers handle the rare
case of an empty nodemask.
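Roughly the shape I have in mind (just a sketch: the mems_seqlock field
is hypothetical, and tsk/new_mems are whatever the cpuset update path
has in hand):

	/*
	 * Writer (cpuset update path): update ->mems_allowed and
	 * ->mempolicy together under the seqlock so readers never
	 * see one without the other.
	 */
	write_seqlock(&tsk->mems_seqlock);
	tsk->mems_allowed = *new_mems;
	mpol_rebind_task(tsk, new_mems);
	write_sequnlock(&tsk->mems_seqlock);

	/*
	 * Reader (allocator slowpath): retry until we get a
	 * consistent snapshot of the mask.
	 */
	nodemask_t nodes;
	unsigned int seq;

	do {
		seq = read_seqbegin(&tsk->mems_seqlock);
		nodes = tsk->mems_allowed;
	} while (read_seqretry(&tsk->mems_seqlock, seq));

The fastpaths could keep reading ->mems_allowed locklessly and only
fall into the retry loop in the rare case the snapshot comes back
empty.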
cpuset_node_allowed_*wall doesn't need anything because it is just
interested in one bit in the mask.
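A single-bit test reads only one word of the mask, so it is atomic even
when MAX_NUMNODES > BITS_PER_LONG; something like this needs no locking
at all:

	/* Reads a single word of the nodemask; atomic on its own. */
	if (node_isset(node, current->mems_allowed))
		return 1;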
cpuset_mem_spread_node doesn't matter because it will loop around and
try again if it doesn't find any nodes online.
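From memory, its rotor already wraps, roughly:

	/* Advance the spread rotor, wrapping to the first node in
	 * the mask when next_node() runs off the end. */
	node = next_node(current->cpuset_mem_spread_rotor,
			 current->mems_allowed);
	if (node == MAX_NUMNODES)
		node = first_node(current->mems_allowed);
	current->cpuset_mem_spread_rotor = node;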
cpuset_mems_allowed seems totally broken anyway.
etc.
This approach might take a little more work, but I think it would be
the best way.