Re: [PATCH 4/4] cpuset,mm: use rwlock to protect task->mempolicy and mems_allowed
From: Nick Piggin
Date: Thu Mar 11 2010 - 00:31:26 EST
On Thu, Mar 11, 2010 at 01:04:33PM +0800, Miao Xie wrote:
> on 2010-3-10 3:42, Paul Menage wrote:
> > On Sat, Mar 6, 2010 at 6:33 PM, Miao Xie <miaox@xxxxxxxxxxxxxx> wrote:
> >>
> >> Before applying this patch, cpuset updates task->mems_allowed just
> >> as you said. But the allocator can still end up seeing an empty
> >> nodemask. This problem was pointed out by Nick Piggin.
> >>
> >> The problem is the following:
> >> The size of nodemask_t can be greater than the size of a long
> >> integer, so loads and stores of a nodemask_t are not atomic
> >> operations. Suppose task->mems_allowed does not intersect with
> >> new_mask; for example, the first word of the old mask is empty and
> >> only the first word of new_mask is non-empty. If the allocator
> >> loads one word of the mask before
> >>
> >> current->mems_allowed |= new_mask;
> >>
> >> and then loads the other word of the mask after
> >>
> >> current->mems_allowed = new_mask;
> >>
> >> the allocator gets an empty nodemask.
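
To spell the torn read out, here is a minimal sketch (names and
widths are illustrative, not the actual cpuset code), with a
two-word mask standing in for nodemask_t:

    struct two_word_mask {
            unsigned long w[2];
    };

    /*
     * Updater, doing the two steps quoted above. Example:
     * old mask = { 0, 0x1 }, new mask = { 0x1, 0 }.
     */
    void update_mask(struct two_word_mask *cur,
                     const struct two_word_mask *new)
    {
            cur->w[0] |= new->w[0];         /* mems_allowed |= new_mask */
            cur->w[1] |= new->w[1];
            cur->w[0] = new->w[0];          /* mems_allowed = new_mask */
            cur->w[1] = new->w[1];
    }

    /*
     * A reader that loads w[0] before the first step (seeing 0) and
     * w[1] after the second step (also seeing 0) observes an empty
     * mask, although neither the old nor the new mask was empty.
     */
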
> >
> > Couldn't that be solved by having the reader read the nodemask twice
> > and compare them? In the normal case there's no race, so the second
> > read is straight from L1 cache and is very cheap. In the unlikely case
> > of a race, the reader would keep trying until it got two consistent
> > values in a row.
>
> I think this method can't fix the problem because we can't guarantee that
> the second read happens after the update of the mask completes.
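
For reference, the double read Paul describes would look something
like this (sketch only; nodes_equal() is the real nodemask.h helper,
the rest is illustrative), and indeed nothing in it guarantees that
the second copy runs after the update has finished:

    nodemask_t snapshot_mems_allowed(struct task_struct *tsk)
    {
            nodemask_t a, b;

            do {
                    a = tsk->mems_allowed;
                    barrier();      /* don't let the compiler merge the copies */
                    b = tsk->mems_allowed;
            } while (!nodes_equal(a, b));   /* retry on a torn read */

            return a;
    }
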
Any problem with using a seqlock?
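
i.e. something along these lines (sketch only; mems_seq is an
illustrative seqcount_t added to task_struct, with writers
serialized by the existing cpuset mutex):

    /* writer, cpuset update path */
    write_seqcount_begin(&tsk->mems_seq);
    tsk->mems_allowed = new_mask;   /* the two-step dance is no longer needed */
    write_seqcount_end(&tsk->mems_seq);

    /* reader, allocator path: retry if an update raced */
    nodemask_t mask;
    unsigned seq;

    do {
            seq = read_seqcount_begin(&tsk->mems_seq);
            mask = tsk->mems_allowed;
    } while (read_seqcount_retry(&tsk->mems_seq, seq));
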
The other thing you could do is store a pointer to the nodemask,
allocate a new nodemask when changing it, issue an smp_wmb(), and
then store the new pointer. The read side then only needs an
smp_read_barrier_depends().
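
Roughly (sketch only; mems_allowed_ptr is an illustrative field,
allocation failure handling is elided, and the old mask would need
to be freed via RCU or similar):

    /* writer: build the new mask, then publish the pointer */
    nodemask_t *fresh = kmalloc(sizeof(*fresh), GFP_KERNEL);

    *fresh = new_mask;
    smp_wmb();                      /* init happens before publication */
    tsk->mems_allowed_ptr = fresh;

    /* reader */
    nodemask_t *mask = tsk->mems_allowed_ptr;

    smp_read_barrier_depends();     /* pairs with the smp_wmb() above */
    /* ... use *mask ... */

rcu_assign_pointer()/rcu_dereference() bundle exactly these barriers,
so the same scheme can be written with those.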