RE: sync_set_bit() vs set_bit() -- what's the difference?

From: KY Srinivasan
Date: Wed Aug 27 2014 - 09:56:34 EST

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Wednesday, August 27, 2014 12:39 AM
> To: Dexuan Cui
> Cc: jeremy@xxxxxxxx; KY Srinivasan; chrisw@xxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: sync_set_bit() vs set_bit() -- what's the difference?
>
> >>> On 27.08.14 at 09:30, <decui@xxxxxxxxxxxxx> wrote:
> > I'm curious about the difference. :-)
> >
> > sync_set_bit() is only used in drivers/hv/ and drivers/xen/ while
> > set_bit() is used in all other places. What makes hv/xen special?
>
> I guess this would really want to be used by anything communicating with a
> hypervisor or a remote driver: set_bit() gets its LOCK prefix discarded when
> the local kernel determines it runs on a single CPU only. Obviously having
> knowledge of the CPU count inside a VM does not imply anything about the
> number of CPUs available to the host, i.e. stripping LOCK prefixes in that case
> would be unsafe.

That is exactly the case for Hyper-V (and Xen).

K. Y
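
For illustration, a simplified sketch of the two x86 helpers (loosely modelled on the kernel's headers, not the verbatim source): set_bit() emits its LOCK prefix through the LOCK_PREFIX macro, which the kernel patches away when it decides it is running on a single CPU, while sync_set_bit() hard-codes the prefix.

/*
 * Simplified sketch, loosely following the x86 headers -- not the
 * verbatim kernel definitions.  LOCK_PREFIX comes from
 * <asm/alternative.h>; when the kernel determines it runs on one CPU
 * only, that prefix is patched out, so set_bit() degrades to a plain
 * read-modify-write.  64-bit ("btsq") forms are shown for brevity.
 */
static inline void set_bit(long nr, volatile unsigned long *addr)
{
	asm volatile(LOCK_PREFIX "btsq %1,%0"	/* prefix may be NOPed out on UP */
		     : "+m" (*addr)
		     : "Ir" (nr)
		     : "memory");
}

static inline void sync_set_bit(long nr, volatile unsigned long *addr)
{
	asm volatile("lock; btsq %1,%0"		/* prefix is never stripped */
		     : "+m" (*addr)
		     : "Ir" (nr)
		     : "memory");
}

Hence drivers/hv/ and drivers/xen/ use the sync_ variants when setting bits in memory shared with the hypervisor (e.g. event-flag pages): the guest's view of its own CPU count says nothing about concurrent accesses from the host side.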
