Re: [PATCH 1b/7] dlm: core locking
From: David Teigland
Date: Thu Apr 28 2005 - 22:58:42 EST
On Thu, Apr 28, 2005 at 09:39:22AM -0700, Daniel McNeil wrote:
> Since a DLM is a distributed lock manager, its usage is entirely for
> locking some shared resource (might not be storage, might be shared
> state, shared data, etc). If the DLM can grant a lock, but not
> guarantee that other nodes (including the ones that have been kicked
> out of the cluster membership) do not have a conflicting DLM lock, then
> any applications that depend on the DLM for protection/coordination
> would be in trouble. Doesn't the GFS code depend on the DLM not being
> recovered until after fencing of dead nodes?
No, it doesn't. GFS depends on _GFS_ not being recovered until failed
nodes are fenced. Recovering GFS is an entirely different thing from
recovering the DLM. GFS actually writes to shared storage.
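To make the ordering concrete, here's a rough sketch of the dependency
(the function names below are made-up placeholders, not the actual
GFS/DLM entry points; only the ordering matters):

/* Conceptual sketch only; all functions are hypothetical stand-ins. */
#include <stdio.h>

static void recover_dlm_locks(int nodeid)
{
	/* Rebuild in-memory lock state among the surviving nodes;
	 * no shared storage is touched, so no fencing is needed. */
	printf("dlm: rebuild lock state held by node %d\n", nodeid);
}

static void wait_for_fencing(int nodeid)
{
	printf("fence: wait until node %d can no longer reach storage\n",
	       nodeid);
}

static void recover_gfs_journal(int nodeid)
{
	/* Replays the failed node's journal on shared storage. */
	printf("gfs: replay journal of node %d\n", nodeid);
}

static void handle_node_failure(int nodeid)
{
	recover_dlm_locks(nodeid);	/* may proceed immediately */
	wait_for_fencing(nodeid);	/* gates only the storage side */
	recover_gfs_journal(nodeid);	/* safe: victim can't write anymore */
}

int main(void)
{
	handle_node_failure(3);
	return 0;
}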
> Is there an existing DLM that does not depend on fencing? (you said
> yours was modeled after the VMS DLM; didn't it depend on fencing?)
I've never heard of a DLM that depends on fencing.
> How would an application use a DLM that does not depend on fencing?
Go back to the definition of i/o fencing: i/o fencing simply prevents a
machine from modifying shared storage. This is often done by disabling
the victim's connection to the shared storage. Notice that shared
storage is part of the definition; without it, fencing is irrelevant.
Fencing is not mainly about the node; it's mainly about the storage.
When a fencing victim is disconnected from storage, it usually means its
SAN port has been turned off on a switch. Notice that this doesn't touch
or affect the node at all -- it simply blocks any i/o from the node
before it reaches the storage.
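As an illustration, a fence agent in this model does nothing more than
the sketch below; switch_disable_port() and the port numbers are
invented for the example, standing in for however a real agent talks to
the switch (telnet, SNMP, etc.):

/* Hypothetical fence agent sketch -- not a real fencing program. */
#include <stdio.h>

struct fence_victim {
	int nodeid;
	int san_port;	/* victim's port on the SAN switch */
};

static int switch_disable_port(int port)
{
	/* Stand-in for the real switch management call. */
	printf("switch: port %d disabled\n", port);
	return 0;
}

static int fence_node(const struct fence_victim *v)
{
	/* Nothing here touches the victim node itself; its i/o is
	 * simply blocked at the switch before it reaches storage. */
	return switch_disable_port(v->san_port);
}

int main(void)
{
	struct fence_victim v = { .nodeid = 3, .san_port = 7 };
	return fence_node(&v);
}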
Any distributed app using the DLM that writes only to its own local
storage will not need fencing; there's nothing to fence.
Dave