From 164a5bcb0bfcd34a465877cdc09f4c99d9038854 Mon Sep 17 00:00:00 2001
From: Guido Trotter <ultrotter@google.com>
Date: Tue, 30 Sep 2008 10:20:58 +0000
Subject: [PATCH] locking design: talk about removing locks

Reviewed-by: iustinp
---
 doc/design-2.0-locking.rst | 55 ++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 5 deletions(-)

diff --git a/doc/design-2.0-locking.rst b/doc/design-2.0-locking.rst
index 2ba839d9b..437271411 100644
--- a/doc/design-2.0-locking.rst
+++ b/doc/design-2.0-locking.rst
@@ -123,8 +123,8 @@ The API will have a way to grab one or more than one locks at the same time.
 Any attempt to grab a lock while already holding one in the wrong order will be
 checked for, and fail.
 
-Adding new locks
-~~~~~~~~~~~~~~~~
+Adding/Removing locks
+~~~~~~~~~~~~~~~~~~~~~
 
 When a new instance or a new node is created an associated lock must be added
 to the list. The relevant code will need to inform the locking library of such
@@ -132,9 +132,54 @@ a change.
 
 This needs to be compatible with every other lock in the system, especially
 metalocks that guarantee to grab sets of resources without specifying them
-explicitly.
-
-The implementation of this will be handled in the locking library itself.
+explicitly. The implementation of this will be handled in the locking library
+itself.
+
+Of course, when instances or nodes disappear from the cluster, the relevant
+locks must be removed. This is easier than adding new elements, as the code
+removing them must either own them exclusively or queue for their ownership,
+and thus deals with metalocks exactly as normal code acquiring those locks
+does. Any operation queueing on a removed lock will fail after its removal.
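+
+As an illustration of these semantics only (the class and method names below
+are hypothetical, not part of the actual locking library), a removable lock
+could look like the following minimal Python sketch, in which ``delete()``
+requires holding the lock and any waiter still queueing on it fails once the
+lock has been removed::
+
+  import threading
+
+  class LockDeletedError(Exception):
+    """Raised when acquiring a lock that has already been removed."""
+
+  class RemovableLock:
+    def __init__(self):
+      self._cond = threading.Condition()
+      self._held = False
+      self._deleted = False
+
+    def acquire(self):
+      with self._cond:
+        # Queue until the lock is free or has been removed
+        while self._held and not self._deleted:
+          self._cond.wait()
+        if self._deleted:
+          raise LockDeletedError()
+        self._held = True
+
+    def release(self):
+      with self._cond:
+        self._held = False
+        self._cond.notify_all()
+
+    def delete(self):
+      # Removal requires holding the lock; waiters still queueing
+      # on it wake up and fail with LockDeletedError.
+      with self._cond:
+        assert self._held, "must hold the lock to remove it"
+        self._deleted = True
+        self._held = False
+        self._cond.notify_all()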
 
 Asynchronous operations
 ~~~~~~~~~~~~~~~~~~~~~~~
-- 
GitLab