HappyDoc Generated Documentation | unidist/sharedlock.py
sharedlock

Shared Locking module.

Get access to shared locks for any program using this procblock module's state.

NOTE(g): Locks are never stored across process restarts. If a process goes down, assume all locks are lost and must be restored. There is no way to determine what effect the restart will have on a lock, so the assumption must be made that the locks have been lost. Only distributed locking can solve this: the loss of any given node's locks means nothing, because a quorum (N/2+1) must be reached before a new lock is acquired, so the restart of this lock database node means nothing in itself, as a minimum of 3 nodes must be present for a quorum to decide on a lock. An individual node's code will restart as well, so the code will proceed as if no locks are already set and start requesting locks. This appears to be the right way to handle this.

TODO(g): Allow quorum-locking for multi-region locks. This isn't about implementation in this module; it's about creating the method for linking a number of locks together so that, across an entire system, the first to get (N/2)+1 locks is the victor, all other locks are undone and given to the victor, and other lock attempts must block until the quorum releases the lock and allows another (N/2)+1 locks to be made. Distributed lock systems allow flexibility as lock servers come and go in large-scale systems. No single system can be the master in this environment; a majority quorum must be met to enforce any lock. Each lock server should attempt to gain the quorum lock before locking itself. There is a dual layer of locking here: the attempt-lock and the actual-lock. The attempt-lock is on this node only; the actual-lock exists once a quorum has been reached. Attempt-locks are blocked by actual-locks. All distributed locks must have a timeout specified so that the network does not come to a halt. To avoid deadlocks, each lock name must register a unique priority, and locks must be acquired in order from low priority to high, so a high lock is never requested before a low lock. This avoids deadlocks by design. The ordering can be enforced with credentials: each node's request session has a credential, credentials are tracked, and any requests from a credential must arrive in priority order or be rejected as a possible deadlock scenario.
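The per-process behavior described above, with locks held only in the module's in-memory state and therefore lost on restart, can be sketched as follows. This is a minimal sketch; the names LOCKS, acquire and release are assumptions for illustration, not the module's actual API.

```python
import threading
import time

# Module-level state: locks live only in this process's memory and are
# lost whenever the process restarts, as noted above.
LOCKS = {}                      # lock name -> owner identifier
_STATE_LOCK = threading.Lock()  # guards LOCKS itself

def acquire(name, owner, timeout=5.0, poll=0.05):
    """Try to acquire the named lock for `owner`, retrying until `timeout` expires."""
    deadline = time.time() + timeout
    while True:
        with _STATE_LOCK:
            if name not in LOCKS:
                LOCKS[name] = owner
                return True
        if time.time() >= deadline:
            return False
        time.sleep(poll)

def release(name, owner):
    """Release the named lock if `owner` currently holds it."""
    with _STATE_LOCK:
        if LOCKS.get(name) == owner:
            del LOCKS[name]
            return True
    return False
```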
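The quorum rule and the low-to-high priority ordering from the TODO can be illustrated with a short, self-contained sketch. All names here (quorum, actual_lock_held, in_priority_order) are hypothetical; nothing in the module implements quorum-locking yet.

```python
def quorum(node_count):
    """Majority quorum: (N/2)+1 of N lock servers must grant the lock."""
    return node_count // 2 + 1

def actual_lock_held(grants, node_count):
    """The actual-lock exists only once a majority of nodes granted this node's attempt-lock."""
    return len(grants) >= quorum(node_count)

def in_priority_order(request_priorities):
    """Accept a credential's lock requests only if they arrive from low priority to high."""
    return all(a < b for a, b in zip(request_priorities, request_priorities[1:]))

# With 5 lock servers, 3 grants are required; 2 is not enough.
print(actual_lock_held({"node-1", "node-2", "node-3"}, 5))  # True
print(actual_lock_held({"node-1", "node-2"}, 5))            # False

# Requests out of priority order would be rejected as a possible deadlock.
print(in_priority_order([1, 3, 7]))  # True
print(in_priority_order([5, 2]))     # False
```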