Class: DLMServer

Clusterluck.DLMServer(gossip, kernel [, opts])

new DLMServer(gossip, kernel [, opts])

Distributed lock manager server, implemented using the Redlock algorithm https://redis.io/topics/distlock.

Parameters:
Name Type Attributes Description
gossip Clusterluck.GossipRing

Gossip ring to coordinate ring state from.

kernel Clusterluck.NetKernel

Network kernel to communicate with other nodes.

opts Object <optional>

Options object containing information about read/write quorums, disk persistence options, and wait times for retry logic on lock requests.

Properties
Name Type Attributes Description
rquorum Number <optional>

Quorum for read lock requests.

wquorum Number <optional>

Quorum for write lock requests.

rfactor Number <optional>

Replication factor for number of nodes to involve in a quorum.

minWaitTimeout Number <optional>

Minimum amount of time in milliseconds to wait for a retry on a locking request.

maxWaitTimeout Number <optional>

Maximum amount of time in milliseconds to wait for a retry on a locking request.

disk Boolean <optional>

Whether to persist lock state to disk. If true is passed, the following options will be read.

path String <optional>

Path for underlying DTable instance to flush state to.

writeThreshold Number <optional>

Write threshold of underlying DTable instance.

autoSave Number <optional>

Autosave interval of underlying DTable instance.

fsyncInterval Number <optional>

Fsync interval of underlying DTable instance.

compress Boolean <optional>

Whether to feed RDB snapshot streams through a GZIP compression stream for the underlying DTable instance.

name String <optional>

Name of underlying DTable to write to.
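To make the options shape concrete, here is a hypothetical opts object covering the properties above. The field names come from this reference; every value shown is an assumption for illustration, not a documented default.

```javascript
// Hypothetical options object for the DLMServer constructor.
// Field names follow the reference above; the values are assumptions.
const opts = {
  rquorum: 0.51,          // read quorum (assumed to be a majority fraction)
  wquorum: 0.51,          // write quorum
  rfactor: 3,             // replication factor: nodes involved in a quorum
  minWaitTimeout: 10,     // ms, lower bound for retry wait
  maxWaitTimeout: 100,    // ms, upper bound for retry wait
  disk: true,             // persist lock state to disk
  path: "/tmp/dlm_table", // flush path for the underlying DTable
  writeThreshold: 100,    // write threshold of the underlying DTable
  autoSave: 180000,       // autosave interval of the underlying DTable, ms
  fsyncInterval: 1000,    // fsync interval of the underlying DTable, ms
  compress: true,         // gzip the RDB snapshot streams
  name: "dlm_table"       // name of the underlying DTable
};

console.log(opts.rfactor); // 3
```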


Methods

(static) calculateWaitTime(min, max) → {Number}

Calculates wait time for retry functionality in rlock and wlock requests.

Parameters:
Name Type Description
min Number

Minimum wait time.

max Number

Maximum wait time.

Returns:

Amount of time to wait.

Type
Number
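A plausible sketch of this calculation, assuming the wait time is drawn uniformly at random between min and max (the description of minWaitTimeout/maxWaitTimeout above suggests this, but the library's exact distribution is an assumption):

```javascript
// Sketch of calculateWaitTime: pick a random wait uniformly in [min, max).
// The uniform distribution is an assumption, not the library's exact code.
function calculateWaitTime(min, max) {
  return min + Math.floor(Math.random() * (max - min));
}

const wait = calculateWaitTime(10, 100);
console.log(wait >= 10 && wait < 100); // true
```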

(static) findLockPasses(nodes, data) → {Array}

Returns the set of nodes with successful responses according to data; each node in nodes corresponds to the response in data at the same index.

Parameters:
Name Type Description
nodes Array

Array of nodes.

data Array

Array of responses, with a 1-1 correspondence to nodes based on index.

Returns:

Array of nodes with successful responses.

Type
Array
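A minimal sketch of this filtering step. The shape of a "successful" response (here, an object with ok: true) is an assumption for illustration; the library's real response format may differ.

```javascript
// Sketch of findLockPasses: keep each node whose response at the same
// index indicates success. The { ok: true } shape is an assumption.
function findLockPasses(nodes, data) {
  return nodes.filter((node, i) => data[i] && data[i].ok === true);
}

const nodes = ["a", "b", "c"];
const data = [{ ok: true }, { ok: false }, { ok: true }];
console.log(findLockPasses(nodes, data)); // [ 'a', 'c' ]
```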

(static) parseJob(job, command) → {Object|Error}

Parse and validate job for correct structure and type adherence.

Parameters:
Name Type Description
job Object

Job to parse/validate.

command String

Command job corresponds to. This determines which set of static type definitions to validate job against.

Returns:

An object if successfully parsed/validated, otherwise an Error indicating the reason for failure.

Type
Object | Error

decodeJob(buf) → {Object|Error}

Parses a fully memoized message stream into an object containing a key/value pair. If we fail to parse the job buffer (invalid JSON, etc), we just return an error and this GenServer will skip emitting an event. Otherwise, triggers user-defined logic for the parsed event.

Parameters:
Name Type Description
buf Buffer

Memoized buffer that represents complete message stream.

Returns:

Object containing an event and data key/value pair, which are used to emit an event for user-defined logic.

Type
Object | Error
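The error-handling contract described above (return an Error rather than throw, so the GenServer can skip emitting an event) can be sketched as follows. The { event, data } result shape follows the description; the field names inside the parsed job are assumptions.

```javascript
// Sketch of decodeJob's contract: parse the buffer as JSON and return an
// Error (rather than throw) on failure. Field names are assumptions.
function decodeJob(buf) {
  let job;
  try {
    job = JSON.parse(buf.toString("utf8"));
  } catch (e) {
    return e; // invalid JSON: return the error, skip emitting an event
  }
  return { event: job.event, data: job.data };
}

const good = decodeJob(Buffer.from('{"event":"rlock","data":{"id":"x"}}'));
console.log(good.event); // 'rlock'
const bad = decodeJob(Buffer.from("not json"));
console.log(bad instanceof Error); // true
```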

decodeSingleton(data) → {Object|Error}

Parses a singleton message stream into an object containing a key/value pair. If we fail to parse the job object (invalid format for given event value, etc.), we just return an error and this GenServer will skip emitting an event. Otherwise, triggers user-defined logic for the parsed event.

Parameters:
Name Type Description
data Object

Message to be processed with event and data parameters.

Returns:

Object containing an event and data key/value pair, which are used to emit an event for user-defined logic.

Type
Object | Error

idle() → {Boolean}

Returns whether this instance is idle or not. Checks for both active requests as well as the underlying table's state for idleness.

Returns:

Whether this instance is idle or not.

Type
Boolean

load(cb)

Loads state from disk for the underlying table this instance uses for state persistence. If the disk option is set to false on construction, this function will immediately return and call cb. NOTE: this function should be called after start is called, as the underlying table needs to be started before any files can be read from disk.

Parameters:
Name Type Description
cb function

Function of the form function (err) {...}, where err will be passed if an error occurs loading state from disk.


rlock(id, holder, timeout, cb [, reqTimeout] [, retries])

Makes a read lock request against id, with holder identifying the requester of this lock (think of an actor). holder should be a randomly generated string, such as a UUID or the result of a crypto.randomBytes call, if different requests should represent different actors. The lock will last for timeout milliseconds before being automatically released on the other nodes this lock routes to. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make a request to the DLM server on these other nodes to execute the read lock command.
  • Based on the responses, if a read quorum is met and the responses return within reqTimeout milliseconds, then the request was successful and we return the set of nodes holding this lock.
  • Otherwise, we asynchronously unlock this rlock on the successful nodes and set a random timeout to retry the request. If we've retried retries number of times, then an error is returned and retry logic ceases.

The main difference between read locks and write locks is that write locks enforce exclusivity (they're equivalent to mutexes). Read locks, conversely, allow concurrency of other read lock requests.

Parameters:
Name Type Attributes Description
id String

ID of resource to lock.

holder String

ID of actor/requester for this lock.

timeout Number

How long the lock will last on each node holding this lock, in milliseconds.

cb function

Function of form function (err, nodes) {...}, where nodes is the array of nodes holding this lock. err is null if the request is successful, or an Error object otherwise.

reqTimeout Number <optional>

Amount of time, in milliseconds, to wait for a lock attempt before considering the request errored. Defaults to Infinity.

retries Number <optional>

Number of times to retry this request. Defaults to Infinity.


runlock(id, holder, cb [, reqTimeout])

Unlocks the read lock on id held by holder. If the request takes longer than reqTimeout, cb is called with a timeout error. Otherwise, cb is called with no arguments. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make a request to the DLM server on these other nodes to execute the read unlock command.
  • If an error is returned, call cb with that error.
  • Otherwise, call cb with no arguments.

Parameters:
Name Type Attributes Description
id String

ID of resource to unlock.

holder String

ID of actor/requester for this lock.

cb function

Function of form function (err) {...}, where err is null if the request is successful, or an Error object otherwise.

reqTimeout Number <optional>

Amount of time, in milliseconds, to wait for an unlock attempt before considering the request errored. Defaults to Infinity.


runlockAsync(id, holder)

Asynchronously unlocks the read lock on id held by holder. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make an asynchronous request to the DLM server on these other nodes to execute the read unlock command.

Parameters:
Name Type Description
id String

ID of resource to unlock.

holder String

ID of actor/requester for this lock.


start([name]) → {Clusterluck.DLMServer}

Starts a DLM handler: listens for events related to lock and unlock requests on the netkernel. Also starts the underlying table storing locks and lock holders.

Parameters:
Name Type Attributes Description
name String <optional>

Name to register this handler with, instead of the unique id attached to this instance. Any message received on the network kernel with id name will be routed to this instance for message stream parsing and possible event emission.

Returns:

This instance.

Type
Clusterluck.DLMServer

stop() → {Clusterluck.DLMServer}

Stops this handler. If the table is idle, this function will transition into clearing all locks and table state, and stopping the underlying table. Otherwise, this function will wait to complete until this instance is in an idle state.

Returns:

This instance.

Type
Clusterluck.DLMServer

wlock(id, holder, timeout, cb [, reqTimeout] [, retries])

Makes a write lock request against id, with holder identifying the requester of this lock (think of an actor). holder should be a randomly generated string, such as a UUID or the result of a crypto.randomBytes call, if different requests should represent different actors. The lock will last for timeout milliseconds before being automatically released on the other nodes this lock routes to. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make a request to the DLM server on these other nodes to execute the write lock command.
  • Based on the responses, if a write quorum is met and the responses return within reqTimeout milliseconds, then the request was successful and we return the set of nodes holding this lock.
  • Otherwise, we asynchronously unlock this wlock on the successful nodes and set a random timeout to retry the request. If we've retried retries number of times, then an error is returned and retry logic ceases.

The main difference between read locks and write locks is that write locks enforce exclusivity (they're equivalent to mutexes). Read locks, conversely, allow concurrency of other read lock requests.
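The exclusivity rule above can be sketched as a single-node lock table: a write lock excludes everything, while read locks may coexist with other read locks. This models only the per-node rule, not the distributed quorum logic, and the class and method shapes are assumptions for illustration.

```javascript
// Sketch of per-node rlock/wlock semantics: writers are exclusive
// (mutex-like), readers share. Not the library's implementation.
class LockTable {
  constructor() {
    this.locks = new Map(); // id -> { type, holders }
  }
  rlock(id, holder) {
    const entry = this.locks.get(id);
    if (!entry) {
      this.locks.set(id, { type: "read", holders: new Set([holder]) });
      return true;
    }
    if (entry.type === "read") {
      entry.holders.add(holder); // readers may share the lock
      return true;
    }
    return false; // a write lock is held: readers must wait
  }
  wlock(id, holder) {
    if (this.locks.has(id)) return false; // any existing lock blocks writers
    this.locks.set(id, { type: "write", holders: new Set([holder]) });
    return true;
  }
}

const table = new LockTable();
console.log(table.rlock("res", "h1")); // true
console.log(table.rlock("res", "h2")); // true  (readers share)
console.log(table.wlock("res", "h3")); // false (writers are exclusive)
```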

Parameters:
Name Type Attributes Description
id String

ID of resource to lock.

holder String

ID of actor/requester for this lock.

timeout Number

How long the lock will last on each node holding this lock, in milliseconds.

cb function

Function of form function (err, nodes) {...}, where nodes is the array of nodes holding this lock. err is null if the request is successful, or an Error object otherwise.

reqTimeout Number <optional>

Amount of time, in milliseconds, to wait for a lock attempt before considering the request errored. Defaults to Infinity.

retries Number <optional>

Number of times to retry this request. Defaults to Infinity.


wunlock(id, holder, cb [, reqTimeout])

Unlocks the write lock on id held by holder. If the request takes longer than reqTimeout, cb is called with a timeout error. Otherwise, cb is called with no arguments. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make a request to the DLM server on these other nodes to execute the write unlock command.
  • If an error is returned, call cb with that error.
  • Otherwise, call cb with no arguments.

Parameters:
Name Type Attributes Description
id String

ID of resource to unlock.

holder String

ID of actor/requester for this lock.

cb function

Function of form function (err) {...}, where err is null if the request is successful, or an Error object otherwise.

reqTimeout Number <optional>

Amount of time, in milliseconds, to wait for an unlock attempt before considering the request errored. Defaults to Infinity.


wunlockAsync(id, holder)

Asynchronously unlocks the write lock on id held by holder. The algorithm consists of:

  • Use the internal gossip server to find the set of nodes responsible for id on the hash ring.
  • Make an asynchronous request to the DLM server on these other nodes to execute the write unlock command.

Parameters:
Name Type Description
id String

ID of resource to unlock.

holder String

ID of actor/requester for this lock.
