Setting up replication

With Manticore, write transactions (such as INSERT, REPLACE, DELETE, TRUNCATE, UPDATE, COMMIT) can be replicated to other cluster nodes before the transaction is fully applied on the current node. Currently, replication is supported for percolate, real-time, and distributed tables on Linux and macOS.

Native Windows binaries for Manticore do not support replication. We recommend installing Manticore via WSL (Windows Subsystem for Linux).

Manticore's replication is powered by the Galera library and boasts several impressive features:

  • True multi-master: read and write to any node at any time.
  • Virtually synchronous replication: no slave lag and no data loss after a node crash.
  • Hot standby: no downtime during failover (since there is no failover).
  • Tightly coupled: all nodes hold the same state and no diverged data between nodes is allowed.
  • Automatic node provisioning: no need to manually backup the database and restore it on a new node.
  • Easy to use and deploy.
  • Detection and automatic eviction of unreliable nodes.
  • Certification-based replication.

To set up replication in Manticore Search:

  • The data_dir option must be set in the "searchd" section of the configuration file. Replication is not supported in plain mode.
  • A listen directive must be specified, containing an IP address accessible by other nodes, or a node_address with an accessible IP address.
  • Optionally, you can set unique values for server_id on each cluster node. If no value is set, the node will attempt to use the MAC address or a random number to generate the server_id.

If there is no replication listen directive set, Manticore will use the first two free ports in the range of 200 ports after the default protocol listening port for each created cluster. To set replication ports manually, the listen directive (of replication type) port range must be defined and the address/port range pairs must not intersect between different nodes on the same server. As a rule of thumb, the port range should specify at least two ports per cluster.
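
For instance, two Manticore instances running on the same server could reserve non-overlapping replication port ranges along these lines (a minimal sketch; the IP address, ports, paths, and server IDs are placeholders):

ini:
# node 1 configuration file
searchd {
  listen    = 192.168.1.101:9312
  listen    = 192.168.1.101:9360-9369:replication
  data_dir  = /var/lib/manticore-node1/
  server_id = 1
}

# node 2 configuration file (same server, different ports and data_dir)
searchd {
  listen    = 192.168.1.101:9313
  listen    = 192.168.1.101:9370-9379:replication
  data_dir  = /var/lib/manticore-node2/
  server_id = 2
}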

Replication cluster

A replication cluster is a group of nodes in which a write transaction is replicated. Replication is set up on a per-table basis, meaning that one table can only belong to one cluster. There is no limit on the number of tables that a cluster can have. All transactions such as INSERT, REPLACE, DELETE, TRUNCATE on any percolate or real-time table that belongs to a cluster are replicated to all the other nodes in that cluster. Distributed tables can also be part of the replication process. Replication is multi-master, so writes to any node or multiple nodes simultaneously will work just as well.

To create a cluster, use CREATE CLUSTER <cluster name>, and to join an existing cluster, use JOIN CLUSTER <cluster name> at 'host:port'. However, in some rare cases, you may want to fine-tune the behavior of CREATE/JOIN CLUSTER. The available options are:

name

This option specifies the name of the cluster. It should be unique among all the clusters in the system.

Note: The maximum allowable hostname length for the JOIN command is 253 characters. If you exceed this limit, searchd will generate an error.

path

The path option specifies the data directory for write-set cache replication and incoming tables from other nodes. This value should be unique among all the clusters in the system and should be specified as a relative path to the data_dir directory. By default, it is set to the value of data_dir.

nodes

The nodes option is a list of address:port pairs for all the nodes in the cluster, separated by commas. This list should be obtained using the node's API interface and can include the address of the current node as well. It is used to join the node to the cluster and to rejoin it after a restart.

options

The options option allows you to pass additional options directly to the Galera replication plugin, as described in the Galera Documentation Parameters.

Write statements

When working with a replication cluster, all write statements such as INSERT, REPLACE, DELETE, TRUNCATE, UPDATE that modify the content of a cluster's table must use the cluster_name:table_name expression instead of the table name. This ensures that the changes are propagated to all replicas in the cluster. If the correct expression is not used, an error will be triggered.

In the JSON interface, the cluster property must be set along with the table name for all write statements to a cluster's table. Failure to set the cluster property will result in an error.
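
For example, an insert via the HTTP JSON interface might look roughly like the following sketch (the table name and document values are made up for illustration):

JSON:
POST /insert
{
  "cluster": "posts",
  "index": "weekly_index",
  "id": 1,
  "doc": { "title": "iphone case" }
}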

The Auto ID for a table in a cluster should be valid as long as the server_id is correctly configured.

SQL:
INSERT INTO posts:weekly_index VALUES ( 'iphone case' )
TRUNCATE RTINDEX click_query:weekly_index
UPDATE INTO posts:rt_tags SET tags=(101, 302, 304) WHERE MATCH ('use') AND id IN (1,101,201)
DELETE FROM clicks:rt WHERE MATCH ('dumy') AND gid>206

Read statements

Read statements such as SELECT, CALL PQ, DESCRIBE can either use regular table names that are not prepended with a cluster name, or they can use the cluster_name:table_name format. If the latter is used, the cluster_name component is ignored.

When using the HTTP endpoint json/search, the cluster property can be specified if desired, but it can also be omitted.
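
For example, a search against a cluster table might look roughly like this (a sketch; the cluster property below could be dropped with the same result):

JSON:
POST /search
{
  "cluster": "posts",
  "index": "weekly_index",
  "query": { "match": { "*": "iphone case" } }
}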

SQL:
SELECT * FROM weekly_index
CALL PQ('posts:weekly_index', 'document is here')

Cluster parameters

Replication plugin options can be adjusted using the SET statement.

A list of available options can be found in the Galera Documentation Parameters.

SQL:
SET CLUSTER click_query GLOBAL 'pc.bootstrap' = 1

Cluster with diverged nodes

It's possible for replicated nodes to diverge from one another, leading to a state where all nodes are labeled as non-primary. This can occur as a result of a network split between nodes, a cluster crash, or if the replication plugin experiences an exception when determining the primary component. In such a scenario, it's necessary to select a node and promote it to the role of primary component.

To identify the node that needs to be promoted, you should compare the last_committed cluster status variable value on all nodes. If all the servers are currently running, there's no need to restart the cluster. Instead, you can simply promote the node with the highest last_committed value to the primary component using the SET statement (as demonstrated in the example).

The other nodes will then reconnect to the primary component and resynchronize their data based on this node.
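
For example, to compare last_committed for a cluster named posts, you could run something along these lines on every node and pick the node with the highest value (a sketch based on the cluster status variables):

SQL:
SHOW STATUS LIKE 'cluster_posts_last_committed'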

SQL:
SET CLUSTER posts GLOBAL 'pc.bootstrap' = 1

Replication and cluster

To use replication, you need to define one listen port for the SphinxAPI protocol and one listen directive for the replication address and port range in the configuration file. Also, specify the data_dir folder to receive incoming tables.

ini:
searchd {
  listen   = 9312
  listen   = 192.168.1.101:9360-9370:replication
  data_dir = /var/lib/manticore/
  ...
 }

To replicate tables, you must create a cluster on the server that has the local tables to be replicated.

SQL:
CREATE CLUSTER posts

Add these local tables to the cluster:

SQL:
ALTER CLUSTER posts ADD pq_title
ALTER CLUSTER posts ADD pq_clicks

All other nodes that wish to receive a replica of the cluster's tables should join the cluster as follows:

SQL:
JOIN CLUSTER posts AT '192.168.1.101:9312'

When running queries, prepend the table name with the cluster name (posts:) or set the cluster property of the HTTP request object.

SQL:
INSERT INTO posts:pq_title VALUES ( 3, 'test me' )

All queries that modify tables in the cluster are now replicated to all nodes in the cluster.

Creating a replication cluster

To create a replication cluster, you must set its name at a minimum.

If you are creating a single cluster or the first cluster, you may omit the path option. In this case, the data_dir option will be used as the cluster path. However, for all subsequent clusters, you must specify the path and the path must be available. The nodes option may also be set to list all nodes in the cluster.

SQL:
CREATE CLUSTER posts
CREATE CLUSTER click_query '/var/data/click_query/' as path
CREATE CLUSTER click_query '/var/data/click_query/' as path, 'clicks_mirror1:9312,clicks_mirror2:9312,clicks_mirror3:9312' as nodes

If the nodes option is not specified when creating a cluster, the first node that joins the cluster will be saved as the nodes option.

Joining a replication cluster

To join an existing cluster, you must specify at least:

  • The name of the cluster
  • The host:port of another node in the cluster you are joining

SQL:
JOIN CLUSTER posts AT '10.12.1.35:9312'
Response:
{u'error': u'', u'total': 0, u'warning': u''}

In most cases, the above is sufficient when there is a single replication cluster. However, if you are creating multiple replication clusters, you must also set the path and ensure that the directory is available.

SQL:
JOIN CLUSTER c2 at '127.0.0.1:10201' 'c2' as path

A node joins a cluster by obtaining data from a specified node and, if successful, updates the node lists across all other cluster nodes in the same way as if it was done manually through ALTER CLUSTER ... UPDATE nodes. This list is used to re-join nodes to the cluster upon restart.

There are two lists of nodes:

1. cluster_<name>_nodes_set: used to re-join nodes to the cluster upon restart. It is updated across all nodes in the same way as ALTER CLUSTER ... UPDATE nodes does. The JOIN CLUSTER command performs this update automatically. The Cluster status displays this list as cluster_<name>_nodes_set.
2. cluster_<name>_nodes_view: this list contains all active nodes used for replication and does not require manual management. ALTER CLUSTER ... UPDATE nodes actually copies this list of nodes to the list of nodes used to re-join upon restart. The Cluster status displays this list as cluster_<name>_nodes_view.
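
Both lists can be inspected via the cluster status, and the re-join list can be refreshed manually; for a cluster named posts, a sketch might look like this:

SQL:
SHOW STATUS LIKE 'cluster_posts_nodes_%'
ALTER CLUSTER posts UPDATE nodes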

When nodes are located in different network segments or data centers, the nodes option may be set explicitly. This minimizes traffic between nodes and utilizes gateway nodes for intercommunication between data centers. The following code joins an existing cluster using the nodes option.

Note: The cluster_<name>_nodes_set list is not updated automatically when this syntax is used. To update it, use ALTER CLUSTER ... UPDATE nodes.

SQL:
JOIN CLUSTER click_query 'clicks_mirror1:9312;clicks_mirror2:9312;clicks_mirror3:9312' as nodes
Response:
{u'error': u'', u'total': 0, u'warning': u''}

The JOIN CLUSTER command works synchronously and completes as soon as the node receives all data from the other nodes in the cluster and is in sync with them.