A write transaction (any result of `INSERT`, `REPLACE`, `DELETE`, `TRUNCATE`, `UPDATE`, `COMMIT`) can be replicated to other cluster nodes before the transaction is fully applied on the current node. Currently replication is supported for `percolate` and `rt` tables on Linux and macOS. Manticore Search packages for Windows do not provide replication support.
Manticore's replication is based on the Galera library and features the following:
- true multi-master - read and write to any node at any time
- virtually synchronous replication - no slave lag, no data is lost after a node crash
- hot standby - no downtime during failover (since there is no failover)
- tightly coupled - all the nodes hold the same state. No diverged data between nodes allowed
- automatic node provisioning - no need to manually back up the database and restore it on a new node
- easy to use and deploy
- detection and automatic eviction of unreliable nodes
- certification-based replication
To use replication in Manticore Search:
- the data_dir option should be set in the "searchd" section of the configuration file. Replication is not supported in plain mode
- there should be either:
- a listen directive specified (without specifying a protocol) containing an IP address accessible by other nodes
- or node_address with an accessible IP address
- optionally, you can set a unique value for server_id on each cluster node. If no value is set, the node will try to use the MAC address (or a random number if that fails) to generate the server_id.
If no replication listen directive is set, Manticore will use the first two free ports in a range of 200 ports after the default protocol listening port for each created cluster. To set replication ports manually, the listen directive (of replication type) port range should be defined, and the address/port range pairs must not intersect between different nodes on the same server. As a rule of thumb, the port range should specify no less than two ports per cluster.
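Putting these requirements together, a minimal `searchd` section might look like this (the IP address, port range, and server_id value are illustrative):

```ini
searchd {
    # default SphinxAPI listener reachable by other nodes
    listen = 192.168.1.101:9312
    # dedicated replication listener with an explicit port range;
    # at least two ports per cluster, and the range must not overlap
    # with other nodes on the same server
    listen = 192.168.1.101:9360-9370:replication
    # replication requires RT mode, i.e. data_dir must be set
    data_dir = /var/lib/manticore
    # optional: a unique id per cluster node
    server_id = 1
}
```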
A replication cluster is a set of nodes among which a write transaction gets replicated. Replication is configured on a per-table basis, meaning that one table can be assigned to only one cluster. There is no restriction on how many tables a cluster can have. All transactions such as `INSERT`, `REPLACE`, `DELETE`, `TRUNCATE` in any percolate or real-time table belonging to a cluster are replicated to all the other nodes in the cluster. Replication is multi-master, so writes to any particular node or to multiple nodes simultaneously work equally well.
In most cases you create a cluster with `CREATE CLUSTER <cluster name>` and join a cluster with `JOIN CLUSTER <cluster name> at 'host:port'`, but in rare cases you may want to fine-tune the behaviour of `CREATE`/`JOIN CLUSTER`. The options are:
- `name`: specifies the cluster name. Should be unique.
- `path`: data directory for write-set cache replication and incoming tables from other nodes. Should be unique among the other clusters on the node. Default is data_dir. Should be specified as a path to an existing directory relative to data_dir.
- `nodes`: a comma-separated list of address:port pairs for all the nodes in the cluster. The node's API interface should be used for this option. It can contain the current node's address too. This list is used to join a node to the cluster and to rejoin it after restart.
- Other options are passed directly to the Galera replication plugin, as described in the Galera Documentation Parameters.
For the SQL interface, all write statements such as `INSERT`, `REPLACE`, `DELETE`, `TRUNCATE`, `UPDATE` that change the content of a cluster's table should use the `cluster_name:index_name` expression in place of the table name to make sure the change is propagated to all replicas in the cluster. An error will be triggered otherwise.
All write statements for the HTTP interface to a cluster's table should set the `cluster` property along with the table name. An error will be triggered otherwise.
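As a sketch, an HTTP insert into a cluster's table sets the `cluster` property next to the table name (the cluster and table names here are hypothetical, and the payload follows the JSON insert API):

```json
POST /insert
{
  "cluster": "posts",
  "index": "weekly_index",
  "id": 1,
  "doc": { "title": "iphone case" }
}
```

Omitting `cluster` for a table that belongs to a cluster triggers an error.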
An auto ID generated for a table in a cluster will be valid as long as server_id is not misconfigured.
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
INSERT INTO posts:weekly_index VALUES ( 'iphone case' )
TRUNCATE RTINDEX click_query:weekly_index
UPDATE posts:rt_tags SET tags=(101, 302, 304) WHERE MATCH ('use') AND id IN (1,101,201)
DELETE FROM clicks:rt WHERE MATCH ('dumy') AND gid>206
Read statements such as `SELECT`, `CALL PQ`, `DESCRIBE` can use either regular table names not prepended with a cluster name, or the `cluster_name:index_name` format. In the latter case the `cluster_name` is simply ignored.
In the HTTP endpoint `json/search` you can specify the `cluster` property if you like, or you can omit it.
- SQL
- JSON
SELECT * FROM weekly_index
CALL PQ('posts:weekly_index', 'document is here')
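For example, an equivalent `json/search` request can simply omit the `cluster` property (the table name and query here are illustrative):

```json
POST /json/search
{
  "index": "weekly_index",
  "query": { "match": { "*": "document is here" } }
}
```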
Replication plugin options can be changed using the `SET` statement (see the example). See Galera Documentation Parameters for the list of available options.
- SQL
- JSON
SET CLUSTER click_query GLOBAL 'pc.bootstrap' = 1
Sometimes replicated nodes can diverge from each other. The state of all the nodes might turn into `non-primary` due to a network split between nodes, a cluster crash, or if the replication plugin hits an exception when determining the `primary component`. Then it's necessary to select a node and promote it to the `primary component`.
To determine which node should be the reference, compare the `last_committed` cluster status variable value on all nodes. If all the servers are already running, there's no need to start the cluster again. You just need to promote the most advanced node to the `primary component` with the `SET` statement (see the example). All the other nodes will reconnect to that node and resync their data based on it.
- SQL
- JSON
SET CLUSTER posts GLOBAL 'pc.bootstrap' = 1
To use replication, define one listen port for the SphinxAPI protocol and one listen for the replication address and port range in the config. Also define a data_dir folder for incoming tables.
- ini
searchd {
listen = 9312
listen = 192.168.1.101:9360-9370:replication
data_dir = /var/lib/manticore/
...
}
Create a cluster on the server that has the local tables that need to be replicated:
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
CREATE CLUSTER posts
Add these local tables to the cluster:
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
ALTER CLUSTER posts ADD pq_title
ALTER CLUSTER posts ADD pq_clicks
All the other nodes that want a replica of the cluster's tables should join the cluster as follows:
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
JOIN CLUSTER posts AT '192.168.1.101:9312'
When running queries via SQL, prepend the table name with the cluster name (`posts:`), or use the `cluster` property of the HTTP request object.
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
INSERT INTO posts:pq_title VALUES ( 3, 'test me' )
Now all such queries that modify tables in the cluster are replicated to all nodes in the cluster.
To create a replication cluster you should set at least its name.
If you are creating a single cluster, or the first of several, the path option may be omitted; in this case the data_dir option will be used as the cluster path. For all subsequent clusters you need to specify path, and that path should be available. The nodes option may also be set to enumerate all the nodes in the cluster.
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
CREATE CLUSTER posts
CREATE CLUSTER click_query '/var/data/click_query/' as path
CREATE CLUSTER click_query '/var/data/click_query/' as path, 'clicks_mirror1:9312,clicks_mirror2:9312,clicks_mirror3:9312' as nodes
If a cluster is created without the nodes option, the first node that joins the cluster will be saved as nodes.
To join an existing cluster you should specify at least:
- the cluster name
- and the host:port of another working node of the cluster you are joining
- SQL
- JSON
- PHP
- Python
- Javascript
- Java
JOIN CLUSTER posts AT '10.12.1.35:9312'
{u'error': u'', u'total': 0, u'warning': u''}
In case of a single replication cluster, i.e. in most cases, the above is enough. If you are creating multiple replication clusters, the path needs to be set as well, and the directory should be available.
- SQL
JOIN CLUSTER c2 at '127.0.0.1:10201' 'c2' as path
A node joins a cluster by getting data from another specified node and, if successful, updates the node lists on all the other cluster nodes, similar to how it's done manually via ALTER CLUSTER ... UPDATE nodes. This list is used to rejoin nodes to the cluster on restart.
There are two lists of nodes:
- `cluster_<name>_nodes_set`: used to rejoin nodes to the cluster on restart. It is updated across all nodes the same way as ALTER CLUSTER ... UPDATE nodes does. JOIN CLUSTER does the same update automatically. Cluster status shows this list as `cluster_<name>_nodes_set`.
- `cluster_<name>_nodes_view`: the list of all active nodes used for replication. This list doesn't require manual management. ALTER CLUSTER ... UPDATE nodes actually copies this list of nodes to the list of nodes used to rejoin on restart. Cluster status shows this list as `cluster_<name>_nodes_view`.
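For instance, to copy the current active view list into the rejoin list on a given node (the cluster name here is illustrative):

```sql
ALTER CLUSTER posts UPDATE nodes
```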
When nodes are located in different network segments or in different datacenters, the nodes option may be set explicitly. This minimizes traffic between nodes and allows gateway nodes to be used for inter-datacenter communication. The following command joins an existing cluster using the nodes option.
Note: when this syntax is used, the `cluster_<name>_nodes_set` list is not updated automatically. Use ALTER CLUSTER ... UPDATE nodes to update it.
- SQL
- JSON
- PHP
- Python
- javascript
- Java
JOIN CLUSTER click_query 'clicks_mirror1:9312;clicks_mirror2:9312;clicks_mirror3:9312' as nodes
{u'error': u'', u'total': 0, u'warning': u''}
`JOIN CLUSTER` works synchronously and completes as soon as the node receives all data from the other nodes in the cluster and is in sync with them.