The `ALTER CLUSTER <cluster_name> UPDATE nodes` statement updates the node lists on each node within the specified cluster to include all active nodes in the cluster. For more information on node lists, see Joining a cluster.
```sql
ALTER CLUSTER posts UPDATE nodes
```

```
{u'error': u'', u'total': 0, u'warning': u''}
```
For instance, when the cluster was initially established, the list of nodes used to rejoin the cluster was `10.10.0.1:9312,10.10.1.1:9312`. Since then, other nodes joined the cluster and the active nodes are now `10.10.0.1:9312,10.10.1.1:9312,10.15.0.1:9312,10.15.0.3:9312`. However, the list of nodes used to rejoin the cluster has not been updated.

To rectify this, you can run the `ALTER CLUSTER ... UPDATE nodes` statement to copy the list of active nodes to the list of nodes used to rejoin the cluster. After this, the list of nodes used to rejoin the cluster will include all the active nodes in the cluster.
Both lists of nodes can be viewed using the Cluster status statement (`cluster_post_nodes_set` and `cluster_post_nodes_view`).
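For example, with a cluster named `post` as in the status output shown later in this section, the two lists can be compared side by side. A minimal sketch, assuming the MySQL-style `LIKE` filter for `SHOW STATUS`:

```sql
-- Compare the saved rejoin list (nodes_set) with the live view (nodes_view).
-- 'post' is the cluster name used in the examples of this section.
SHOW STATUS LIKE 'cluster_post_nodes_%';
```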
To remove a node from the replication cluster, follow these steps (a shell sketch follows below):
- Stop the node
- Remove the information about the cluster from `<data_dir>/manticore.json` (usually `/var/lib/manticore/manticore.json`) on the node that has been stopped
- Run `ALTER CLUSTER cluster_name UPDATE nodes` on any other node
After these steps, the other nodes will forget about the detached node and the detached node will forget about the cluster. This action will not impact the tables in the cluster or on the detached node.
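As a minimal shell sketch of these steps, assuming a systemd-managed node, the default `/var/lib/manticore` data directory, a cluster named `posts`, and `10.10.0.1` as an illustrative address of one of the remaining nodes:

```bash
# On the node being detached: stop the daemon and remove the cluster
# state from manticore.json (keep a backup first).
systemctl stop manticore
cp /var/lib/manticore/manticore.json /var/lib/manticore/manticore.json.bak
# Edit the file and delete the cluster entry, e.g.:
#   vi /var/lib/manticore/manticore.json

# On any remaining node: refresh the node lists of the cluster.
mysql -h 10.10.0.1 -P 9306 -e "ALTER CLUSTER posts UPDATE nodes"
```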
You can view the cluster status information by checking the node status. This can be done using the Node status command, which displays various information about the node, including the cluster status variables.
The output format for the cluster status variables is as follows: `cluster_name_variable_name variable_value`. Most of the variables are described in the Galera Documentation Status Variables. In addition to these variables, Manticore Search also displays:
- `cluster_name` - the name of the cluster, as defined in the replication setup
- `node_state` - the current state of the node: `closed`, `destroyed`, `joining`, `donor`, `synced`
- `indexes_count` - the number of tables managed by the cluster
- `indexes` - a list of table names managed by the cluster
- `nodes_set` - the list of nodes in the cluster defined using the `CREATE`, `JOIN` or `ALTER UPDATE` commands
- `nodes_view` - the actual list of nodes in the cluster that the current node can see
```sql
SHOW STATUS
```

```
+----------------------------+-------------------------------------------------------------------------------------+
| Counter                    | Value                                                                               |
+----------------------------+-------------------------------------------------------------------------------------+
| cluster_name               | post                                                                                |
| cluster_post_state_uuid    | fba97c45-36df-11e9-a84e-eb09d14b8ea7                                                |
| cluster_post_conf_id       | 1                                                                                   |
| cluster_post_status        | primary                                                                             |
| cluster_post_size          | 5                                                                                   |
| cluster_post_local_index   | 0                                                                                   |
| cluster_post_node_state    | synced                                                                              |
| cluster_post_indexes_count | 2                                                                                   |
| cluster_post_indexes       | pq1,pq_posts                                                                        |
| cluster_post_nodes_set     | 10.10.0.1:9312                                                                      |
| cluster_post_nodes_view    | 10.10.0.1:9312,10.10.0.1:9320:replication,10.10.1.1:9312,10.10.1.1:9320:replication |
+----------------------------+-------------------------------------------------------------------------------------+
```
In a multi-master replication cluster, a reference point must be established before other nodes can join and form the cluster. This is called cluster bootstrapping and involves starting a single node as the primary component. Restarting a single node or reconnecting after a shutdown can be done normally.
In case of a full cluster shutdown, the server that was stopped last should be started first with the `--new-cluster` command-line option or by running `manticore_new_cluster` through systemd. To ensure that the server is capable of being the reference point, the `grastate.dat` file located at the cluster path should be updated with a value of 1 for the `safe_to_bootstrap` option. Both conditions, `--new-cluster` and `safe_to_bootstrap=1`, must be met. If any other node is started without these options set, an error will occur. The `--new-cluster-force` command-line option can be used to override this protection and start the cluster from another server forcibly. Alternatively, when using systemd, you can run `manticore_new_cluster --force`.
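A minimal sketch of a full-cluster restart under systemd; the cluster path `/var/lib/manticore` is an assumption and should match your configuration:

```bash
# On the node that was stopped last: verify it may serve as the
# reference point.
grep safe_to_bootstrap /var/lib/manticore/grastate.dat
# Expected: safe_to_bootstrap: 1
# If it is 0 and you are certain this node was stopped last,
# set it to 1 in grastate.dat before proceeding.

# Start this node as the primary component.
manticore_new_cluster

# Then start the remaining nodes normally.
systemctl start manticore
```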
In the event of a hard crash or an unclean shutdown of all servers in the cluster, the most advanced node, the one with the largest `seqno` in the `grastate.dat` file located at the cluster path, must be identified and started with the `--new-cluster-force` command-line option.
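A hedged sketch of that recovery, again assuming the cluster path `/var/lib/manticore` and systemd-managed nodes:

```bash
# Run on every node (e.g. over ssh) and compare the values;
# the node with the largest seqno is the most advanced one.
grep seqno /var/lib/manticore/grastate.dat
# Example output: seqno: 1234

# On that node only, force-start a new cluster:
searchd --new-cluster-force
# or, under systemd:
manticore_new_cluster --force
```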