FLUSH RTINDEX rtindex
FLUSH RTINDEX forcibly flushes RT index RAM chunk contents to disk.
Backing up an RT index is as simple as copying over its data files, followed by the binary log. However, recovering from that backup means that all the transactions in the log since the last successful RAM chunk write would need to be replayed. Those writes normally happen either on a clean shutdown, or periodically, with a (big enough!) interval between writes specified by the rt_flush_period directive. So a backup made at an arbitrary point in time might end up with way too much binary log data to replay.
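The flush interval is set in the searchd section of the config. A minimal sketch; the value here is purely illustrative (the unit is seconds):

searchd {
    # write RT RAM chunks to disk every hour; the longer the interval,
    # the more binary log data crash recovery would have to replay
    rt_flush_period = 3600
}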
FLUSH RTINDEX forcibly writes the RAM chunk contents to disk, and also causes the subsequent cleanup of (now redundant) binary log files. Thus, recovering from a backup made just after
FLUSH RTINDEX should be almost instant.
FLUSH RTINDEX rt;
Query OK, 0 rows affected (0.05 sec)
Over time, RT indexes can become fragmented into many disk chunks and/or tainted with deleted but unpurged data, which impacts search performance. When that happens, they can be optimized: the optimization pass merges pairs of disk chunks together, purging documents previously suppressed by DELETEs.
Starting from Manticore 4, this happens automatically by default, but you can also use the commands below to force index compaction.
OPTIMIZE INDEX index_name [OPTION opt_name = opt_value [,...]]
OPTIMIZE statement enqueues an RT index for optimization in a background thread.
OPTIMIZE INDEX rt;
OPTIMIZE merges the RT index's disk chunks down to a number that equals
# of CPU cores * 2 by default. The number of resulting disk chunks can be controlled with the cutoff option:
OPTIMIZE INDEX rt OPTION cutoff=4;
If OPTION sync=1 is used (0 by default), the command waits until the optimization process is done (if the connection is interrupted, the optimization will continue to run on the server).
OPTIMIZE INDEX rt OPTION sync=1;
Optimize can be a lengthy and IO-intensive process, so to limit the impact, all the actual merge work is executed serially in a special background thread, and the
OPTIMIZE statement simply adds a job to its queue. Currently, there is no way to check the index or queue status (that might be added in the future to the
SHOW INDEX STATUS and
SHOW STATUS statements respectively). The optimization thread can be IO-throttled: you can control the maximum number of IOs per second and the maximum IO size with the rt_merge_iops and rt_merge_maxiosize directives respectively.
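These throttling directives also live in the searchd section of the config. A sketch with purely illustrative values:

searchd {
    # cap the merge thread at 40 I/O operations per second...
    rt_merge_iops = 40
    # ...and limit each individual I/O call to 1M
    rt_merge_maxiosize = 1M
}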
The RT index being optimized stays online and available for both searching and updates at (almost) all times during the optimization. It gets locked for a very short time when a pair of disk chunks is merged successfully, to rename the old and the new files, and update the index header.
As long as auto_optimize is not disabled, indexes are optimized automatically.
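Disabling the automatic compaction is done in the searchd section of the config; a minimal sketch:

searchd {
    # disable automatic chunk compaction (it is on by default since Manticore 4)
    auto_optimize = 0
}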
If you are experiencing unexpected SSTs or want the indexes across all nodes of the cluster to be binary identical, you need to:
- Disable auto_optimize.
- Optimize indexes manually:
On one of the nodes, drop the index from the cluster:
ALTER CLUSTER mycluster DROP myindex;
Optimize the index:
OPTIMIZE INDEX myindex;
Add the index back to the cluster:
ALTER CLUSTER mycluster ADD myindex;
When the index is added back, the new files created by the optimize process will be replicated to the other nodes in the cluster. Any changes made locally to the index on other nodes will be lost.
Index data modifications (inserts, replaces, deletes, updates) should:
- either be postponed
- or directed to the node where the optimize process is running.
Note that while the index is out of the cluster, insert/replace/delete/update commands should refer to it without the cluster name prefix (for SQL statements, or the cluster property in case of an HTTP JSON request); otherwise they will fail. As soon as the index is added back to the cluster, writes can be resumed. At that point, write operations on the index must include the cluster name prefix again, or they will fail. Search operations are available as usual on any of the nodes throughout the process.
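For example, using the same cluster and index names as above (and assuming the index has a title field, which is an assumption for illustration), the same write looks different depending on whether the index is in the cluster:

-- while myindex is out of the cluster: no cluster prefix
INSERT INTO myindex (id, title) VALUES (1, 'hello');
-- after ALTER CLUSTER mycluster ADD myindex: the prefix is required again
INSERT INTO mycluster:myindex (id, title) VALUES (1, 'hello');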
When flushing and compacting a real-time table, Manticore provides isolation, so that a changed state does not affect queries that were already running when the operation started.
For instance, while compacting a table, we have a pair of disk chunks being merged and a new chunk produced by merging them. At some point, we create a new version of the index in which the new chunk replaces the original pair. This is done seamlessly: a long-running query that uses the original chunks will continue seeing the old version of the index, while a new query will see the new version with the resulting merged chunk.
The same is true for flushing a RAM chunk: we merge all suitable RAM segments into a new disk chunk, add the new chunk to the set of disk chunks, and discard the RAM segments that participated in the merge. During this operation, Manticore also provides isolation for queries that started before the operation began.
Moreover, these operations are also transparent for replaces and updates. If you update an attribute in a document that belongs to a disk chunk being merged with another one, the update will be applied both to that chunk and to the resulting chunk after the merge. If you delete a document during a merge, it will be deleted in the original chunk, and the resulting merged chunk will either have the document marked as deleted or will not contain it at all (if the deletion happened at an early stage of the merge).