Securing and compacting a table

Backup and Restore

Backing up your tables on a regular basis is essential for recovery in the event of system crashes, hardware failure, or data corruption/loss. It's also highly recommended to make backups before upgrading to a new Manticore Search version or running ALTER TABLE.

Database systems can be backed up in two distinct ways: logical and physical backups. Each method has its pros and cons, which may vary based on the specific database environment and needs. Here, we'll delve into the distinction between these two types of backups.

Logical Backups

Logical backups involve exporting the database schema and data as SQL statements or in database-specific data formats. This form of backup is typically human-readable and can be used to restore the database on different systems or database engines.

Pros and cons of logical backups:

  • Portability: Logical backups are generally more portable than physical backups, as they can be used to restore the database on different hardware or operating systems.
  • Flexibility: Logical backups allow you to selectively restore specific tables, indexes, or other database objects.
  • Compatibility: Logical backups can be used to migrate data between different database management systems or versions, provided the target system supports the exported format or SQL statements.
  • Slower Backup and Restore: Logical backups can be slower than physical backups, as they require the database engine to convert the data into SQL statements or another export format.
  • Increased System Load: Creating logical backups can cause higher system load, as the process requires more CPU and memory resources to process and export the data.

Manticore Search supports mysqldump for logical backups.

Physical Backups

Physical backups involve copying the raw data files and system files that comprise the database. This type of backup essentially creates a snapshot of the database's physical state at a given point in time.

Pros and cons of physical backups:

  • Speed: Physical backups are usually faster than logical backups, as they involve copying raw data files directly from disk.
  • Consistency: Physical backups ensure a consistent backup of the entire database, as all related files are copied together.
  • Lower System Load: Creating physical backups generally places less load on the system compared to logical backups, as the process does not involve additional data processing.
  • Portability: Physical backups are typically less portable than logical backups, as they may be dependent on the specific hardware, operating system, or database engine configuration.
  • Flexibility: Physical backups do not allow for the selective restoration of specific database objects, as the backup contains the entire database's raw files.
  • Compatibility: Physical backups cannot be used to migrate data between different database management systems or versions, as the raw data files may not be compatible across different platforms or software.

Manticore Search provides the manticore-backup command-line tool for physical backups.

In summary, logical backups provide more flexibility, portability, and compatibility but can be slower and more resource-intensive, while physical backups are faster, more consistent, and less resource-intensive but may be limited in terms of portability and flexibility. The choice between these two backup methods will depend on your specific database environment, hardware, and requirements.

Using the manticore-backup command-line tool

The manticore-backup tool, included in the official Manticore Search packages, automates the process of backing up tables for an instance running in RT mode.

Installation

If you followed the official installation instructions, everything is already installed and there's nothing to worry about. Otherwise, manticore-backup requires PHP 8.1.10 with specific modules, or manticore-executor, which is part of the manticore-extra package; make sure one of these is available.

Note that manticore-backup is not available for Windows yet.

How to use

First, make sure you're running manticore-backup on the same server where the Manticore instance you are about to back up is running.

Second, we recommend running the tool as root so it can transfer ownership of the files being backed up. Otherwise, the backup will still be made, but without ownership transfer. In either case, make sure that manticore-backup has access to the data dir of the Manticore instance.

The only required argument for manticore-backup is --backup-dir, which specifies the destination for the backup. If you don't provide any additional arguments, manticore-backup will:

  • locate a Manticore instance running with the default configuration
  • create a subdirectory in the --backup-dir directory with a timestamped name
  • backup all tables found in the instance
Example:

manticore-backup --config=path/to/manticore.conf --backup-dir=backupdir

Response:
Copyright (c) 2023-2024, Manticore Software LTD (https://manticoresearch.com)

Manticore config file: /etc/manticoresearch/manticore.conf
Tables to backup: all tables
Target dir: /mnt/backup/

Manticore config
  endpoint =  127.0.0.1:9308

Manticore versions:
  manticore: 5.0.2
  columnar: 1.15.4
  secondary: 1.15.4
2022-10-04 17:18:39 [Info] Starting the backup...
2022-10-04 17:18:39 [Info] Backing up config files...
2022-10-04 17:18:39 [Info]   config files - OK
2022-10-04 17:18:39 [Info] Backing up tables...
2022-10-04 17:18:39 [Info]   pq (percolate) [425B]...
2022-10-04 17:18:39 [Info]    OK
2022-10-04 17:18:39 [Info]   products (rt) [512B]...
2022-10-04 17:18:39 [Info]    OK
2022-10-04 17:18:39 [Info] Running sync
2022-10-04 17:18:42 [Info]  OK
2022-10-04 17:18:42 [Info] You can find backup here: /mnt/backup/backup-20221004171839
2022-10-04 17:18:42 [Info] Elapsed time: 2.76s
2022-10-04 17:18:42 [Info] Done

To back up specific tables only, use the --tables flag followed by a comma-separated list of tables, for example --tables=tbl1,tbl2. This will back up only the specified tables and ignore the rest.

Example:

manticore-backup --backup-dir=/mnt/backup/ --tables=products

Response:
Copyright (c) 2023-2024, Manticore Software LTD (https://manticoresearch.com)

Manticore config file: /etc/manticoresearch/manticore.conf
Tables to backup: products
Target dir: /mnt/backup/

Manticore config
  endpoint =  127.0.0.1:9308

Manticore versions:
  manticore: 5.0.3
  columnar: 1.16.1
  secondary: 0.0.0
2022-10-04 17:25:02 [Info] Starting the backup...
2022-10-04 17:25:02 [Info] Backing up config files...
2022-10-04 17:25:02 [Info]   config files - OK
2022-10-04 17:25:02 [Info] Backing up tables...
2022-10-04 17:25:02 [Info]   products (rt) [512B]...
2022-10-04 17:25:02 [Info]    OK
2022-10-04 17:25:02 [Info] Running sync
2022-10-04 17:25:06 [Info]  OK
2022-10-04 17:25:06 [Info] You can find backup here: /mnt/backup/backup-20221004172502
2022-10-04 17:25:06 [Info] Elapsed time: 4.82s
2022-10-04 17:25:06 [Info] Done

Arguments

  • --backup-dir=path: path to the directory where the backup will be stored. The directory must already exist. This argument is required and has no default value. On each run, manticore-backup creates a subdirectory in the provided directory with a timestamp in the name (backup-[datetime]) and copies all required tables to it. So --backup-dir is a container for all your backups, and it's safe to run the script multiple times.
  • --restore[=backup]: restore from --backup-dir. Plain --restore lists the available backups; --restore=backup restores from <--backup-dir>/backup.
  • --force: skip the version check on restore and restore the backup anyway.
  • --disable-telemetry: pass this flag to disable sending anonymized metrics to Manticore. You can also use the environment variable TELEMETRY=0.
  • --config=/path/to/manticore.conf: path to the Manticore configuration. Optional. If not provided, the default configuration for your operating system is used. This determines the host and port for communication with the Manticore daemon. The manticore-backup tool supports dynamic configuration files, and you can specify --config multiple times if your configuration is spread across several files.
  • --tables=tbl1,tbl2,...: comma-separated list of tables to back up. To back up all tables, omit this argument. All the listed tables must exist in the Manticore instance you are backing up from, or the backup will fail.
  • --compress: compress the backed-up files. Not enabled by default.
  • --unlock: in rare cases when something goes wrong, tables can be left in a locked state; use this argument to unlock them.
  • --version: show the current version.
  • --help: show the help.

BACKUP SQL command reference

You can also back up your data through SQL by running the simple command BACKUP TO /path/to/backup.

NOTE: BACKUP is not supported in Windows. Consider using mysqldump instead.

NOTE: BACKUP requires Manticore Buddy. If it doesn't work, make sure Buddy is installed.

General syntax of BACKUP

BACKUP
  [{TABLE | TABLES} a[, b]]
  [{OPTION | OPTIONS}
    async = {on | off | 1 | 0 | true | false | yes | no}
    [, compress = {on | off | 1 | 0 | true | false | yes | no}]
  ]
  TO path_to_backup

For instance, to back up tables a and b to the /backup directory, run the following command:

BACKUP TABLES a, b TO /backup

There are options available to control and adjust the backup process, such as:

  • async: makes the backup non-blocking, allowing you to receive a response with the query ID immediately and run other queries while the backup is ongoing. The default value is 0.
  • compress: enables file compression using zstd. The default value is 0.

For example, to run a backup of all tables in async mode with compression enabled to the /tmp directory:

BACKUP OPTION async = yes, compress = yes TO /tmp

Important considerations

  1. The path should not contain special symbols or spaces, as they are not supported.
  2. Ensure that Manticore Buddy is launched (it is by default).

How backup maintains consistency of tables

To ensure table consistency during backup, Manticore Search's backup tools use the innovative FREEZE and UNFREEZE commands. Unlike the traditional table lock and unlock feature of, e.g., MySQL, FREEZE stops flushing data to disk while still permitting writes (to some extent) and selects of updated data from the table.

However, if your RAM chunk size grows beyond the rt_mem_limit threshold during lengthy backup operations involving many inserts, data may be flushed to disk, and write operations will be blocked until flushing is complete. Despite this, the tool maintains a balance between table locking, data consistency, and database write availability while the table is frozen.

When you use manticore-backup or the SQL BACKUP command, the FREEZE command is executed once and freezes all tables you are backing up simultaneously. The backup process subsequently backs up each table one by one, releasing the freeze after successfully backing up each table.

If the backup fails or is interrupted, the tool attempts to unfreeze all the tables.
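The same mechanism is exposed via SQL, in case you ever need to take a file-level copy yourself. A minimal sketch, assuming an RT table named products (a placeholder) and a Manticore version that supports FREEZE/UNFREEZE; the backup tools do this for you automatically:

FREEZE products;
/* copy the table files listed in the FREEZE output to your backup location, then: */
UNFREEZE products;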

Restore using the manticore-backup tool

To restore a Manticore instance from a backup, use the manticore-backup command with the --backup-dir and --restore arguments. For example: manticore-backup --backup-dir=/path/to/backups --restore. If you don't provide any argument for --restore, it will simply list all the backups in the --backup-dir.

Example:

manticore-backup --backup-dir=/mnt/backup/ --restore

Response:
Copyright (c) 2023-2024, Manticore Software LTD (https://manticoresearch.com)

Manticore config file:
Backup dir: /tmp/

Available backups: 3
  backup-20221006144635 (Oct 06 2022 14:46:35)
  backup-20221006145233 (Oct 06 2022 14:52:33)
  backup-20221007104044 (Oct 07 2022 10:40:44)

To start a restore job, run manticore-backup with the flag --restore=<backup name>, where <backup name> is the name of the backup directory within --backup-dir. Note that:

  1. There can't be any Manticore instance running on the same host and port as the one being restored.
  2. The old manticore.json file must not exist.
  3. The old configuration file must not exist.
  4. The old data directory must exist and be empty.

If all conditions are met, the restore will proceed. The tool provides hints, so you don't have to memorize these requirements. It's crucial to avoid overwriting existing files, so be sure to remove them before the restore if they still exist; hence all the conditions.
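As a sketch, preparing a host for restore might look like the following, assuming the default Linux package paths (/etc/manticoresearch/manticore.conf and a data dir of /var/lib/manticore); adjust to your environment:

systemctl stop manticore                          # no instance may be running
mv /etc/manticoresearch/manticore.conf /tmp/      # the old config must not exist
rm -f /var/lib/manticore/manticore.json           # the old manticore.json must not exist
find /var/lib/manticore -mindepth 1 -delete       # the data dir must exist and be empty
manticore-backup --backup-dir=/mnt/backup/ --restore=backup-20221007104044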

Example:

manticore-backup --backup-dir=/mnt/backup/ --restore=backup-20221007104044

Response:
Copyright (c) 2023-2024, Manticore Software LTD (https://manticoresearch.com)

Manticore config file:
Backup dir: /tmp/
2022-10-07 11:17:25 [Info] Starting to restore...

Manticore config
  endpoint =  127.0.0.1:9308
2022-10-07 11:17:25 [Info] Restoring config files...
2022-10-07 11:17:25 [Info]   config files - OK
2022-10-07 11:17:25 [Info] Restoring state files...
2022-10-07 11:17:25 [Info]   config files - OK
2022-10-07 11:17:25 [Info] Restoring data files...
2022-10-07 11:17:25 [Info]   config files - OK
2022-10-07 11:17:25 [Info] The backup '/tmp/backup-20221007104044' was successfully restored.
2022-10-07 11:17:25 [Info] Elapsed time: 0.02s
2022-10-07 11:17:25 [Info] Done

Backup and restore with mysqldump

NOTE: some versions of mysqldump / mariadb-dump require Manticore Buddy. If the dump isn't working, make sure Buddy is installed.

To create a backup of your Manticore Search database, you can use the mysqldump command. We will use the default port and host in the examples.

Note, mysqldump is supported only for real-time tables.

Example:
mysqldump -h0 -P9306 manticore > manticore_backup.sql
mariadb-dump -h0 -P9306 manticore > manticore_backup.sql

Executing this command will produce a backup file named manticore_backup.sql. This file will hold all data and table schemas.

Restore

If you're looking to restore a Manticore Search database from a backup file, the mysql client is your tool of choice.

Note, if you are restoring in Plain mode, you cannot drop and recreate tables directly. Therefore, you should:

  • Use mysqldump with the -t option to exclude CREATE TABLE statements from your backup.
  • Manually TRUNCATE the tables before proceeding with the restoration.
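For the plain-mode case described above, the whole flow might look like this sketch (tbl and tbl_data.sql are placeholder names):

mysqldump -h0 -P9306 -t manticore tbl > tbl_data.sql    # data only, no CREATE TABLE
mysql -h0 -P9306 -e "TRUNCATE TABLE tbl"                # empty the table first
mysql -h0 -P9306 manticore < tbl_data.sql               # load the rows back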
Example:
mysql -h0 -P9306 < manticore_backup.sql
mariadb -h0 -P9306 < manticore_backup.sql

This command enables you to restore everything from the manticore_backup.sql file.

Additional options

Here are some more settings that can be used with mysqldump to tailor your backup:

  • -t: skips DROP/CREATE TABLE statements. Useful for full-text reindexation of a table after changing tokenization settings.
  • --no-data: omits table data from the backup, resulting in a backup file that contains only table schemas.
  • --ignore-table=[database_name].[table_name]: skips a particular table during the backup operation. Note that the database name must be manticore.
  • --replace: performs REPLACE instead of INSERT. Useful for full-text reindexation of a table after changing tokenization settings.
  • --net-buffer-length=16M: makes batches up to 16 megabytes large for faster restoration.
  • -e: batches up documents. Useful for faster restoration.
  • -c: keeps column names. Useful for reindexation of a table after changing its schema (e.g., changing the field order).

For a comprehensive list of settings and their descriptions, refer to the official MySQL or MariaDB documentation.
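For instance, a dump intended for fast full-text reindexation after changing tokenization settings might combine several of these options (tbl is a placeholder table name):

mysqldump -h0 -P9306 -t -c -e --replace --net-buffer-length=16M manticore tbl > tbl_reindex.sql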

Notes

  • To create a dump in replication mode (where the dump includes INSERT/REPLACE INTO <cluster_name>:<table_name>):
    • Use the cluster user. For example: mysqldump -u cluster ... or mariadb-dump -u cluster .... You can change the username that enables replication mode for mysqldump by running SET GLOBAL cluster_user = new_name.
    • Use the -t flag.
    • When specifying a table in replication mode, you need to follow the cluster_name:table_name syntax. For example: mysqldump -P9306 -h0 -t -ucluster manticore cluster:tbl.
  • It's recommended to explicitly specify the manticore database when you plan to back up all databases, instead of using the --all-databases option.
  • Note that mysqldump does not support backing up distributed tables and cannot back up tables containing non-stored fields. For such cases, consider using manticore-backup or the BACKUP SQL command. If you have distributed tables, it is recommended to always specify the tables to be dumped.

Real-time table structure

A plain table can be created from an external source using a special tool called indexer, which reads a "recipe" from the configuration, connects to the data sources, pulls documents, and builds table files. This is a lengthy process. If your data changes, the table becomes outdated, and you need to rebuild it from the refreshed sources. If your data changes incrementally, such as a blog or newsfeed where old documents never change and only new ones are added, the rebuild will take more and more time, as you will need to process the archive sources again and again with each pass.

One way to deal with this problem is by using several tables instead of one solid table. For example, you can process sources produced in previous years and save the table. Then, take only sources from the current year and put them into a separate table, rebuilding it as often as necessary. You can then place both tables as parts of a distributed table and use it for querying. The point here is that each time you rebuild, you only process data from the last 12 months at most, and the table with older data remains untouched without needing to be rebuilt. You can go further and divide the last 12 months table into monthly, weekly, or daily tables, and so on.
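In plain mode, such a layout could be sketched in the configuration roughly like this (table and source names are illustrative, and the plain-table details are abbreviated):

table archive {
    # documents from previous years, rebuilt rarely
    type = plain
    source = src_archive
    path = /var/lib/manticore/data/archive
}

table current {
    # documents from the current year, rebuilt as often as needed
    type = plain
    source = src_current
    path = /var/lib/manticore/data/current
}

table blog {
    # searches both parts at once
    type = distributed
    local = archive
    local = current
}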

This approach works, but you need to maintain your distributed table manually: add new chunks, delete old ones, and keep the overall number of partial tables reasonably small (with too many tables, searching can become slower, and the OS usually limits the number of simultaneously opened files). To help with this, you can manually merge several tables together by running indexer --merge. However, this only solves the problem of having too many tables; it doesn't make maintenance any easier. And even with 'per-hour' reindexing, you will most likely have a noticeable time gap between new data arriving in the sources and the table rebuild that makes this data searchable.
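The merge itself is a single command; a sketch with placeholder table names main and delta:

# merge the contents of 'delta' into 'main' and hot-swap the result without a restart
indexer --merge main delta --rotate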

A real-time table is designed to solve this problem. It consists of two parts:

  1. A special RAM-based table (called RAM chunk) that contains portions of data arriving right now.
  2. A collection of plain tables called disk chunks that were built in the past.

This is very similar to a standard distributed table, made from several local tables.

You don't need to build such a table by running indexer, which reads a "recipe" from the config and indexes data sources. Instead, the real-time table provides the ability to 'insert' new documents and 'replace' existing ones. When executing the 'insert' command, you push new documents to the server. It then builds a small table from the added documents and immediately brings it online. So, right after the 'insert' command completes, you can perform searches in all table parts, including the just-added documents.
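In SQL this is ordinary DML; a minimal sketch with an illustrative schema and table name:

CREATE TABLE rt_blog (title text, content text, published timestamp);

INSERT INTO rt_blog (id, title, content, published)
VALUES (1, 'Hello', 'First post', 1696400000);

/* the document is searchable immediately */
SELECT id, title FROM rt_blog WHERE MATCH('first');

/* replace an existing document by id */
REPLACE INTO rt_blog (id, title, content, published)
VALUES (1, 'Hello again', 'First post, edited', 1696400100);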

The search server automatically maintains the table, so you don't have to worry about it. However, you might be interested in learning a few details about 'how it is maintained'.

First, since indexed data is stored in RAM, what about an emergency power-off? Will I lose my table then? Well, before completion, the server saves new data into a special 'binlog'. This consists of one or several files on persistent storage that grow incrementally as you add more and more changes. You can adjust how often new queries (or transactions) are stored in the binlog, and how often the 'sync' command is executed over the binlog file to force the OS to actually write the data to safe storage. The most paranoid approach is to flush and sync after every transaction. This is the slowest but also the safest method. The cheapest way is to switch off the binlog entirely. This is the fastest method, but you risk losing your indexed data. Intermediate variants, like flushing/syncing every second, are also provided.
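These modes map to the binlog_flush setting in the searchd section of the configuration; a sketch (the value semantics below reflect the documented behavior, so double-check them for your version):

searchd {
    # (other searchd settings omitted)
    # 0 = flush and sync once per second (fastest binlog mode)
    # 1 = flush and sync after every transaction (safest, slowest)
    # 2 = flush after every transaction, sync once per second (the default)
    binlog_flush = 2

    # setting binlog_path to empty disables binlogging entirely (fastest, riskiest)
    # binlog_path =
}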

The binlog is designed specifically for sequential saving of newly arriving transactions; it is not a table and cannot be searched over. It is merely an insurance policy to ensure that the server will not lose your data. If a sudden disruption occurs and everything crashes due to a software or hardware problem, the server will load the freshest available dump of the RAM chunk and then replay the binlog, repeating stored transactions. Ultimately, it will achieve the same state as it was in at the moment of the last change.

Second, what about limits? What if I want to process, say, 10TB of data, but it just doesn't fit into RAM! RAM for a real-time table is limited and can be configured. When a certain amount of data is indexed, the server manages the RAM part of the table by merging together small transactions, keeping their number and overall size small. This process can sometimes cause delays during insertion, however. When merging no longer helps, and new insertions hit the RAM limit, the server converts the RAM-based table into a plain table stored on disk (called a disk chunk). This table is added to the collection of tables in the second part of the RT table and becomes accessible online. The RAM is then flushed, and the space is deallocated.
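The RAM limit in question is the per-table rt_mem_limit setting (128M by default); a sketch of raising it in RT mode, with a placeholder table name:

/* set the limit at creation time */
CREATE TABLE big_rt (content text) rt_mem_limit = '512M';

/* or adjust it on an existing table */
ALTER TABLE big_rt rt_mem_limit = '1G';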

When the data from RAM is securely saved to disk, which occurs:

  • when the server saves the collected data as a disk table
  • or when it dumps the RAM part during a clean shutdown or by manual flushing

the binlog for that table is no longer necessary. So, it gets discarded. If all the tables are saved, the binlog will be deleted.

Third, what about disk collection? If having many disk parts makes searching slower, what's the difference if I make them manually in the distributed table manner, or they're produced as disk parts (or, 'chunks') by an RT table? Well, in both cases, you can merge several tables into one. For example, you can merge hourly tables from yesterday and keep one 'daily' table for yesterday instead. With manual maintenance, you have to think about the schema and commands yourself. With an RT table, the server provides the OPTIMIZE command, which does the same, but keeps you away from unnecessary internal details.
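For example (rt_blog is a placeholder; cutoff optionally keeps a few chunks instead of merging everything down to one):

OPTIMIZE TABLE rt_blog;

/* or stop merging once the table is down to 4 disk chunks */
OPTIMIZE TABLE rt_blog OPTION cutoff=4;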

Fourth, if my "document" constitutes a 'mini-table' and I don't need it anymore, I can just throw it away. But if it is 'optimized', i.e., mixed together with tons of other documents, how can I undo or delete it? Yes, indexed documents are 'mixed' together, and there is no easy way to delete one without rebuilding the whole table. And while for plain tables rebuilding or merging is just a normal part of maintenance, for a real-time table that would preserve only the simplicity of manipulation, not the 'real-timeness'.

To address the problem, Manticore uses a trick: when you delete a document, identified by document ID, the server just tracks that ID. Together with the IDs of other deleted documents, it is saved in a so-called kill-list. When you search over the table, the server first retrieves all matching documents and then throws out the ones found in the kill-list (that is the most basic description; internally it's more complex). The point is: for the sake of 'immediate' deletion, documents are not actually deleted but just marked as 'deleted'. They still occupy space in different table structures, being essentially garbage. Word statistics, which affect ranking, aren't adjusted either, meaning it works exactly as declared: we search among all documents and then just hide the ones marked as deleted from the final result. When a document is replaced, it is killed in the old parts of the table and inserted again into the freshest part. All the consequences of 'hiding by kill-list' apply in this case too.

When a rebuild of some part of a table happens, e.g., when some transactions (segments) of a RAM chunk are merged, or when a RAM chunk is converted into a disk chunk, or when two disk chunks are merged together, the server performs a comprehensive iteration over the affected parts and physically excludes deleted documents from all of them. That is, if they were in document lists of some words - they are wiped away. If it was a unique word - it gets removed completely.

As a summary: the deletion works in two phases:

  1. First, we mark documents as 'deleted' in real-time and suppress them in search results.
  2. During some operation with an RT table chunk, we finally physically wipe the deleted documents for good.
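Both phases are visible in plain SQL; a sketch (rt_blog as before):

DELETE FROM rt_blog WHERE id = 1;          /* phase 1: the id goes to the kill-list */
SELECT COUNT(*) FROM rt_blog WHERE id = 1; /* already returns 0 */
OPTIMIZE TABLE rt_blog;                    /* phase 2: merging physically wipes the row */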

Fifth, if an RT table contains plain disk tables in its collection, can I just add my ready-made old disk table to it? No, this is disallowed, to avoid unneeded complexity and prevent accidental corruption. However, if your RT table has just been created and contains no data, then you can ATTACH TABLE your disk table to it. Your old table will be moved inside the RT table and will become its part.
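A sketch of the attach flow, assuming a plain table plain_blog and a freshly created, empty RT table (names are placeholders; check the exact ATTACH syntax for your version):

CREATE TABLE rt_blog (title text, content text);   /* must be freshly created and empty */
ATTACH TABLE plain_blog TO TABLE rt_blog;          /* plain_blog becomes part of rt_blog */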

As a summary about the RT table structure: it is a cleverly organized collection of plain disk tables with a fast in-memory table, intended for real-time insertions and semi-real-time deletions of documents. The RT table has a common schema, common settings, and can be easily maintained without deep digging into details.