Compiling Manticore Search from sources enables custom build configurations, such as disabling certain features or adding new patches for testing. For example, you may want to compile from sources and disable the embedded ICU in order to use a different version installed on your system that can be upgraded independently of Manticore. This is also useful if you are interested in contributing to the Manticore Search project.
To prepare official release and development packages, we use Docker and a special building image. This image includes essential tooling and is designed to be used with external sysroots, so one container can build packages for all operating systems. You can build the image using the Dockerfile and README or use an image from Docker Hub. This is the easiest way to create binaries for any supported operating system and architecture. You'll also need to specify the following environment variables when running the container:
- DISTR: the target platform
- arch: the architecture
- SYSROOT_URL: the URL to the sysroot archives. You can use https://repo.manticoresearch.com/repository/sysroots unless you are building the sysroots yourself (instructions can be found here).
- Use the CI yaml file as a reference to find the other environment variables you might need to use - https://github.com/manticoresoftware/manticoresearch/blob/master/dist/gitlab-release.yml
To find possible values for arch, you can use the directory https://repo.manticoresearch.com/repository/sysroots/roots_with_zstd/ as a reference, as it includes sysroots for all supported combinations.
After that, building packages inside the Docker container is as easy as calling:
```bash
cmake -DPACK=1 /path/to/sources
cmake --build .
```
For example, to create the same RedHat 7 package as the official one, but without the embedded ICU and its large data file, you can execute the following (assuming that the sources are placed in /manticore/sources/ on the host):
```bash
docker run -it --rm \
  -e SYSROOT_URL=https://repo.manticoresearch.com/repository/sysroots \
  -e arch=x86_64 \
  -e DISTR=rhel7 \
  -e boost=boost_rhel_feb17 \
  -e sysroot=roots_nov22 \
  -v /manticore/sources:/manticore_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \
  manticoresearch/external_toolchain:clang15_cmake3243 bash

# the following is to be run inside the docker shell
cd /manticore_aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/
RELEASE_TAG="noicu"
mkdir build && cd build
cmake -DPACK=1 -DBUILD_TAG=$RELEASE_TAG -DWITH_ICU_FORCE_STATIC=0 ..
cmake --build . --target package
```
The long source directory path is required; otherwise, the build may fail.
The same process can be used to build binaries/packages not only for popular Linux distributions, but also for FreeBSD, Windows, and macOS.
Compiling Manticore without using the building Docker is not recommended, but if you need to do it, here's what you may need to know:
- C++ compiler
- On Linux - GNU (4.7.2 and above) or Clang can be used
- On Windows - Microsoft Visual Studio 2019 and above (the Community edition is enough)
- On macOS - Clang (from the command line tools of XCode; use xcode-select --install to install it)
- Bison, Flex - on most systems, they are available as packages; on Windows, they are available in the Cygwin framework.
- CMake - used on all platforms (version 3.19 or above is required)
Manticore source code is hosted on GitHub.
To obtain the source code, clone the repository and then check out the desired branch or tag. The branch master represents the main development branch. Upon release, a versioned tag is created, such as 3.6.0, and a new branch for the current release is started, in this case manticore-3.6.0. The head of the versioned branch, after all changes, is used as the source to build all binary releases. For example, to take the sources of version 3.6.0, you can run:
```bash
git clone https://github.com/manticoresoftware/manticoresearch.git
cd manticoresearch
git checkout manticore-3.6.0
```
You can download the desired code from GitHub by using the "Download ZIP" button. Both .zip and .tar.gz formats are suitable.
```bash
wget -c https://github.com/manticoresoftware/manticoresearch/archive/refs/tags/3.6.0.tar.gz
tar -zxf 3.6.0.tar.gz
cd manticoresearch-3.6.0
```
Manticore uses CMake. Assuming you are inside the root directory of the cloned repository:
```bash
mkdir build && cd build
cmake ..
```
CMake will investigate available features and configure the build according to them. By default, all features are considered enabled if they are available. The script also downloads and builds some external libraries, assuming that you want to use them. Implicitly, you get support for the maximal number of features.
You can also configure the build explicitly with flags and options. To enable a feature FOO, add -DFOO=1 to the CMake call. To disable it, use -DFOO=0. If not explicitly noted, enabling a feature that is not available (such as WITH_GALERA on an MS Windows build) will cause the configuration to fail with an error. Disabling a feature, apart from excluding it from the build, also disables its investigation on the system and disables the downloading/building of any related external libraries.
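For instance, a configure call that follows this convention - turning one feature on and another off (the particular flags are just an illustration) - might look like:

```bash
mkdir build && cd build
# enable replication support, disable the embedded ICU
cmake -DWITH_GALERA=1 -DWITH_ICU=0 ..
```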
- USE_SYSLOG - allows the use of syslog in query logging.
- WITH_GALERA - Enables support for replication on the search daemon. Support will be configured for the build, and the sources of the Galera library will be downloaded, built, and included in the distribution/installation. Usually, it is safe to build with Galera but not distribute the library itself (no Galera module means no replication). However, sometimes you may need to explicitly disable it - for example, if you want to build a static binary which, by design, cannot load any libraries, so that even the presence of a call to the 'dlopen' function inside the daemon would cause a link error.
- WITH_RE2 - Builds with the use of the RE2 regular expression library. This is necessary for functions like REGEX(), and the regexp_filter feature.
- WITH_RE2_FORCE_STATIC - Downloads the sources of RE2, compiles them, and links them statically, so that the final binaries will not depend on the presence of a shared RE2 library in your system.
- WITH_STEMMER - Builds with the use of the Snowball stemming library.
- WITH_STEMMER_FORCE_STATIC - Downloads the Snowball sources, compiles them, and links them statically, so that the final binaries will not depend on the presence of a shared libstemmer library in your system.
- WITH_ICU - Builds with the use of ICU, the International Components for Unicode library. This is used in the tokenization of Chinese, for text segmentation. It is in play when a morphology like icu_chinese is in use.
- WITH_ICU_FORCE_STATIC - Downloads the ICU sources, compiles them, and links them statically, so that the final binaries will not depend on the presence of a shared icu library in your system. It also includes the ICU data file in the installation/distribution. The purpose of a statically linked ICU is to have a library of a known version, so that behavior is deterministic and not dependent on any system libraries. You will most likely prefer to use the system ICU instead, as it may be updated over time without the need to recompile the Manticore daemon. In that case, you need to explicitly disable this option. This will also save you some space occupied by the ICU data file (about 30M), as it will not be included in the distribution.
- WITH_SSL - Used for supporting HTTPS, as well as encrypted MySQL connections to the daemon. The system OpenSSL library will be linked to the daemon. This implies that OpenSSL will be required to start the daemon. It is mandatory for HTTPS support, but not strictly mandatory for the server (i.e., no SSL means no possibility to connect via HTTPS, but other protocols will work). SSL library versions from 1.0.2 to 1.1.1 may be used by Manticore; however, note that for the sake of security it's highly recommended to use the freshest possible SSL library. For now, only v1.1.1 is supported, the rest are outdated (see the OpenSSL release strategy).
- WITH_ZLIB - used by the indexer to work with compressed columns from MySQL. Used by the daemon to provide support for the compressed MySQL protocol.
- WITH_ODBC - Used by the indexer to support indexing sources from ODBC providers (typically UnixODBC and iODBC). On MS Windows, ODBC is the proper way to work with MS SQL sources, so indexing of MSSQL also implies this flag.
- DL_ODBC - Don't link with the ODBC library. If ODBC is linked but not available, you can't start the indexer tool even if you want to process something not related to ODBC. This option tells the indexer to load the library at runtime, and only when you want to deal with an ODBC source.
- ODBC_LIB - Name of the ODBC library file. The indexer will try to load that file when you want to process an ODBC source. This option is written automatically based on the investigation of the available ODBC shared library. You can also override that name at runtime by providing the environment variable ODBC_LIB with the proper path to an alternative library before running the indexer.
- WITH_EXPAT - used by the indexer to support indexing xmlpipe sources.
- DL_EXPAT - Don't link with the EXPAT library. If EXPAT is linked but not available, you can't start the indexer tool even if you want to process something not related to xmlpipe. This option tells the indexer to load the library at runtime, and only when you want to deal with an xmlpipe source.
- EXPAT_LIB - Name of the EXPAT library file. The indexer will try to load that file when you want to process an xmlpipe source. This option is written automatically based on the investigation of the available EXPAT shared library. You can also override that name at runtime by providing the environment variable EXPAT_LIB with the proper path to an alternative library before running the indexer.
- WITH_ICONV - For supporting different encodings when indexing xmlpipe sources with the indexer.
- DL_ICONV - Don't link with the iconv library. If iconv is linked but not available, you can't start the indexer tool even if you want to process something not related to xmlpipe. This option tells the indexer to load the library at runtime, and only when you want to deal with an xmlpipe source.
- ICONV_LIB - Name of the iconv library file. The indexer will try to load that file when you want to process an xmlpipe source. This option is written automatically based on the investigation of the available iconv shared library. You can also override that name at runtime by providing the environment variable ICONV_LIB with the proper path to an alternative library before running the indexer.
- WITH_MYSQL - used by the indexer to support indexing MySQL sources.
- DL_MYSQL - Don't link with the MySQL library. If MySQL is linked but not available, you can't start the indexer tool even if you want to process something not related to MySQL. This option tells the indexer to load the library at runtime, and only when you want to deal with a MySQL source.
- MYSQL_LIB - Name of the MySQL library file. The indexer will try to load that file when you want to process a MySQL source. This option is written automatically based on the investigation of the available MySQL shared library. You can also override that name at runtime by providing the environment variable MYSQL_LIB with the proper path to an alternative library before running the indexer.
- WITH_POSTGRESQL - used by the indexer to support indexing PostgreSQL sources.
- DL_POSTGRESQL - Don't link with the PostgreSQL library. If PostgreSQL is linked but not available, you can't start the indexer tool even if you want to process something not related to PostgreSQL. This option tells the indexer to load the library at runtime, and only when you want to deal with a PostgreSQL source.
- POSTGRESQL_LIB - Name of the PostgreSQL library file. The indexer will attempt to load that file when processing a PostgreSQL source. This option is written automatically based on the investigation of the available PostgreSQL shared library. You can also override the name at runtime by providing the environment variable POSTGRESQL_LIB with the proper path to an alternative library before running the indexer.
- LOCALDATADIR - The default path where the daemon stores binlogs. If this path is not provided or explicitly disabled in the daemon's runtime config (i.e., the file manticore.conf, which is not related to this build configuration), binlogs will be placed in this path. It is typically an absolute path; however, that is not required, and relative paths can also be used. You probably would not need to change the default value defined by the configuration, which, depending on the target system, might be something like /var/lib/manticore/data.
- FULL_SHARE_DIR - The default path where all assets are stored. It can be overridden by the environment variable FULL_SHARE_DIR before starting any tool that utilizes files from that folder. This is an important path, as many things are expected to be found there by default: predefined charset tables, stopwords, Manticore modules, and ICU data files are all placed in that folder. The configuration script usually determines this path to be something like /usr/share/manticore.
- DISTR_BUILD - a shortcut for the options for releasing packages. This is a string value with the name of the target platform. It can be used instead of manually configuring all the options. On Debian and Redhat Linuxes, the default value might be determined by light introspection and set to a generic 'Debian' or 'RHEL'. Otherwise, the value is not defined.
- PACK - An even more convenient shortcut. It reads the DISTR environment variable, assigns it to the DISTR_BUILD parameter, and then works as usual. This is very useful when building in prepared build systems, like Docker containers, where the DISTR variable is set at the system level and reflects the target system for which the container is intended.
- CMAKE_INSTALL_PREFIX (path) - Where Manticore is expected to be installed. Building does not perform any installation, but it prepares the installation rules that are executed when you run the cmake --install command or create a package and then install it. The prefix can be changed at any time, even during installation, by invoking cmake --install . --prefix /path/to/installation. However, at configure time, this variable is used to initialize the default values of paths like FULL_SHARE_DIR. For example, setting it to /my/custom at configure time will hardcode the default paths derived from that prefix.
- BUILD_TESTING (bool) - Whether to support testing. If enabled, after the build you can run 'ctest' and test the build. Note that testing implies additional dependencies, such as at least the presence of PHP cli, Python, and an available MySQL server with a test database. By default, this parameter is on. So, for a 'just build', you might want to disable the option by explicitly specifying the 'off' value.
- LIBS_BUNDLE - Path to a folder with different libraries. This is mostly relevant for Windows building, but may also be helpful if you have to build often, in order to avoid downloading third-party sources each time. By default, this path is never modified by the configuration script; you should put everything there manually. When, say, we want support for a stemmer, the sources will be downloaded from the Snowball homepage, then extracted, configured, built, etc. Instead, you can store the original source tarball (which is libstemmer_c.tgz) in this folder. The next time you want to build from scratch, the configuration script will first look in the bundle, and if it finds the stemmer there, it will not download it again from the Internet.
- CACHEB - Path to a folder with stored builds of third-party libraries. Usually, features like galera, re2, icu, etc. are first downloaded (or taken from the bundle), then unpacked, built, and installed into a temporary internal folder. When building Manticore, that folder is used as the place where the things required to support the requested features live. Finally, they are either linked with Manticore, if they are libraries, or go directly into the distribution/installation (like galera or the icu data). When CACHEB is defined, either as a cmake config param or as a system environment variable, it is used as the target folder for these builds. This folder may be kept across builds, so that the libraries stored there will not be rebuilt anymore, making the whole build process much shorter.
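The XXX_LIB options above can also be overridden at runtime through an environment variable of the same name. As an illustration (the library path is purely hypothetical, and the indexer itself is not invoked here - a child shell just prints the variable the indexer would read):

```bash
# Hypothetical path; the indexer reads ODBC_LIB from its environment at runtime
ODBC_LIB=/usr/local/lib/libodbc.so.2 sh -c 'echo "would load: $ODBC_LIB"'
```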
Note that some options are organized in triples: WITH_XXX, DL_XXX, and XXX_LIB - like the support of mysql, odbc, etc. WITH_XXX determines whether the next two have an effect or not. I.e., if you set WITH_ODBC to 0, there is no sense in providing DL_ODBC and ODBC_LIB - these two will have no effect if the whole feature is disabled. Also, XXX_LIB makes no sense without DL_XXX, because if you don't set the DL_XXX option, dynamic loading will not be used, and the name provided by XXX_LIB is useless. The default introspection follows these rules.
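For example, a consistent triple for MySQL - these exact values appear in the rhel8 build definitions printed by indexer -h later on this page - might be passed as:

```bash
# feature on, dynamic loading on, and the library name to load at runtime
cmake -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmariadb.so.3 ..
```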
Also, using the iconv library assumes expat and is useless if the latter is disabled.
Also, some libraries may always be available, and so there is no sense in avoiding linkage with them. For example, on Windows that is ODBC. On macOS, that is Expat, iconv, and maybe others. The default introspection determines such libraries and effectively emits only WITH_XXX for them, without DL_XXX and XXX_LIB, which makes things simpler.
With some options in play, configuring might look like:

```bash
mkdir build && cd build
cmake -DWITH_MYSQL=1 -DWITH_RE2=1 ..
```
Apart from the general configuration values, you may also investigate the file CMakeCache.txt, which is left in the build folder right after you run the configuration. Any values defined there may be redefined explicitly when running cmake. For example, you may run cmake -DHAVE_GETADDRINFO_A=FALSE ..., and that configuration run will not use the investigated value of that variable, but the one you've provided.
Environment variables are useful for providing some kind of global settings which are stored apart from the build configuration and are always present. For persistence, they may be set globally on the system in different ways - by adding them to the .bashrc file, embedding them into a Dockerfile if you produce a Docker-based build system, or writing them in the system preferences environment variables on Windows. You may also set them short-lived using export VAR=value in the shell. Or even shorter, by prepending values to the cmake call, like CACHEB=/my/cache cmake ... - this way, it will only work on this call and will not be visible on the next.
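The difference in scope between the two short-lived forms can be seen in a plain shell session (the CACHEB value here is just an example):

```bash
# Prepended form: the variable exists only for that one command
CACHEB=/my/cache sh -c 'echo "this call sees: $CACHEB"'
echo "the next command sees: '${CACHEB:-nothing}'"

# Exported form: the variable persists for the rest of the session
export CACHEB=/my/cache
echo "now every command sees: $CACHEB"
```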
Some of these variables are known to be used in general by cmake and some other tools. These are things like CXX, which determines the current C++ compiler, or CXX_FLAGS, which provides compiler flags. However, we also have some variables that are specific to Manticore configuration, invented solely for our builds:
- CACHEB - same as the config CACHEB option
- LIBS_BUNDLE - same as the config LIBS_BUNDLE option
- DISTR - used to initialize the DISTR_BUILD value when -DPACK=1 is in use
- DIAGNOSTIC - makes the output of cmake configuration much more verbose, explaining everything happening
- WRITEB - assumes LIBS_BUNDLE and, if set, will download source archive files for different tools to the LIBS_BUNDLE folder. That is, if a fresh version of the stemmer comes out, you can manually remove libstemmer_c.tgz from the bundle and then run a one-shot WRITEB=1 cmake ... - it will not find the stemmer's sources in the bundle and will then download them from the vendor's site to the bundle (without WRITEB, it would download them into a temporary folder inside the build, and they would disappear when you wipe the build folder).
At the end of configuration, you may see what is available and will be used in a list like this one:
```
-- Enabled features compiled in:
 * Galera, replication of tables
 * re2, a regular expression library
 * stemmer, stemming library (Snowball)
 * icu, International Components for Unicode
 * OpenSSL, for encrypted networking
 * ZLIB, for compressed data and networking
 * ODBC, for indexing MSSQL (windows) and generic ODBC sources with indexer
 * EXPAT, for indexing xmlpipe sources with indexer
 * Iconv, for support of different encodings when indexing xmlpipe sources with indexer
 * MySQL, for indexing MySQL sources with indexer
 * PostgreSQL, for indexing PostgreSQL sources with indexer
```
To build, run:

```bash
cmake --build . --config RelWithDebInfo
```
To install, run:

```bash
cmake --install . --config RelWithDebInfo
```
To install into a custom (non-default) folder, run:

```bash
cmake --install . --prefix path/to/build --config RelWithDebInfo
```
For building a package, use the target package. It will build the package according to the selection provided by the -DDISTR_BUILD option. By default, it will be a simple .zip or .tgz archive with all binaries and supplementary files.
```bash
cmake --build . --target package --config RelWithDebInfo
```
If you haven't changed the path for sources and build, simply move to your build folder and run:
```bash
cmake .
cmake --build . --clean-first --config RelWithDebInfo
```
If for any reason it doesn't work, you can delete the file CMakeCache.txt located in the build folder. After this step, you have to run cmake again, pointing to the source folder and configuring the options.
If it also doesn't help, just wipe out your build folder and begin from scratch.
Briefly - just use --config RelWithDebInfo as written above; you can't go wrong with it.
We use two build types. For development, it is Debug - it assigns compiler flags for optimization and other things in a way that is very friendly for development, meaning debug runs with step-by-step execution. However, the produced binaries are quite large and slow for production.

For releasing, we use another type - RelWithDebInfo - which means 'release build with debug info'. It produces production binaries with embedded debug info. The latter is then split away into separate debuginfo packages, which are stored alongside the release packages and may be used in case of issues like crashes, for investigation and bugfixing. CMake also provides other build types, such as MinSizeRel, but we don't use them. If the build type is not available, cmake will make a noconfig build.
There are two types of generators: single-config and multi-config.
- Single-config needs the build type provided during configuration, via the CMAKE_BUILD_TYPE parameter. If it is not defined, the build falls back to the RelWithDebInfo type, which is suitable if you just want to build Manticore from sources and do not participate in development. For explicit builds, you should provide a build type, like -DCMAKE_BUILD_TYPE=Debug.
- Multi-config selects the build type during the build. It should be provided via the --config option; otherwise, a kind of noconfig build is made, which is not desirable. So, you should always specify the build type, like --config Debug.
If you want to specify the build type but don't want to care about whether it is a 'single' or 'multi' config generator, just provide the necessary keys in both places. I.e., configure with -DCMAKE_BUILD_TYPE=Debug, and then build with --config Debug. Just be sure that both values are the same. If the target builder is single-config, it will consume the configuration param. If it is multi-config, the configuration param will be ignored, but the correct build configuration will be selected by the --config option.
If you want RelWithDebInfo (i.e., just a build for production) and know you're on a single-config platform (that is, all except Windows), you can omit the --config flag on the cmake invocation. The default CMAKE_BUILD_TYPE=RelWithDebInfo will then be configured and used. All the commands for 'building', 'installation', and 'building package' then become shorter.
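Under those assumptions (a single-config generator with the default RelWithDebInfo), the whole cycle shortens to:

```bash
cmake ..                           # configure; RelWithDebInfo is assumed
cmake --build .                    # build
cmake --install .                  # install
cmake --build . --target package   # build a package
```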
CMake is a tool that doesn't perform building by itself; instead, it generates rules for the local build system. Usually, it determines the available build system well, but sometimes you might need to provide a generator explicitly. You can run cmake -G and review the list of available generators.
- On Windows, if you have more than one version of Visual Studio installed, you might need to specify which one to use, e.g. cmake -G "Visual Studio 16 2019" ....
- On all other platforms, Unix Makefiles are usually used, but you can specify another generator, such as Ninja or Ninja Multi-Config, as:

```bash
cmake -GNinja ...
```

or

```bash
cmake -G"Ninja Multi-Config" ...
```
Ninja Multi-Config is quite useful, as it is really 'multi-config' and available on Linux/macOS/BSD. With this generator, you may shift the choice of configuration type to build time, and you may also build several configurations in one and the same build folder, changing only the --config param.
- If you finally want to build a full-featured RPM package, the path to the build directory must be long enough to correctly build debug symbols - /manticore012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789, for example. That is because RPM tools modify the path over the compiled binaries when building debug info, and they can only write over the existing room and cannot allocate more. The aforementioned long path is 100 characters, which is quite enough for such a case.
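You can sanity-check the path length in the shell; the path below is the same 100-character example:

```bash
# 100-character build path - long enough for RPM debuginfo path rewriting
BUILD_PATH=/manticore012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
echo "path length: ${#BUILD_PATH}"   # should print 100
```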
Some libraries should be available if you want to use them.
- For indexing (with the indexer tool) - expat, iconv, mysql, odbc, postgresql. Without them, you can only process sources that don't require these libraries (such as tsv/csv).
- For serving queries (with the searchd daemon) - openssl might be necessary.
- For everything (required, mandatory!), we need the Boost library. The minimal version is 1.61.0; however, we build the binaries with the fresher version 1.75.0. Even more recent versions (like 1.76) should also be okay. On Windows, you can download pre-built Boost from its site (boost.org) and install it into the default suggested path (i.e., C:\boost...). On macOS, the one provided in brew is okay. On Linux, you can check the available version in the official repositories, and if it doesn't match the requirements, you can build it from sources. We need the component 'context'; you can also build the components 'system' and 'program_options' - they will be necessary if you also want to build the Galera library from sources. Look into dist/build_dockers/xxx/boost_175/Dockerfile for a short self-documented script/instruction on how to do it.
On the build system, you need the 'dev' or 'devel' versions of these packages installed (i.e., libmysqlclient-devel, unixodbc-devel, etc. Look at our dockerfiles for the names of the concrete packages).

On run systems, these packages should be present at least in their final (non-dev) variants (the devel variants are usually larger, as they include not only the target binaries but also different development stuff like include headers, etc.).
Apart from the necessary prerequisites, you might need prebuilt postgresql client libraries. You have to either build them yourself or contact us to get our build bundle (a simple zip archive where the folder with these targets is located).
- ODBC is not necessary as it is a system library.
- OpenSSL might be built from sources or downloaded prebuilt from https://slproweb.com/products/Win32OpenSSL.html (as mentioned in the cmake internal script on FindOpenSSL).
- Boost might be downloaded pre-built from https://www.boost.org/ releases.
To see which features were configured and built, run indexer -h (whether they're explicit or investigated doesn't matter):
```
Built on Linux x86_64 by GNU 8.3.1 compiler.

Configured with these definitions: -DDISTR_BUILD=rhel8 -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmariadb.so.3 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore/data -DFULL_SHARE_DIR=/usr/share/manticore
```
Manticore Search 2.x maintains compatibility with Sphinxsearch 2.x and can load existing tables created by Sphinxsearch. In most cases, upgrading is just a matter of replacing the binaries.
Instead of sphinx.conf (on Linux normally located at /etc/sphinxsearch/sphinx.conf), Manticore by default uses /etc/manticoresearch/manticore.conf. It also runs under a different user and uses different folders.
The systemd service name has changed from sphinxsearch to manticore, and the service runs under the user manticore (Sphinx was using sphinxsearch). It also uses a different folder for the PID file.
The folders used by default are /var/lib/manticore, /var/log/manticore, and /var/run/manticore. You can still use the existing Sphinx config, but you need to manually change the permissions of the /var/log/sphinxsearch and other Sphinx folders. Or, just globally rename 'sphinx' to 'manticore' in the system files. If you use other folders (for data, wordforms files, etc.), their ownership must also be switched to the user manticore.
The pid_file location should be changed to match the one in manticore.service.
If you want to use the Manticore folder instead, the table files need to be moved to the new data folder (/var/lib/manticore), and the permissions must be changed to the user manticore.
Upgrading from Sphinx / Manticore 2.x to 3.x is not straightforward, as the table storage engine has undergone a significant upgrade and the new searchd cannot load older tables and upgrade them to the new format on-the-fly.
Manticore Search 3 got a redesigned table storage. Tables created with Manticore/Sphinx 2.x cannot be loaded by Manticore Search 3 without a conversion. Because of the 4GB limitation, a real-time table in 2.x could still have several disk chunks after an optimize operation. After upgrading to 3.x, these tables can now be optimized down to a single disk chunk with the usual OPTIMIZE command. The index files also changed. The only component that didn't get any structural changes is the .spp file (hitlists).
.sps (strings/json) and .spm (MVA) are now held by .spb (var-length attributes). The new format has an .spm file present, but it's used for the row map (previously it was dedicated to MVA attributes). The newly added extensions are .spt (docid lookup), .sphi (secondary index histograms), and .spds (document storage). If you are using scripts that manipulate table files, they should be adapted for the new file extensions.
The upgrade procedure may differ depending on your setup (number of servers in the cluster, whether you have high availability or not, etc.), but in general, it involves creating new 3.x table versions and replacing your existing ones, as well as replacing older 2.x binaries with the new ones.
There are two special requirements to take care of:
- Real-time tables need to be flushed using FLUSH RAMCHUNK
- Plain tables with kill-lists require adding a new directive in table configuration (see killlist_target)
Manticore Search 3 includes a new tool - index_converter - that can convert Sphinx 2.x / Manticore 2.x tables to 3.x format.
index_converter comes in a separate package, which should be installed first. Using the convert tool, create 3.x versions of your tables.
index_converter can write the new files in the existing data folder and back up the old files, or it can write the new files to a chosen folder.
If you have a single server:
- Install the manticore-converter package
- Use index_converter to create new versions of the tables in a folder different from the existing data folder
- Stop the existing Manticore/Sphinx, upgrade to 3.0, move the new tables to the data folder, and start Manticore
To minimize downtime, you can copy 2.x tables, config (you'll need to edit paths here for tables, logs, and different ports), and binaries to a separate location and start this on a separate port. Point your application to it. After upgrading to 3.0 and the new server is started, you can point the application back to the normal ports. If everything is good, stop the 2.x copy and delete the files to free up space.
If you have a spare box (like a testing or staging server), you can do the table upgrade there first and even install Manticore 3 to perform several tests. If everything is okay, copy the new table files to the production server. If you have multiple servers that can be pulled out of production, do it one by one and perform the upgrade on each. For distributed setups, 2.x searchd can work as a master with 3.x nodes, so you can do the upgrading on the data nodes first, and then on the master node.
There have been no changes made to the way clients should connect to the engine, or any changes to the querying mode or behavior of queries.
Kill-lists have been redesigned in Manticore Search 3. In previous versions, kill-lists were applied to the result set provided by each previously searched table at query time.
Thus, in 2.x, the table order at query time mattered. For example, if a delta table had a kill-list, in order to apply it against the main table, the order had to be main, delta (either in a distributed table or in the FROM clause).
In Manticore 3, kill-lists are applied to a table when it's loaded during searchd startup or when it gets rotated. The new directive killlist_target in the table configuration specifies the target tables and defines which document ids from the source table should be used for suppression: the ids from the defined kill-list, the actual doc ids of the table, or both.
Documents from the kill-lists are deleted from the target tables and are not returned in results, even if the search doesn't include the table that provided the kill-list. Because of that, the order of tables at search time no longer matters. Now delta, main and main, delta will provide the same results.
In previous versions, tables were rotated following the order in the configuration file. In Manticore 3, the table rotation order is smarter and works in accordance with kill-list targets. Before starting to rotate tables, the server looks for chains of tables formed by killlist_target definitions. It will first rotate tables that are not referenced anywhere as kill-list targets. Next, it will rotate tables targeted by the already rotated tables, and so on. For example, if we run indexer --all and we have 3 tables: main, delta_big (which targets main) and delta_small (which targets delta_big), then delta_small is rotated first, then delta_big, and finally main. This ensures that when a dependent table is rotated, it receives the most up-to-date kill-list from the other tables.
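As a sketch, the chain from the example above could look like this in a plain-table configuration (the paths, source names and the :kl mode are illustrative; see the killlist_target documentation for the full syntax):

```ini
table main {
    type = plain
    path = /var/lib/manticore/main
    source = src_main
}

table delta_big {
    type = plain
    path = /var/lib/manticore/delta_big
    source = src_delta_big
    # suppress documents in main using delta_big's kill-list
    killlist_target = main:kl
}

table delta_small {
    type = plain
    path = /var/lib/manticore/delta_small
    source = src_delta_small
    # suppress documents in delta_big using delta_small's kill-list
    killlist_target = delta_big:kl
}
```

With this chain, indexer --all rotates delta_small first, then delta_big, then main, as described above.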
- docinfo - everything is now extern
- inplace_docinfo_gap - not needed anymore
- mva_updates_pool - MVAs no longer have a dedicated pool for updates, as they can now be updated directly in the blob (see below)
String, JSON, and MVA attributes can be updated in Manticore 3.x using the UPDATE statement.
In 2.x, updating string attributes required a REPLACE, for JSON it was only possible to update scalar properties (as they were fixed-width), and MVAs could be updated using the MVA pool. Now updates are performed directly on the blob component. One setting that may require tuning is attr_update_reserve, which allows changing the extra space allocated at the end of the blob, used to avoid frequent resizes in case the new values are bigger than the existing values in the blob.
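As an illustration, such updates can now be issued as ordinary UPDATE statements; the table and attribute names below are hypothetical:

```sql
-- updating a string attribute in place (in 2.x this required a full REPLACE)
UPDATE products SET brand='Acme' WHERE id=1;

-- updating an MVA attribute directly in the blob
UPDATE products SET tags=(101,102,103) WHERE id=1;
```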
Doc ids used to be UNSIGNED 64-bit integers. Now they are POSITIVE SIGNED 64-bit integers.
Read here about the RT mode
Manticore 3.x recognizes and parses special suffixes, which makes it easier to use numeric values with special meaning. The common form is an integer number plus a literal, like 10k or 100d, but not 40.3s (since 40.3 is not an integer), nor 2d 4h (since it contains two values instead of one). Literals are case-insensitive, so 10W is the same as 10w. There are 2 types of such suffixes currently supported:
- Size suffixes - can be used in parameters that define the size of something (a memory buffer, a disk file, a limit of RAM, etc.) in bytes. "Naked" numbers in those places literally mean size in bytes (octets). Size values take the suffix k for kilobytes (1k=1024), m for megabytes (1m=1024k), g for gigabytes (1g=1024m) and t for terabytes (1t=1024g).
- Time suffixes - can be used in parameters defining time interval values like delays, timeouts, etc. "Naked" values for those parameters usually have a documented scale, and you must know whether a number, say 100, means '100 seconds' or '100 milliseconds'. Instead of guessing, you can simply write a suffixed value, and its meaning is fully determined by the suffix. Time values take the suffix us for useconds (microseconds), ms for milliseconds, s for seconds, m for minutes, h for hours, d for days and w for weeks.
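For instance, in a configuration file the same value can be written either as a plain number or with a suffix. The snippet below is only a sketch; whether a given option takes a size or a time value is stated in that option's documentation:

```ini
# size suffix: 512m = 512 * 1024 * 1024 bytes
rt_mem_limit = 512m

# time suffix: explicitly 5 seconds rather than a bare number
network_timeout = 5s
```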
index_converter is a tool for converting tables created with Sphinx/Manticore Search 2.x to the Manticore Search 3.x table format. The tool can be used in several different ways:
$ index_converter --config /home/myuser/manticore.conf --index tablename
$ index_converter --config /home/myuser/manticore.conf --all
$ index_converter --path /var/lib/manticoresearch/data --all
The new version of the table is written by default in the same folder. The previous version's files are saved with the
.old extension in their name. An exception is the
.spp (hitlists) file, which is the only table component that didn't have any changes in the new format.
You can save the new table version to a different folder using the --output-dir option:
$ index_converter --config /home/myuser/manticore.conf --all --output-dir /new/path
A special case is tables containing kill-lists. As the behaviour of kill-lists has changed (see killlist_target), the delta table needs to know which tables are the targets for applying its kill-list. There are 3 ways to have a converted table ready for setting target tables for applying kill-lists:
- Use --killlist-target when converting a table:
$ index_converter --config /home/myuser/manticore.conf --index deltaindex --killlist-target mainindex:kl
- Add killlist_target in the configuration before doing the conversion
- Use the ALTER ... KILLLIST_TARGET command after conversion
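For example, the last approach could look like the statement below, reusing the table names from the conversion example above (the exact target syntax follows the killlist_target documentation):

```sql
ALTER TABLE deltaindex KILLLIST_TARGET='mainindex:kl';
```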
Here's the complete list of index_converter options:
- --config <file> (or -c <file> for short) tells index_converter to use the given file as its configuration. Normally, it will look for manticore.conf in the installation directory (e.g. /usr/local/manticore/etc/manticore.conf if installed into /usr/local/sphinx), followed by the current directory you are in when calling index_converter from the shell.
- --index specifies which table should be converted
- --path - instead of using a config file, a path containing table(s) can be used
- --strip-path - strips paths from filenames referenced by the table: stopwords, exceptions and wordforms
- --large-docid - allows converting documents with ids larger than 2^63 and displays a warning; otherwise the tool just exits with an error on such a large id. This option was added because in Manticore 3.x doc ids are signed bigint, while previously they were unsigned
- --output-dir <dir> - writes the new files to a chosen folder rather than the same location as the existing table files. When this option is set, the existing table files remain untouched at their location.
- --all - converts all tables from the config
- --killlist-target <targets> - sets the target tables for which kill-lists will be applied. This option should be used only in conjunction with the --index option.
You can install and start Manticore easily on various operating systems, including Ubuntu, CentOS, Debian, Windows, and macOS. Additionally, you can also use Manticore as a Docker container.
wget https://repo.manticoresearch.com/manticore-repo.noarch.deb
sudo dpkg -i manticore-repo.noarch.deb
sudo apt update
sudo apt install manticore manticore-columnar-lib
sudo systemctl start manticore
By default, Manticore listens for your connections on:
- port 9306 for MySQL clients
- port 9308 for HTTP/HTTPS connections
- port 9312 for connections from other Manticore nodes and clients based on Manticore binary API
mysql -h0 -P9306
Let's now create a table called "products" with 2 fields:
- title - full-text field which will contain our product's title
- price - of type "float"
Note that it is possible to omit creating a table with an explicit create statement. For more information, see Auto schema.
create table products(title text, price float) morphology='stem_en';
Query OK, 0 rows affected (0.02 sec)
insert into products(title,price) values ('Crossbody Bag with Tassel', 19.85), ('microfiber sheet set', 19.99), ('Pet Hair Remover Glove', 7.99);
Query OK, 3 rows affected (0.01 sec)
Let's find one of the documents. The query we will use is 'remove hair'. As you can see, it finds a document with the title 'Pet Hair Remover Glove' and highlights 'Hair Remover' in it, even though the query has "remove", not "remover". This is because when we created the table, we enabled English stemming (morphology='stem_en').
select id, highlight(), price from products where match('remove hair');
+---------------------+-----------------------------------------+----------+
| id                  | highlight()                             | price    |
+---------------------+-----------------------------------------+----------+
| 1513686608316989452 | Pet <strong>Hair Remover</strong> Glove | 7.990000 |
+---------------------+-----------------------------------------+----------+
1 row in set (0.00 sec)
Let's assume we now want to update the document - change the price to 18.5. This can be done by filtering by any field, but normally you know the document id and update something based on that.
update products set price=18.5 where id = 1513686608316989452;
Query OK, 1 row affected (0.00 sec)
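To verify the change, the document can be selected again by its id (the id here is the one from the earlier search result):

```sql
select id, price from products where id = 1513686608316989452;
```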