▪️ Server settings
The settings below are to be used in the
searchd section of the configuration file and control the Manticore Search server behaviour.
Instance-wide defaults for access_plain_attrs. Optional, default value is
This directive lets you specify the default value of access_plain_attrs for all indexes served by this copy of searchd. Per-index directives take precedence and will override this instance-wide default value, allowing for fine-grained control.
Instance-wide defaults for access_blob_attrs. Optional, default value is
This directive lets you specify the default value of access_blob_attrs for all indexes served by this copy of searchd. Per-index directives take precedence and will override this instance-wide default value, allowing for fine-grained control.
Instance-wide defaults for access_doclists. Optional, default value is
This directive lets you specify the default value of access_doclists for all indexes served by this copy of searchd. Per-index directives take precedence and will override this instance-wide default value, allowing for fine-grained control.
Instance-wide defaults for access_hitlists. Optional, default value is
This directive lets you specify the default value of access_hitlists for all indexes served by this copy of searchd. Per-index directives take precedence and will override this instance-wide default value, allowing for fine-grained control.
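As an illustrative sketch of how an instance-wide default interacts with a per-index setting (the index name and the mmap_preread/mmap values here are assumptions, not taken from this section):

```ini
searchd {
    # instance-wide default for all served indexes
    access_plain_attrs = mmap_preread
}

index products {
    # per-index directive takes precedence over the searchd default
    access_plain_attrs = mmap
}
```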
Instance-wide default for the agent_connect_timeout parameter. The last value defined in distributed (network) indexes takes precedence.
Instance-wide default for the agent_query_timeout parameter. The last value defined in distributed (network) indexes takes precedence; it may also be overridden on a per-query basis.
Integer; specifies how many times Manticore will try to connect and query remote agents in a distributed index before reporting a fatal query error. Default is 0 (i.e. no retries). This value may also be specified on a per-query basis using the
OPTION retry_count=XXX clause. If a per-query option exists, it will override the one specified in the config.
Note that if you use agent mirrors in the definition of your distributed index, then before every connection attempt the server will select a different mirror, according to the specified ha_strategy. In this case
agent_retry_count is aggregated across all mirrors in a set.
For example, if you have 10 mirrors and set
agent_retry_count=5, the server will retry up to 50 times, averaging 5 tries per each of the 10 mirrors (with option
ha_strategy = roundrobin it will be exactly so).
At the same time, the value provided as the
retry_count option of an agent definition serves as an absolute limit. In other words, a
[retry_count=2] option in an agent definition always means at most 2 tries, no matter whether you have 1 or 10 mirrors in the line.
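For instance, a per-query override of the configured retry count might look like this (the distributed index name dist_idx is a placeholder):

```sql
SELECT * FROM dist_idx WHERE MATCH('test') OPTION retry_count=2;
```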
Integer, in milliseconds (or special_suffixes). Specifies the delay before retrying to query a remote agent in case it fails. The value only makes sense if a non-zero agent_retry_count or a non-zero per-query
retry_count is specified. Default is 500. This value may also be specified on a per-query basis using the
OPTION retry_delay=XXX clause. If a per-query option exists, it will override the one specified in the config.
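A sketch of the two retry settings together, assuming the searchd-section directive is named agent_retry_delay (the heading is not shown in this excerpt):

```ini
agent_retry_count = 3
agent_retry_delay = 300 # wait 300 ms between the 3 retries
```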
When calling UPDATE to update document attributes in real time, the changes are first written to an in-memory copy of the attributes. The updates are done in a memory-mapped file, which means that the OS decides when to write these changes to disk. Once
searchd shuts down normally (via
SIGTERM being sent), it forces writing all the changes to disk.
It is also possible to tell
searchd to periodically write these changes back to disk to avoid them being lost. The interval between such writes is set with
attr_flush_period, in seconds (or special_suffixes).
It defaults to 0, which disables periodic flushing, but flushing will still occur at normal shutdown.
attr_flush_period = 900 # persist updates to disk every 15 minutes
Binary log transaction flush/sync mode. Optional, default is 2 (flush every transaction, sync every second).
This directive controls how frequently the binary log will be flushed to the OS and synced to disk. Three modes are supported:
- 0, flush and sync every second. Best performance, but up to 1 second's worth of committed transactions can be lost on a server crash or an OS/hardware crash.
- 1, flush and sync every transaction. Worst performance, but every committed transaction's data is guaranteed to be saved.
- 2, flush every transaction, sync every second. Good performance, and every committed transaction is guaranteed to be saved in case of a server crash. However, in case of an OS/hardware crash up to 1 second's worth of committed transactions can be lost.
For those familiar with MySQL and InnoDB, this directive is entirely similar to
innodb_flush_log_at_trx_commit. In most cases, the default hybrid mode 2 provides a nice balance of speed and safety, with full RT index data protection against server crashes, and some protection against hardware ones.
binlog_flush = 1 # ultimate safety, low speed
Maximum binary log file size. Optional, default is 268435456, or 256Mb.
A new binlog file will be forcibly opened once the current binlog file reaches this limit. This achieves a finer granularity of logs and can yield more efficient binlog disk usage under certain borderline workloads. 0 means do not reopen binlog file based on size.
binlog_max_log_size = 16M
Binary log (aka transaction log) files path. Optional, default is build-time configured data directory.
Binary logs are used for crash recovery of RT index data, and also of attributes updates of plain disk indices that would otherwise only be stored in RAM until flush. When logging is enabled, every transaction COMMIT-ted into RT index gets written into a log file. Logs are then automatically replayed on startup after an unclean shutdown, recovering the logged changes.
binlog_path directive specifies the binary log files location. It should contain just the path;
searchd will create and unlink multiple binlog.* files in that path as necessary (binlog data, metadata, and lock files, etc).
Empty value disables binary logging. That improves performance, but puts RT index data at risk.
WARNING: It is strongly recommended to always explicitly define the 'binlog_path' option in your config. Otherwise, the default path, which in most cases is the same as the working folder, may point to a folder with no write access (for example, /usr/local/var/data). In this case searchd will not start at all.
binlog_path = # disable logging
binlog_path = /var/data # /var/data/binlog.001 etc will be created
Maximum time to wait between requests (in seconds or special_suffixes) when using persistent connections. Optional, default is five minutes.
client_timeout = 1h
Server libc locale. Optional, default is C.
Specifies the libc locale, affecting the libc-based collations. Refer to collations section for the details.
collation_libc_locale = fr_FR
Default server collation. Optional, default is libc_ci.
Specifies the default collation used for incoming requests. The collation can be overridden on a per-query basis. Refer to collations section for the list of available collations and other details.
collation_server = utf8_ci
Path to the directory for replication internal files. Optional.
In this directory the server stores replication meta information and state, such as cluster descriptions and the list of indexes replicated to the current node, in a
manticore.json file, and uses it as the default directory for cluster contents.
data_dir = /var/manticore
Maximum size of document blocks from document storage that are held in memory. Optional, default is 16m (16 megabytes).
When stored_fields is used, document blocks are read from disk and uncompressed. Since every block typically holds several documents, it may be reused when processing the next document. For this purpose, the block is held in a server-wide cache. The cache holds uncompressed blocks.
docstore_cache_size = 8m
The maximum number of expanded keywords for a single wildcard. Optional, default is 0 (no limit).
When doing substring searches against indexes built with
dict = keywords enabled, a single wildcard may potentially result in thousands and even millions of matched keywords (think of matching 'a*' against the entire Oxford dictionary). This directive lets you limit the impact of such expansions. Setting
expansion_limit = N restricts expansions to no more than N of the most frequent matching keywords (per each wildcard in the query).
expansion_limit = 16
Specifies whether timed grouping in API and SQL will be calculated in the local timezone or in UTC. Optional, default is 0 (meaning 'local tz').
By default, all 'group by time' expressions (like group by day, week, month and year in the API, and also group by day, month, year, yearmonth, yearmonthday in SQL) are done using local time. I.e., when you have docs with attributes timestamped
13:00 utc and
15:00 utc, in case of grouping they both will fall into the same facility group according to your local tz setting. Say, if you live in
utc, it will be one day, but if you live in
utc+10, then these docs will fall into different
'group by day' facility groups (since 13:00 utc in the UTC+10 tz is 23:00 local time, but 15:00 is 01:00 of the next day). Sometimes such behavior is unacceptable, and it is desirable to make time grouping independent of the timezone. Of course, you can run the server with a globally defined TZ environment variable, but it will affect not only grouping but also timestamping in the logs, which may also be undesirable. Switching this option 'on' (either in the config, or using the SET GLOBAL statement in SQL) will cause all time grouping expressions to be calculated in UTC, leaving the rest of the time-dependent functions (i.e. logging of the server) in the local TZ.
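Assuming the directive is named grouping_in_utc (the heading is not shown in this excerpt), switching to UTC-based grouping might look like:

```ini
grouping_in_utc = 1
```

or at runtime:

```sql
SET GLOBAL grouping_in_utc = 1;
```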
Agent mirror statistics window size, in seconds (or special_suffixes). Optional, default is 60.
For a distributed index with agent mirrors in it (see more in agent), the master tracks several different per-mirror counters. These counters are then used for failover and balancing (the master picks the best mirror to use based on the counters). Counters are accumulated in blocks of ha_period_karma seconds.
After beginning a new block, the master may still use the accumulated values from the previous one, until the new one is half full. Thus, any previous history stops affecting the mirror choice after at most 1.5 times ha_period_karma seconds.
Although at most 2 blocks are used for mirror selection, up to 15 last blocks are actually stored, for instrumentation purposes. They can be inspected using the SHOW AGENT STATUS statement.
ha_period_karma = 2m
Interval between agent mirror pings, in milliseconds (or special_suffixes). Optional, default is 1000.
For a distributed index with agent mirrors in it (see more in agent), master sends all mirrors a ping command during the idle periods. This is to track the current agent status (alive or dead, network roundtrip, etc). The interval between such pings is defined by this directive. To disable pings, set ha_ping_interval to 0.
ha_ping_interval = 3s
Hostname renewal strategy. By default, IP addresses of agent host names are cached at server start to avoid extra DNS traffic. In some cases the IP can change dynamically (e.g. cloud hosting) and it might be desirable not to cache the IPs. Setting this option to 'request' disables the caching and queries the DNS on each query. The IP addresses can also be manually renewed with the
FLUSH HOSTNAMES command.
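A sketch, assuming the directive is named hostname_lookup:

```ini
hostname_lookup = request # re-resolve agent hostnames on every query
```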
Defines how many "jobs" can be in the queue at the same time. Unlimited by default.
In most cases a "job" means one query to a single local index (a plain index or a disk chunk of a real-time index). I.e., if you have a distributed index consisting of 2 local indexes, or a real-time index which has 2 disk chunks, a search query to either of them will mostly put 2 jobs into the queue, and then the thread pool (whose size is defined by threads) will process them. In some cases, if the query is too complex, more jobs can be created. Changing this setting is recommended when max_connections and threads are not enough to find a balance between the desired performance and the load on the server.
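A sketch, assuming the directive is named jobs_queue_size:

```ini
jobs_queue_size = 100 # fail further queries once 100 jobs are queued
```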
TCP listen backlog. Optional, default is 5.
Windows builds currently can only process requests one by one. Concurrent requests will be enqueued by the TCP stack at the OS level, and requests that cannot be enqueued will immediately fail with a "connection refused" message. The listen_backlog directive controls the length of the connection queue. Non-Windows builds should work fine with the default value.
listen_backlog = 20
This setting lets you specify IP address and port, or Unix-domain socket path, that Manticore will accept connections on.
The general syntax for listen is:
listen = ( address ":" port | port | path | address ":" port start - port end ) [ ":" protocol [ "_vip" ]]
You can specify:
- either an IP address (or hostname) and a port number
- or just a port number
- or Unix socket path
- or an IP address and ports range
If you specify a port number but not an address,
searchd will listen on all network interfaces. A Unix path is identified by a leading slash. A ports range can be set only for the replication protocol.
You can also specify a protocol handler (listener) to be used for connections on this socket. The listeners are:
- Not specified - Manticore will accept connections at this port from:
  - other Manticore agents (i.e. a remote distributed index)
  - clients via HTTP and HTTPS
- mysql - MySQL protocol for connections from MySQL clients. Note:
  - The compressed protocol is also supported.
  - If SSL is enabled, you can make an encrypted connection.
- replication - replication protocol, used for node communication. More details can be found in the replication section.
- http - same as Not specified. Manticore will accept connections at this port from remote agents and clients via HTTP and HTTPS.
- https - HTTPS protocol. Manticore will accept only HTTPS connections at this port. More details can be found in the section SSL.
- sphinx - legacy binary protocol. Used to serve connections from remote SphinxSE clients. Some Sphinx API client implementations (e.g. the Java one) require the explicit declaration of the listener.

Adding
_vip to any protocol (for instance
http_vip or just
_vip) forces creating a dedicated thread for the connection to bypass different limitations. That's useful for node maintenance in case of a severe overload, when the server would otherwise either stall or not let you connect via a regular port.
listen = localhost
listen = localhost:5000 # listen for remote agents and http/https requests on port 5000 at localhost
listen = 192.168.0.1:5000
listen = /var/run/sphinx.s
listen = 9312
listen = localhost:9306:mysql
listen = 127.0.0.1:9308:http
listen = 192.168.0.1:9320-9328:replication
listen = 127.0.0.1:9443:https
listen = 127.0.0.1:9312:sphinx
There can be multiple listen directives;
searchd will listen for client connections on all specified ports and sockets. The default config provided in Manticore packages defines listening on ports 9308 and 9312 for connections from remote agents and non-MySQL clients, and on port 9306 for MySQL connections. If no
listen directives are found, the server will listen on port 9312 for connections from remote agents and non-MySQL clients, and on port 9306 for MySQL connections.
Unix-domain sockets are not supported on Windows.
This setting allows the TCP_FASTOPEN flag for all listeners. By default it is managed by the system, but it may be explicitly switched off by setting it to '0'.
For general knowledge about the TCP Fast Open extension you can visit Wikipedia. In short, it allows eliminating one TCP round-trip when establishing a connection.
In practice, using TFO may in many situations optimize client-agent network efficiency, as if persistent agents were in play, but without holding active connections, and also without the limitation on the maximum number of connections.
On modern OSes TFO support is usually switched 'on' at the system level, but this is just a 'capability', not a rule. Linux (being the most progressive) has supported it since 2011, on kernels starting from 3.7 (for the server side). Windows supports it from some builds of Windows 10. Other systems (FreeBSD, MacOS) are also in the game.
For Linux systems the server checks the variable
/proc/sys/net/ipv4/tcp_fastopen and behaves according to it. Bit 0 manages the client side, bit 1 rules listeners. By default the system has this parameter set to 1, i.e. clients enabled, listeners disabled.
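Assuming the directive is named listen_tfo, disabling TCP Fast Open explicitly might look like:

```ini
listen_tfo = 0
```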
Log file name. Optional, default is 'searchd.log'. All
searchd run time events will be logged in this file.
Also, you can use 'syslog' as the file name. In this case the events will be sent to the syslog daemon. To use the syslog option, Manticore must be configured with
--with-syslog on building.
log = /var/log/searchd.log
Limits the amount of queries per batch. Optional, default is 32.
Makes searchd perform a sanity check of the amount of the queries submitted in a single batch when using multi-queries. Set it to 0 to skip the check.
max_batch_queries = 256
Number of working threads (or, size of thread pool) of Manticore daemon. Manticore creates this number of OS threads on start, and they perform all jobs inside the daemon such as executing queries, creating snippets, etc. Some operations may be split into sub-tasks and executed in parallel, for example:
- Search in a real-time index
- Search in a distributed index consisting of local indexes
- Percolate query call
- and others
By default it's set to the number of CPU cores on the server. Manticore creates the threads on start and keeps them until it's stopped. Each sub-task can use one of the threads when it needs it; when the sub-task finishes, it releases the thread so another sub-task can use it.
In case of an intensive I/O type of load, it might make sense to set the value higher than the number of CPU cores.
threads = 10
Instance-wide limit on the number of threads one operation can use. By default, appropriate operations can occupy all CPU cores, leaving no room for other operations. For example, a
CALL PQ against a considerably big percolate index can utilize all threads for tens of seconds. Setting
max_threads_per_query to, say, half of threads will ensure that you can run a couple of such
CALL PQ statements in parallel.
You can also set this as a session or a global variable at runtime.
Additionally, you can control the behaviour per-query with the help of the threads OPTION.
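For example, limiting a single query (the index name myindex is a placeholder):

```sql
SELECT * FROM myindex WHERE MATCH('test') OPTION threads=1;
```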
max_threads_per_query = 4
Maximum allowed per-query filter count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 256.
max_filters = 1024
Maximum allowed per-filter values count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 4096.
max_filter_values = 16384
Maximum number of files allowed to be opened by the server. Note that serving big fragmented RT indexes may require this limit to be high. Say, if every disk chunk occupies a dozen files, an RT index of 1000 chunks needs on the order of 12,000 files kept open simultaneously. So at some point you may face a 'Too many open files' error somewhere in the logs. In this case, try adjusting this option; it may help solve the problem.
Apart from this value (the so-called 'soft limit') there is also a 'hard limit', which cannot be exceeded by this option.
The hard limit is defined by the system and on Linux may be changed in the file
/etc/security/limits.conf. Other OSes use different approaches here; consult your manuals for details.
max_open_files = 10000
Apart from direct numeric values, you can use the magic word 'max' to set the limit equal to the currently available hard limit.
max_open_files = max
Maximum allowed network packet size. Limits both query packets from clients, and response packets from remote agents in distributed environment. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 8M.
max_packet_size = 32M
A server version string to return via MySQL protocol. Optional, default is empty (return Manticore version).
Several picky MySQL client libraries depend on a particular version number format used by MySQL, and moreover, sometimes choose a different execution path based on the reported version number (rather than the indicated capabilities flags). For instance, Python MySQLdb 1.2.2 throws an exception when the version number is not in X.Y.ZZ format; MySQL .NET connector 6.3.x fails internally on version numbers 1.x along with a certain combination of flags, etc. To work around that, you can use the
mysql_version_string directive and have
searchd report a different version to clients connecting over the MySQL protocol. (By default, it reports its own version.)
mysql_version_string = 5.0.37
Number of network threads, default is 1.
Useful for extremely high query rates, when just 1 thread is not enough to manage all the incoming queries.
Controls the busy loop interval of a network thread. Default is -1; it may be set to -1, 0, or a positive integer.
In cases where the server is configured as a pure master and routes requests to agents, it is important to handle requests without delays and not allow the network thread to sleep or be scheduled off the CPU. There is a busy loop for that. After an incoming request, the network thread uses a CPU poll for
10 * net_wait_tm milliseconds if
net_wait_tm is a positive number, or polls only with CPU if it is
0. The busy loop can also be disabled with
net_wait_tm = -1; this way the poller sets the timeout to the actual agents' timeouts on the system polling call.
WARNING: the CPU busy loop actually loads a CPU core, so setting this value to anything non-default will cause noticeable CPU usage even on an idle server.
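A sketch: with a positive value, the network thread busy-polls for 10 * net_wait_tm milliseconds after each request:

```ini
net_wait_tm = 10 # busy-poll for about 100 ms (10 * 10) after each incoming request
```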
Defines how many clients are accepted on each iteration of the network loop. Default is 0 (unlimited), which should be fine for most users. This is a fine tuning option to control the throughput of the network loop in high load scenarios.
Defines how many requests are processed on each iteration of the network loop. Default is 0 (unlimited), which should be fine for most users. This is a fine tuning option to control the throughput of the network loop in high load scenarios.
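Assuming the two directives are named net_throttle_accept and net_throttle_action (the headings are not shown in this excerpt), a sketch might be:

```ini
net_throttle_accept = 10  # accept at most 10 clients per network loop iteration
net_throttle_action = 50  # process at most 50 requests per network loop iteration
```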
This setting lets you specify the network address of the node. By default it is set to the replication listen address. That is correct in most cases; however, there are situations where you have to specify it manually:
- node behind a firewall
- network address translation enabled (NAT)
- container deployments, such as Docker or cloud deployments
- clusters with nodes in more than one region
node_address = 10.101.0.10
Whether to allow queries with only negation full-text operator. Optional, default is 0 (fail queries with only NOT operator).
not_terms_only_allowed = 1
Instance-wide defaults for the ondisk_attrs directive. Optional, default is 0 (all attributes are loaded in memory). This directive lets you specify the default value of ondisk_attrs for all indexes served by this copy of searchd. Per-index directives take precedence and will override this instance-wide default value, allowing for fine-grained control.
WARNING: The functionality of this directive is taken over by access_plain_attrs and access_blob_attrs directives as of 3.0.2. The option is marked as deprecated and will be removed in future versions.
The maximum number of simultaneous persistent connections to remote persistent agents. Each time we connect to an agent defined under 'agent_persistent', we try to reuse an existing connection (if any), or connect and save the connection for the future. However, we can't hold an unlimited number of such persistent connections, since each one holds a worker on the agent side (and eventually we'll get a 'maxed out' error when all of them are busy). This directive limits that number. It affects the number of connections to each agent's host, across all distributed indexes.
It is reasonable to set the value equal to or less than the max_connections option of the agents.
persistent_connections_limit = 29 # assume that each host of agents has max_connections = 30 (or 29).
searchd process ID file name. Mandatory.
The PID file will be re-created (and locked) on startup. It will contain the head server process ID while the server is running, and it will be unlinked on server shutdown. It's mandatory because Manticore uses it internally for a number of things: to check whether there already is a running instance of
searchd; to stop
searchd; and to notify it that it should rotate the indexes. It can also be used by external automation scripts.
pid_file = /var/run/searchd.pid
Costs for the query time prediction model, in nanoseconds. Optional, default is "doc=64, hit=48, skip=2048, match=64" (without the quotes).
predicted_time_costs = doc=128, hit=96, skip=4096, match=128
Terminating queries before completion based on their execution time (with the max query time setting, i.e.
SELECT … OPTION max_query_time) is a nice safety net, but it comes with an inborn drawback: non-deterministic (unstable) results. That is, if you repeat the very same (complex) search query with a time limit several times, the time limit will get hit at different stages, and you will get different result sets.
There is another option, SELECT … OPTION max_predicted_time, that lets you limit the query time and get stable, repeatable results. Instead of regularly checking the actual current time while evaluating the query, which is non-deterministic, it predicts the current running time using a simple linear model:
predicted_time = doc_cost * processed_documents + hit_cost * processed_hits + skip_cost * skiplist_jumps + match_cost * found_matches
The query is then terminated early when the
predicted_time reaches a given limit.
Of course, this is not a hard limit on the actual time spent (it is, however, a hard limit on the amount of processing work done), and a simple linear model is in no way an ideally precise one. So the wall clock time may be either below or over the target limit. However, the error margins are quite acceptable: for instance, in our experiments with a 100 msec target limit the majority of the test queries fell into a 95 to 105 msec range, and all of the queries were in an 80 to 120 msec range. Also, as a nice side effect, using the modeled query time instead of measuring the actual run time results in somewhat fewer gettimeofday() calls, too.
No two server makes and models are identical, so the
predicted_time_costs directive lets you configure the costs for the model above. For convenience, they are integers counted in nanoseconds. (The limit in max_predicted_time is counted in milliseconds, and having to specify cost values as 0.000128 ms instead of 128 ns would be somewhat more error-prone.) It is not necessary to specify all 4 costs at once; the missing ones will take the default values. However, we strongly suggest specifying all of them, for readability.
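As a worked example with the default costs: a query that processes 1,000,000 documents and 500,000 hits, makes 1,000 skiplist jumps and finds 10,000 matches would be predicted to take 64·1000000 + 48·500000 + 2048·1000 + 64·10000 = 90,688,000 ns, i.e. roughly 91 ms, so it would be cut short by a limit such as the following (the index name myindex is a placeholder):

```sql
SELECT * FROM myindex WHERE MATCH('test') OPTION max_predicted_time=80; -- limit in milliseconds
```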
Whether to forcibly preopen all indexes on startup. Optional, default is 1 (preopen everything).
When set to 1, this directive overrides and enforces preopening on all indexes. They will be preopened no matter what the per-index
preopen setting is. When set to 0, per-index settings can take effect. (And they default to 0.)
Pre-opened indexes avoid races between search queries and rotations that can cause queries to fail occasionally. They also make
searchd use more file handles. In most scenarios it's therefore preferred and recommended to preopen indexes.
preopen_indexes = 1
Integer, in bytes. The maximum RAM allocated for cached result sets. Default is 16777216, or 16Mb. 0 means disabled. Refer to query cache for details.
qcache_max_bytes = 16777216
Integer, in milliseconds. The minimum wall time threshold for a query result to be cached. Defaults to 3000, or 3 seconds. 0 means cache everything. Refer to query cache for details. This value may also be expressed with time special_suffixes, but use them with care: don't be confused by the name of the directive itself, which contains '_msec'.
Integer, in seconds. The expiration period for a cached result set. Defaults to 60, or 1 minute. The minimum possible value is 1 second. Refer to query cache for details. This value may also be expressed with time special_suffixes, but use them with care: don't be confused by the name of the directive itself, which contains '_sec'.
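Assuming the directives are named qcache_thresh_msec and qcache_ttl_sec (consistent with the '_msec'/'_sec' naming mentioned above), a sketch of the three query cache settings together:

```ini
qcache_max_bytes   = 16777216 # 16 MB for cached result sets
qcache_thresh_msec = 1000     # only cache queries slower than 1 second
qcache_ttl_sec     = 300      # cached results expire after 5 minutes
```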
Query log format. Optional, allowed values are 'plain' and 'sphinxql', default is 'plain'.
The default one logs queries in a custom text format; 'sphinxql' logs valid SQL statements. This directive allows switching between the two formats on search server startup. The log format can also be altered on the fly, using the
SET GLOBAL query_log_format=sphinxql syntax. Refer to Query logging for more discussion and format details.
query_log_format = sphinxql
Limit (in milliseconds) that prevents a query from being written to the query log. Optional, default is 0 (all queries are written to the query log). This directive specifies that only queries with execution times exceeding the specified limit will be logged (this value may also be expressed with time special_suffixes, but use them with care: don't be confused by the name of the directive itself, which contains '_msec').
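Assuming the directive is named query_log_min_msec, a sketch:

```ini
query_log_min_msec = 1000 # only log queries taking longer than 1 second
```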
Query log file name. Optional, default is empty (do not log queries). All search queries will be logged in this file. The format is described in Query logging. In case of 'plain' format, you can use the 'syslog' as the path to the log file. In this case all search queries will be sent to syslog daemon with
LOG_INFO priority, prefixed with '[query]' instead of a timestamp. To use the syslog option, Manticore must be configured with
--with-syslog on building.
query_log = /var/log/query.log
By default the searchd and query log files are created with 600 permissions, so only the user under which the server runs, and root, can read the log files. query_log_mode allows setting different permissions. This can be handy to allow other users to read the log files (for example, monitoring solutions running under non-root users).
query_log_mode = 666
Maximum number of simultaneous client connections. Unlimited by default. This is usually noticeable only when using any kind of persistent connections, like CLI MySQL sessions or persistent remote connections from remote distributed indexes. When the limit is exceeded, you can still connect to the server using a VIP connection.
max_connections = 10
Network client request read/write timeout, in seconds (or special_suffixes). Optional, default is 5 seconds.
searchd will forcibly close a client connection which fails to send a query or read a result within this timeout.
read_timeout = 1
Per-keyword read buffer size. Optional, default is 256K.
For every keyword occurrence in every search query, there are two associated read buffers (one for the document list and one for the hit list). This setting lets you control their sizes, increasing per-query RAM use but possibly decreasing I/O time. The minimal value is 8K. Apart from the general size, you may also tune the buffers for document lists and hit lists individually, using the read_buffer_docs and read_buffer_hits parameters.
read_buffer = 1M
Per-keyword read buffer size for document lists. Optional, default is 256K, minimal is 8K.
This is the same as read_buffer, but manages the size for document lists only. If both parameters exist,
read_buffer_docs overrides the more general
read_buffer. You may also set read_buffer_docs on a per-index basis; that value will override anything set at the server's config level.
read_buffer_docs = 128K
Per-keyword read buffer size for hit lists. Optional, default is 256K, minimal is 8K.
This is the same as read_buffer, but manages the size for hit lists only. If both parameters exist,
read_buffer_hits overrides the more general
read_buffer. You may also set read_buffer_hits on a per-index basis; that value will override anything set at the server's config level.
read_buffer_hits = 100M
Unhinted read size. Optional, default is 32K, minimal is 1K.
When querying, some reads know in advance exactly how much data is there to be read, but some currently do not. Most prominently, the hit list size is not currently known in advance. This setting lets you control how much data to read in such cases. It impacts hit list I/O time, reducing it for lists larger than the unhinted read size, but raising it for smaller lists. It does not affect RAM usage because the read buffer will already be allocated. So it should not be greater than read_buffer.
read_unhinted = 32K
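Since unhinted reads happen inside the already-allocated read buffer, read_unhinted is kept at or below read_buffer; a sketch with illustrative values:

```ini
searchd
{
    read_buffer   = 256K
    # must not exceed read_buffer; the buffer is already allocated anyway
    read_unhinted = 64K
}
```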
RT indexes RAM chunk flush check period, in seconds (or special_suffixes). Optional, default is 10 hours.
RT indexes that are actively updated, but that nevertheless fully fit in their RAM chunks, can result in ever-growing binlogs, impacting disk use and crash recovery time. With this directive the search server performs periodic flush checks, and eligible RAM chunks can get saved, enabling consequent binlog cleanup. See Binary logging for more details.
rt_flush_period = 3600 # 1 hour
A maximum number of I/O operations (per second) that the RT chunks merge thread is allowed to start. Optional, default is 0 (no limit).
This directive lets you throttle down the I/O impact arising from the
OPTIMIZE statements. It is guaranteed that all the RT optimization activity will not generate more disk iops (I/Os per second) than the configured limit. Limiting rt_merge_iops can reduce search performance degradation caused by merging.
rt_merge_iops = 40
A maximum size of an I/O operation that the RT chunks merge thread is allowed to start. Optional, default is 0 (no limit).
This directive lets you throttle down the I/O impact arising from the
OPTIMIZE statements. I/Os bigger than this limit will be broken down into 2 or more I/Os, which will then be accounted as separate I/Os with regards to the rt_merge_iops limit. Thus, it is guaranteed that all the optimization activity will not generate more than (rt_merge_iops * rt_merge_maxiosize) bytes of disk I/O per second.
rt_merge_maxiosize = 1M
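The two merge throttles combine into a byte-per-second cap: with the illustrative values below, optimization I/O cannot exceed 40 * 1M = 40 MB per second.

```ini
searchd
{
    rt_merge_iops      = 40   # at most 40 merge I/Os per second
    rt_merge_maxiosize = 1M   # each I/O at most 1M; bigger ones are split
    # combined cap: 40 iops * 1M = 40 MB/s of merge disk I/O
}
```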
Prevents searchd stalls while rotating indexes with huge amounts of data to precache. Optional, default is 1 (enable seamless rotation). On Windows systems seamless rotation is disabled by default.
Indexes may contain some data that needs to be precached in RAM. At the moment,
.spa, .spb, .spi and .spm files are fully precached (they contain attribute data, blob attribute data, keyword index and killed row map, respectively). Without seamless rotate, rotating an index tries to use as little RAM as possible and works as follows:
- new queries are temporarily rejected (with "retry" error code);
- searchd waits for all currently running queries to finish;
- old index is deallocated and its files are renamed;
- new index files are renamed and required RAM is allocated;
- new index attribute and dictionary data is preloaded to RAM;
- searchd resumes serving queries from the new index.
However, if there's a lot of attribute or dictionary data, then preloading step could take noticeable time - up to several minutes in case of preloading 1-5+ GB files.
With seamless rotate enabled, rotation works as follows:
- new index RAM storage is allocated;
- new index attribute and dictionary data is asynchronously preloaded to RAM;
- on success, old index is deallocated and both indexes' files are renamed;
- on failure, new index is deallocated;
- at any given moment, queries are served either from old or new index copy.
Seamless rotate comes at the cost of higher peak memory usage during the rotation (because both old and new copies of
.spa/.spb/.spi/.spm data need to be in RAM while preloading new copy). Average usage stays the same.
seamless_rotate = 1
Integer number that serves as the server identifier, used as a seed to generate a unique short UUID for nodes that are part of a replication cluster. The server_id must be unique across the nodes of a cluster and in the range from 0 to 127. If server_id is not set, the MAC address or a random number will be used as the seed for the short UUID.
server_id = 1
searchd --stopwait waiting time, in seconds (or special_suffixes). Optional, default is 3 seconds.
When you run searchd --stopwait, your server needs to perform some activities before stopping, like finishing queries, flushing the RT RAM chunk, flushing attributes and updating the binlog, and this requires some time. searchd --stopwait will wait up to shutdown_timeout seconds for the server to finish its jobs. The suitable time depends on your index size and load.
shutdown_timeout = 1m # wait for up to 60 seconds
SHA1 hash of the password which is required to invoke the 'shutdown' command from a VIP Manticore SQL connection. Without it, the debug 'shutdown' subcommand will never stop the server. Note that such simple hashing should not be considered strong protection, as we don't use a salted hash or any kind of modern hash function; it is just foolproofing for housekeeping daemons in a local network.
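The directive takes the hex SHA1 digest of the chosen password; a sketch (the password and the way of producing the digest, via the standard sha1sum utility, are illustrative):

```ini
searchd
{
    # produce the digest with, e.g.: echo -n 'mysecret' | sha1sum
    shutdown_token = <40-character hex SHA1 digest of the password>
}
```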
A prefix to prepend to the local file names when generating snippets. Optional, default is current working folder.
This prefix can be used in distributed snippets generation along with
Note that this is a prefix, not a path! Meaning that if the prefix is set to "server1" and the request refers to "file23",
searchd will attempt to open "server1file23" (all of that without quotes). So if you need it to be a path, you have to include the trailing slash.
After constructing the final file path, the server unwinds all relative dirs and compares the final result with the value of
snippets_file_prefix. If the result does not begin with the prefix, such a file will be rejected with an error message.
So, if you set it to '/mnt/data' and somebody calls snippet generation with the file '../../../etc/passwd' as the source, they will get the error message
File '/mnt/data/../../../etc/passwd' escapes '/mnt/data/' scope
instead of the contents of the file.
Also, if the parameter is not set, reading '/etc/passwd' will actually read /daemon/working/folder/etc/passwd, since the default for the param is exactly the server's working folder.
Note also that this is a local option; it does not affect the agents in any way. So you can safely set a prefix on a master server. The requests routed to the agents will not be affected by the master's setting. They will however be affected by the agent's own settings.
This might be useful, for instance, when the document storage locations (be those local storage or NAS mountpoints) are inconsistent across the servers.
snippets_file_prefix = /mnt/common/server1/
WARNING: If you still want to access files from the FS root, you have to explicitly set
snippets_file_prefix to an empty value (by a snippets_file_prefix= line), or to the root (by a snippets_file_prefix=/ line).
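So, to deliberately allow absolute paths, the prefix can be set to the root; a sketch:

```ini
searchd
{
    # allow reading from the FS root -- use with care
    snippets_file_prefix = /
}
```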
Path to a file where current SQL state will be serialized.
On server startup, this file gets replayed. On eligible state changes (e.g. SET GLOBAL), this file gets rewritten automatically. This can prevent a hard-to-diagnose problem: if you load UDF functions but Manticore crashes, then when it gets (automatically) restarted, your UDFs and global variables will no longer be available. Using persistent state helps ensure a graceful recovery with no such surprises.
sphinxql_state = uservars.sql
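The persisted state is replayed as regular SQL statements on startup; the lines below are an illustrative sketch of what such a file may contain (variable, function, and library names are made up):

```sql
SET GLOBAL @age_ranges = (25, 35, 45);
CREATE FUNCTION myudf RETURNS INT SONAME 'udfexample.so';
```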
Maximum time to wait between requests (in seconds, or special_suffixes) when using SQL interface. Optional, default is 15 minutes.
sphinxql_timeout = 15m
Path to the SSL Certificate Authority (CA) certificate file (aka root certificate). Optional, default is empty. When not empty the certificate in
ssl_cert should be signed by this root certificate.
Server uses the CA file to verify the signature on the certificate. The file must be in PEM format.
ssl_ca = keys/ca-cert.pem
Path to the server's SSL certificate. Optional, default is empty.
Server uses this certificate as a self-signed public key to encrypt HTTP traffic over SSL. The file must be in PEM format.
ssl_cert = keys/server-cert.pem
Path to the SSL certificate key. Optional, default is empty.
Server uses this private key to encrypt HTTP traffic over SSL. The file must be in PEM format.
ssl_key = keys/server-key.pem
Max common subtree document cache size, per-query. Optional, default is 0 (disabled).
Limits RAM usage of a common subtree optimizer (see multi-queries). At most this much RAM will be spent to cache document entries per each query. Setting the limit to 0 disables the optimizer.
subtree_docs_cache = 8M
Max common subtree hit cache size, per-query. Optional, default is 0 (disabled).
Limits RAM usage of a common subtree optimizer (see multi-queries). At most this much RAM will be spent to cache keyword occurrences (hits) per each query. Setting the limit to 0 disables the optimizer.
subtree_hits_cache = 16M
Maximum stack size for a job (coroutine, one search query may cause multiple jobs/coroutines). Optional, default is unlimited.
Each job has its own stack of 128K. When you run a query, it is checked for how much stack it requires. If the default 128K is enough, it is simply processed. If it needs more, another job with an increased stack is scheduled, which continues processing. The maximum size of such an advanced stack is limited by this setting.
Setting the value to a reasonably high level will help with processing very deep queries, without the implication that overall RAM consumption will grow too high. For example, setting it to 1G does not imply that every new job will take 1G of RAM; if we see that it requires, let's say, a 100M stack, we just allocate 100M for the job. Other jobs at the same time will be running with their default 128K stack. In the same way we can run even more complex queries that need 500M. Only if we see internally that the job requires more than 1G of stack will we fail and report that thread_stack is too low.
However, in practice even a query which needs 16M of stack is often too complex to parse, and consumes too much time and resources to be processed. So the daemon will process it, but limiting such queries with the thread_stack setting looks quite reasonable.
thread_stack = 8M
Whether to unlink .old index copies on successful rotation. Optional, default is 1 (do unlink).
unlink_old = 0
Threaded server watchdog. Optional, default is 1 (watchdog enabled).
When a Manticore query crashes, it can take down the entire server. With the watchdog feature enabled,
searchd additionally keeps a separate lightweight process that monitors the main server process, and automatically restarts the latter in case of abnormal termination. Watchdog is enabled by default.
watchdog = 0 # disable watchdog
Lemmatizer dictionaries base path. Optional, default is /usr/local/share (as in --datadir switch to ./configure script).
Our lemmatizer implementation (see Morphology for a discussion of what lemmatizers are) is dictionary driven. lemmatizer_base directive configures the base dictionary path. File names are hardcoded and specific to a given lemmatizer; the Russian lemmatizer uses ru.pak dictionary file. The dictionaries can be obtained from the Manticore website (https://manticoresearch.com/downloads/).
lemmatizer_base = /usr/local/share/sphinx/dicts/
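For example, to enable the Russian lemmatizer on an index (the index name is illustrative; the ru.pak dictionary must be present under the base path):

```ini
lemmatizer_base = /usr/local/share/sphinx/dicts/

index my_index
{
    # the Russian lemmatizer loads ru.pak from lemmatizer_base
    morphology = lemmatize_ru
}
```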
Merge Real-Time index chunks during
OPTIMIZE operation from smaller to bigger. Progressive merge is faster and reads/writes less data. Enabled by default. If disabled, chunks are merged from first to last created.
Whether and how to auto-convert key names within JSON attributes. Known value is 'lowercase'. Optional, default value is unspecified (do not convert anything).
When this directive is set to 'lowercase', key names within JSON attributes will be automatically brought to lower case when indexing. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe2 sources will all be affected.
json_autoconv_keynames = lowercase
Automatically detect and convert possible JSON strings that represent numbers, into numeric attributes. Optional, default value is 0 (do not convert strings into numbers).
When this option is 1, values such as "1234" will be indexed as numbers instead of strings; if the option is 0, such values will be indexed as strings. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe2 sources will all be affected.
json_autoconv_numbers = 1
What to do if JSON format errors are found. Optional, default value is
ignore_attr (ignore errors). Applies only to
By default, JSON format errors are ignored (
ignore_attr) and the indexer tool will just show a warning. Setting this option to
fail_index will instead make indexing fail at the first JSON format error.
on_json_attr_error = ignore_attr
Trusted location for the dynamic libraries (UDFs). Optional, default is empty (no location).
Specifies the trusted directory from which the UDF libraries can be loaded.
plugin_dir = /usr/local/sphinx/lib
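Once plugin_dir is set, UDF libraries placed there can be loaded over the SQL interface; a sketch (library, function, index and column names are illustrative):

```sql
CREATE FUNCTION testfunc RETURNS INT SONAME 'udfexample.so';
SELECT id, testfunc(group_id) FROM myindex;
DROP FUNCTION testfunc;
```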