▪️ Logging
Query logging can be enabled by setting the query_log directive in the searchd section of the configuration file:
searchd {
...
query_log = /var/log/query.log
...
}
Queries can also be sent to syslog by setting syslog instead of a file path. In this case all search queries will be sent to the syslog daemon with LOG_INFO priority, prefixed with [query] instead of a timestamp. Only the plain log format is supported for syslog. To use the syslog option Manticore must be built with --with-syslog (official packages come with syslog support).
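A minimal sketch of that setup, using the query_log directive described above:
searchd {
...
query_log = syslog
...
}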
Two query log formats are supported. The plain text format is still the default one. While it might be more convenient for manual monitoring and review, it only logs search queries but not the other types of requests, does not always contain the complete search query data, and is harder (and sometimes impossible) to replay for benchmarking purposes. The sphinxql format alleviates that: it aims to be complete and re-playable, at the cost of brevity and readability.
By default, searchd
logs all successfully executed search queries into a query log file. Here's an example:
[Fri Jun 29 21:17:58 2007] 0.004 sec 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test
This log format is as follows:
[query-date] real-time wall-time [match-mode/filters-count/sort-mode total-matches (offset,limit) @groupby-attr] [index-name] query
- real-time is the time measured from the start to the finish of the query
- wall-time is like real-time, but excluding the time spent waiting for agents and merging result sets
Match mode can take one of the following values:
- "all" for
SPH_MATCH_ALL
mode; - "any" for
SPH_MATCH_ANY
mode; - "phr" for
SPH_MATCH_PHRASE
mode; - "bool" for
SPH_MATCH_BOOLEAN
mode; - "ext" for
SPH_MATCH_EXTENDED
mode; - "ext2" for
SPH_MATCH_EXTENDED2
mode; - "scan" if the full scan mode was used, either by being specified with
SPH_MATCH_FULLSCAN
Sort mode can take one of the following values:
- "rel" for
SPH_SORT_RELEVANCE
mode; - "attr-" for
SPH_SORT_ATTR_DESC
mode; - "attr+" for
SPH_SORT_ATTR_ASC
mode; - "tsegs" for
SPH_SORT_TIME_SEGMENTS
mode; - "ext" for
SPH_SORT_EXTENDED
mode.
Note: the SPH_* modes are specific to the legacy SphinxAPI interface. In most cases the SQL and HTTP interfaces will log ext2 as the match mode and ext or rel as the sort mode.
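As a worked example, here is how the second log line above maps onto this format:
Fri Jun 29 21:20:34 2007   query-date
0.024 sec                  real-time
0.024 sec                  wall-time
all                        match-mode (SPH_MATCH_ALL)
0                          filters-count
rel                        sort-mode (SPH_SORT_RELEVANCE)
19886                      total-matches
(0,20)                     (offset,limit)
@channel_id                groupby-attr
lj                         index-name
test                       query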
Additionally, if searchd was started with --iostats, there will be an additional block of data after the list of searched index(es).
A query log entry might take the form of:
[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] [ios=6 kb=111.1 ms=0.5] test
This additional block reports the I/O operations performed during the search: the number of file I/O operations carried out, the amount of data read from the index files in kilobytes, and the time spent on I/O operations (although there is a background processing component, the bulk of this time is I/O time).
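For reference, --iostats is a searchd command-line flag, so it is passed at startup; a minimal sketch of such an invocation (the configuration file path below is only a placeholder):
searchd --config /path/to/manticore.conf --iostats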
The SQL log format can be enabled with the searchd directive query_log_format:
searchd {
...
query_log = /var/log/query.log
query_log_format = sphinxql
...
}
In this format, the log entries from the example above would look as follows. (Wrapped below for readability, but with just one query per line in the actual log.)
/* Fri Jun 29 21:17:58.609 2007 conn 2 real 0.004 wall 0.004 found 35254 */
SELECT * FROM test WHERE MATCH('test') OPTION ranker=proximity;
/* Fri Jun 29 21:20:34.555 2007 conn 3 real 0.024 wall 0.024 found 19886 */
SELECT * FROM test WHERE MATCH('test') GROUP BY channel_id
OPTION ranker=proximity;
Note that all requests would be logged in this format, including those sent via SphinxAPI and SphinxSE, not just those sent via SQL. Also note that this kind of logging works only with plain log files and will not work if you use the 'syslog' service for logging.
The features of the Manticore SQL log format compared to the default plain text one are as follows.
- All request types should be logged. (This is still work in progress.)
- Full statement data will be logged where possible.
- Errors and warnings are logged.
- The log should be automatically re-playable via SphinxQL.
- Additional performance counters (currently, per-agent distributed query times) are logged.
Use sphinxql:compact_in to shorten the IN() clauses in the log when they contain too many values.
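Assuming the option is given as part of the query_log_format value (this exact syntax is an assumption inferred from the wording above, not a verified form; check the query_log_format reference for the precise spelling), the configuration might look like this:
searchd {
...
query_log = /var/log/query.log
query_log_format = sphinxql:compact_in
...
}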
Every request (including both SphinxAPI and SQL) must result in exactly one log line. All request types, including INSERT, CALL SNIPPETS, etc. will eventually get logged, though as of the time of this writing that is still a work in progress. Every log line must be a valid Manticore SQL statement that reconstructs the full request, except if the logged request is too big and needs shortening for performance reasons. Additional messages, counters, etc. can be logged in the comments section after the request.
By default all queries are logged. To log only queries whose execution time exceeds a specified limit, use the query_log_min_msec directive:
searchd {
...
query_log = /var/log/query.log
query_log_min_msec = 1000
...
}
The expected unit of measure is milliseconds, but time suffix expressions can be used as well, for example:
searchd {
...
query_log = /var/log/query.log
query_log_min_msec = 1s
...
}
By default the searchd and query log files are created with permission 600, so only the user the server runs under and root can read them. The query_log_mode directive allows setting a different permission. This can be handy to allow other users to read the log files (for example, monitoring solutions running under non-root users).
searchd {
...
query_log = /var/log/query.log
query_log_mode = 666
...
}
By default, searchd will try to log all runtime events into a searchd.log file in the current working directory. The path to a specific log file can be set with the log directive in the searchd section of the configuration file:
searchd {
...
log = /var/log/searchd.log
...
}
You can also use 'syslog' as the file name. In this case the events will be sent to the syslog daemon. To use the syslog option Manticore must be built with --with-syslog (official packages come with syslog support).
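A minimal sketch of that setup:
searchd {
...
log = syslog
...
}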
Binary logs are essentially a recovery mechanism for Real-Time index data, as well as for attribute updates of plain indexes that would otherwise be stored only in RAM until flushed. With binary logs enabled, searchd writes every given transaction to the binlog file, and uses that for recovery after an unclean shutdown. On clean shutdown, RAM chunks are saved to disk, and then all the binlog files are unlinked.
In RT mode binary logging is enabled by default and the binary log files are written inside the data_dir
folder.
Binary logging can be disabled by setting binlog_path
to empty:
searchd {
...
binlog_path = # disable logging
...
}
Disabling binary logging improves performance for Real-Time indexes, but puts their data at risk.
The directive can be used to set a custom path:
searchd {
...
binlog_path = /var/data
...
}
In Plain mode it is recommended to define binlog_path explicitly. Otherwise, the default path, which in most cases is the same as the working folder, may point to a folder with no write access (for example, /usr/local/var/data). In that case, searchd will not start at all.
When logging is enabled, every transaction committed into an RT index gets written into a log file. Logs are then automatically replayed on startup after an unclean shutdown, recovering the logged changes.
During normal operation, a new binlog file will be opened every time the binlog_max_log_size limit is reached. Older, already closed binlog files are kept until all of the transactions stored in them (from all indexes) are flushed as a disk chunk. Setting the limit to 0 essentially prevents the binlog from being unlinked at all while searchd is running; however, it will still be unlinked on clean shutdown. By default, there is no limit on the log file size. For example:
searchd {
...
binlog_max_log_size = 16M
...
}
There are 3 different binlog flushing strategies, controlled by the binlog_flush directive:
- 0, flush and sync every second. Best performance, but up to 1 second's worth of committed transactions can be lost on a server crash as well as on an OS/hardware crash.
- 1, flush and sync every transaction. Worst performance, but every committed transaction's data is guaranteed to be saved.
- 2, flush every transaction, sync every second. Good performance, and every committed transaction is guaranteed to be saved in case of a server crash. However, in case of an OS/hardware crash up to 1 second's worth of committed transactions can be lost.
For those familiar with MySQL and InnoDB, this directive is entirely similar to innodb_flush_log_at_trx_commit. The default mode is to flush every transaction and sync every second (mode 2).
searchd {
...
binlog_flush = 1 # ultimate safety, low speed
...
}
On recovery after an unclean shutdown, binlogs are replayed and all logged transactions since the last good on-disk state are restored. Transactions are checksummed, so in case of binlog file corruption garbage data will not be replayed; such a broken transaction will be detected and will stop replay. Transactions also start with a magic marker and are timestamped, so in case of binlog damage in the middle of the file, it is technically possible to skip broken transactions and keep replaying from the next good one, and/or to replay transactions only up to a given timestamp (point-in-time recovery), but none of that is implemented yet.
Intensive updating of a small RT index that fully fits into a RAM chunk will lead to an ever-growing binlog that can never be unlinked until clean shutdown. Binlogs are essentially append-only deltas against the last known good saved state on disk, and unless the RAM chunk gets saved, they cannot be unlinked. An ever-growing binlog is not very good for disk use and crash recovery time. To avoid this, you can configure searchd to perform periodic RAM chunk flushes using the rt_flush_period directive. With periodic flushes enabled, searchd will keep a separate thread that checks whether the RAM chunks of RT indexes need to be written back to disk. Once that happens, the respective binlogs can be (and are) safely unlinked.
searchd {
...
rt_flush_period = 3600 # 1 hour
...
}
By default the RT flush period is set to 10 hours.
Note that rt_flush_period only controls the frequency at which the checks happen. There is no guarantee that a particular RAM chunk will get saved. For instance, it does not make sense to regularly re-save a huge RAM chunk that only receives a few rows' worth of updates. The search server determines whether to actually perform the flush using a few heuristics.