Query logging can be enabled by setting the query_log directive in the searchd section of the configuration file:
searchd {
...
query_log = /var/log/query.log
...
}
Queries can also be sent to syslog by specifying syslog instead of a file path. In this case, all search queries will be sent to the syslog daemon with LOG_INFO priority, prefixed with [query] instead of a timestamp. Only the plain log format is supported for syslog.
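For example, a configuration routing queries to syslog could look like this:
searchd {
...
query_log = syslog
...
}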
Two query log formats are supported:
- plain: recommended if most of your queries are purely full-text, or if you just don't care about the non-full-text components of your queries (filtering by attributes, sorting, grouping, etc.). You can't replay queries logged in the plain format.
- sphinxql (default): recommended in all other cases. It also provides an easy way to replay logged queries.
With the plain log format, Manticore logs all successfully executed search queries into a query log file. Here's an example:
[Fri Jun 29 21:17:58 2007] 0.004 sec 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test
The log format is as follows:
[query-date] real-time wall-time [match-mode/filters-count/sort-mode total-matches (offset,limit) @groupby-attr] [table-name] query
- real-time is the time elapsed from the start to the finish of the query
- wall-time is like real-time, but excludes the time spent waiting for agents and merging result sets from them
match-mode can have one of the following values:
- "all" for SPH_MATCH_ALL mode;
- "any" for SPH_MATCH_ANY mode;
- "phr" for SPH_MATCH_PHRASE mode;
- "bool" for SPH_MATCH_BOOLEAN mode;
- "ext" for SPH_MATCH_EXTENDED mode;
- "ext2" for SPH_MATCH_EXTENDED2 mode;
- "scan" if the full scan mode was used, either because it was specified with SPH_MATCH_FULLSCAN or because the query was empty.
sort-mode can have one of the following values:
- "rel" for SPH_SORT_RELEVANCE mode;
- "attr-" for SPH_SORT_ATTR_DESC mode;
- "attr+" for SPH_SORT_ATTR_ASC mode;
- "tsegs" for SPH_SORT_TIME_SEGMENTS mode;
- "ext" for SPH_SORT_EXTENDED mode.
Note: the SPH_* modes are specific to the legacy Sphinx interface. In most cases, the SQL and HTTP interfaces will log ext2 as the match-mode and ext or rel as the sort-mode.
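For illustration, a plain-format log line can be pulled apart with a short script. This is a sketch, not an official parser: the regular expression covers only the fields described above and none of the optional iostats/cpustats brackets.

```python
import re

# Sketch of a parser for the plain query log format described above:
# [query-date] real-time wall-time [match-mode/filters-count/sort-mode
#   total-matches (offset,limit) @groupby-attr] [table-name] query
LINE_RE = re.compile(
    r"\[(?P<date>[^\]]+)\] "
    r"(?P<real>[\d.]+) sec (?P<wall>[\d.]+) sec "
    r"\[(?P<match_mode>\w+)/(?P<filters>\d+)/(?P<sort_mode>[\w+-]+) "
    r"(?P<total>\d+) \((?P<offset>\d+),(?P<limit>\d+)\)"
    r"(?: @(?P<groupby>\w+))?\] "
    r"\[(?P<table>[^\]]+)\] (?P<query>.*)"
)

def parse_plain_log_line(line):
    """Return a dict of fields from one plain-format log line, or None."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

entry = parse_plain_log_line(
    "[Fri Jun 29 21:20:34 2007] 0.024 sec 0.024 sec "
    "[all/0/rel 19886 (0,20) @channel_id] [lj] test"
)
print(entry["match_mode"], entry["sort_mode"], entry["groupby"])
```

A script like this could, for instance, aggregate queries by match-mode or by table to spot where time is being spent.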
If Manticore was started with --iostats (or it was enabled via SET GLOBAL iostats=1), the corresponding metrics will be included in the log. A query log entry might then take the following form:
[Fri Jun 29 21:17:58 2021] 0.004 sec [all/0/rel 35254 (0,20)] [lj] [ios=6 kb=111.1 ms=0.5] test
where:
- ios - number of file I/O operations carried out;
- kb - amount of data in kilobytes read from the table files;
- ms - time in milliseconds spent on I/O operations.
If Manticore was started with --cpustats (or it was enabled via SET GLOBAL cpustats=1), the cpums metric will be included in the log. The query log will then look like this:
[Fri Jun 29 21:17:58 2021] 0.004 sec [all/0/rel 35254 (0,20)] [lj] [ios=6 kb=111.1 ms=0.5 cpums=0.3] test
where cpums is the time in milliseconds spent on CPU while processing the query.
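As noted above, both counters can also be toggled at runtime through the SQL interface, without restarting the daemon:
SET GLOBAL iostats = 1;
SET GLOBAL cpustats = 1;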
The sphinxql format is the default; it can be set explicitly via the searchd setting query_log_format:
searchd {
...
query_log = /var/log/query.log
query_log_format = sphinxql
...
}
In this format, the example from the previous section would look as follows. (Wrapped below for readability, but with just one query per line in the actual log.)
/* Fri Jun 29 21:17:58.609 2007 conn 2 real 0.004 wall 0.004 found 35254 */
SELECT * FROM test WHERE MATCH('test') OPTION ranker=proximity;
/* Fri Jun 29 21:20:34.555 2007 conn 3 real 0.024 wall 0.024 found 19886 */
SELECT * FROM test WHERE MATCH('test') GROUP BY channel_id OPTION ranker=proximity;
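Because each logged statement is valid SQL, replaying boils down to stripping the metadata comment lines. A minimal sketch, assuming the one-comment-line-then-one-statement layout shown above:

```python
# Sketch: extract replayable SQL statements from a sphinxql-format query log.
# Assumes each /* ... */ metadata comment sits on its own line before the statement.

def extract_statements(log_text):
    """Return the SQL statements from a sphinxql-format query log."""
    statements = []
    for line in log_text.splitlines():
        line = line.strip()
        # Skip empty lines and the metadata comments preceding each statement
        if not line or line.startswith("/*"):
            continue
        statements.append(line)
    return statements

log_sample = """\
/* Fri Jun 29 21:17:58.609 2007 conn 2 real 0.004 wall 0.004 found 35254 */
SELECT * FROM test WHERE MATCH('test') OPTION ranker=proximity;
"""
print(extract_statements(log_sample))
```

The extracted statements could then be fed back through the SQL interface (for example, via a MySQL client connected to Manticore) to replay them.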
Note that all requests are logged in this format, including those sent via SphinxAPI and SphinxSE, not just those sent via SQL. Also note that this kind of logging works only with plain log files and will not work if you use the 'syslog' service for logging.
The features of Manticore SQL log format compared to the plain text one are as follows:
- Full statement data will be logged where possible.
- Errors and warnings are logged.
- Query log can be replayed.
- Additional performance counters (currently, per-agent distributed query times) are logged.
Use sphinxql:compact_in to shorten the IN() clauses in the log if they contain too many values.
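For example, assuming compact_in is appended to the format name in query_log_format (check the query_log_format reference for the exact syntax):
searchd {
...
query_log = /var/log/query.log
query_log_format = sphinxql:compact_in
...
}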
Every request (both SphinxAPI and SQL) results in exactly one log line. Every log line must be a valid Manticore SQL statement that reconstructs the full request, except when the logged request is too big and needs shortening for performance reasons. Additional messages, counters, etc. can be logged in the comment section after the request.
By default, all queries are logged. To log only queries with execution times exceeding a specified limit, use the query_log_min_msec directive:
searchd {
...
query_log = /var/log/query.log
query_log_min_msec = 1000
...
}
The expected unit of measurement is milliseconds, but time suffix expressions can be used as well, for example:
searchd {
...
query_log = /var/log/query.log
query_log_min_msec = 1s
...
}
By default, the searchd and query log files are created with permissions 600, so only the user Manticore runs under and root can read them. query_log_mode allows setting different permissions. This can be handy to let other users read the log files (for example, monitoring solutions running under non-root users).
searchd {
...
query_log = /var/log/query.log
query_log_mode = 666
...
}
By default, the Manticore search daemon logs all runtime events into a searchd.log file in the directory from which searchd was started. On Linux, the log can normally be found at /var/log/manticore/searchd.log.
The log file path/name can be overridden via the log setting in the searchd section of the configuration file:
searchd {
...
log = /custom/path/to/searchd.log
...
}
- You can also use syslog as the file name. In this case, the events will be sent to your server's syslog daemon.
- In some cases, you might want to use /dev/stdout as the file name. On Linux, Manticore will then just output the events to standard output, which can be useful in Docker/Kubernetes environments.
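For instance, a containerized deployment could route the daemon log to standard output like this:
searchd {
...
log = /dev/stdout
...
}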
Binary logging is a recovery mechanism for Real-Time table data, and also for attribute updates of plain tables, that would otherwise only be stored in RAM until flush. With binary logs enabled, searchd writes every transaction to the binlog file and uses that for recovery after an unclean shutdown. On clean shutdown, RAM chunks are saved to disk, and then all binlog files are unlinked.
Binary logging is enabled by default. The default location for binlog.*
files in Linux is /var/lib/manticore/data/
.
In RT mode, the binary logs are saved in the data_dir folder, unless specified otherwise.
Binary logging can be disabled by setting binlog_path
to empty:
searchd {
...
binlog_path = # disable logging
...
}
Disabling binary logging improves performance for Real-Time tables, but puts their data at risk.
The directive can be used to set a custom path:
searchd {
...
binlog_path = /var/data
...
}
When logging is enabled, every transaction committed into an RT table is written into a log file. Logs are then automatically replayed on startup after an unclean shutdown, recovering the logged changes.
During normal operation, a new binlog file is opened every time the binlog_max_log_size limit is reached. Older, already closed binlog files are kept until all of the transactions stored in them (from all tables) are flushed as a disk chunk. Setting the limit to 0 essentially prevents the binlog from being unlinked while searchd is running; however, it will still be unlinked on clean shutdown. By default, there is no limit on the log file size.
binlog_max_log_size = 16M
There are three different binlog flushing strategies, controlled by the binlog_flush directive:
- 0: flush and sync every second. Best performance, but up to 1 second's worth of committed transactions can be lost on either a daemon crash or an OS/hardware crash.
- 1: flush and sync every transaction. Worst performance, but every committed transaction's data is guaranteed to be saved.
- 2: flush every transaction, sync every second. Good performance, and every committed transaction is guaranteed to be saved in case of a daemon crash. However, in case of an OS/hardware crash, up to 1 second's worth of committed transactions can be lost.
The default mode is 2: flush every transaction, sync every second.
searchd {
...
binlog_flush = 1 # ultimate safety, low speed
...
}
On recovery after an unclean shutdown, binlogs are replayed and all logged transactions since the last good on-disk state are restored. Transactions are checksummed, so in case of binlog file corruption, garbage data will not be replayed; such a broken transaction will be detected and will stop the replay. Transactions also start with a magic marker and are timestamped, so in case of binlog damage in the middle of the file, it is technically possible to skip broken transactions and keep replaying from the next good one, and/or to replay transactions up to a given timestamp (point-in-time recovery), but none of that is implemented yet.
Intensive updating of a small RT table that fully fits into a RAM chunk will lead to an ever-growing binlog that can never be unlinked until a clean shutdown. Binlogs are essentially append-only deltas against the last known good saved state on disk, and they cannot be unlinked unless the RAM chunk gets saved. An ever-growing binlog is not very good for disk use and crash recovery time. To avoid this, you can configure searchd to perform periodic RAM chunk flushes using the rt_flush_period directive. With periodic flushes enabled, searchd will keep a separate thread that checks whether the RT tables' RAM chunks need to be written back to disk. Once that happens, the respective binlogs can be (and are) safely unlinked.
searchd {
...
rt_flush_period = 3600 # 1 hour
...
}
By default, the RT flush period is set to 10 hours.
Note that rt_flush_period only controls the frequency at which the checks happen. There is no guarantee that a particular RAM chunk will get saved. For instance, it does not make sense to regularly re-save a huge RAM chunk that only receives a few rows' worth of updates. Manticore automatically determines whether to actually perform the flush using a few heuristics.