Search options

The SQL SELECT statement supports a number of options that can be used to fine-tune search behaviour.

OPTION

SELECT ... OPTION <optionname>=<value> [ , ... ]

Example:

SELECT * FROM test WHERE MATCH('@title hello @body world')
OPTION ranker=bm25, max_matches=3000,
    field_weights=(title=10, body=3), agent_query_timeout=10000

Supported options and their allowed values are:

agent_query_timeout

Integer. Max time in milliseconds to wait for remote queries to complete, see this section.

boolean_simplify

0 or 1, enables simplifying the query to speed it up

comment

String, user comment that gets copied to a query log file

cutoff

Integer. Max found matches threshold.

field_weights

Named integer list (per-field user weights for ranking)

global_idf

Use global statistics (frequencies) from the global_idf file for IDF computations.

idf

Quoted, comma-separated list of IDF computation flags. Known flags are:

  • normalized: BM25 variant, idf = log((N-n+1)/n), as per Robertson et al
  • plain: plain variant, idf = log(N/n), as per Sparck-Jones
  • tfidf_normalized: additionally divide IDF by query word count, so that TF*IDF fits into [0, 1] range
  • tfidf_unnormalized: do not additionally divide IDF by query word count

Here N is the collection size and n is the number of matched documents.

The historically default IDF (Inverse Document Frequency) in Manticore is equivalent to OPTION idf='normalized,tfidf_normalized', and those normalizations may cause several undesired effects.

First, idf=normalized causes keyword penalization. For instance, if you search for 'the | something' and 'the' occurs in more than 50% of the documents, then documents with both keywords 'the' and 'something' will get less weight than documents with just the one keyword 'something'. Using OPTION idf=plain avoids this. Plain IDF varies in the [0, log(N)] range, and keywords are never penalized; the normalized IDF varies in the [-log(N), log(N)] range, and too frequent keywords are penalized.

Second, idf=tfidf_normalized causes IDF drift over queries. Historically, we additionally divided IDF by the query keyword count, so that the entire sum(tf*idf) over all keywords would still fit into the [0,1] range. However, that means that the queries 'word1' and 'word1 | nonmatchingword2' would assign different weights to exactly the same result set, because the IDFs for both word1 and nonmatchingword2 would be divided by 2. OPTION idf='tfidf_unnormalized' fixes that. Note that BM25, BM25A(), BM25F() ranking factors will be scaled accordingly once you disable this normalization.

IDF flags can be mixed; plain and normalized are mutually exclusive; tfidf_unnormalized and tfidf_normalized are mutually exclusive; and unspecified flags in such a mutually exclusive group take their defaults. That means that OPTION idf=plain is equivalent to a complete OPTION idf='plain,tfidf_normalized' specification.
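
For example, to opt into the plain, query-independent IDF behaviour described above (a sketch, reusing the test index from the OPTION example at the top of this section):

SELECT * FROM test WHERE MATCH('hello world')
OPTION idf='plain,tfidf_unnormalized';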

local_df

0 or 1. Automatically sum DFs over all the local parts of a distributed index, so that the IDF is consistent (and precise) over a locally sharded index.

index_weights

Named integer list. Per-index user weights for ranking.

max_matches

Integer. Per-query max matches value.

Maximum number of matches that the server keeps in RAM for each index and can return to the client. Default is 1000.

Introduced in order to control and limit RAM usage, the max_matches setting defines how many matches will be kept in RAM while searching each index. Every match found is still processed, but only the best N of them are kept in memory and returned to the client in the end. Assume that the index contains 2,000,000 matches for the query. You rarely (if ever) need to retrieve all of them. Rather, you need to scan all of them, but only choose the best, at most, say, 500 by some criteria (i.e. sorted by relevance, or price, or anything else), and display those 500 matches to the end user in pages of 20 to 100 matches. Tracking only the best 500 matches is much more RAM and CPU efficient than keeping all 2,000,000 matches, sorting them, and then discarding everything but the first 20 needed to display the search results page. max_matches controls N in that "best N" amount.

This parameter noticeably affects per-query RAM and CPU usage. Values of 1,000 to 10,000 are generally fine, but higher limits must be used with care. Recklessly raising max_matches to 1,000,000 means that searchd will have to allocate and initialize 1-million-entry matches buffer for every query. That will obviously increase per-query RAM usage, and in some cases can also noticeably impact performance.
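
For example, a sketch of the paging scenario described above (assuming the books index used in the highlighting examples below): the server tracks at most the best 500 matches while scanning, and LIMIT then picks one page out of those.

SELECT * FROM books WHERE MATCH('robots')
LIMIT 0,20
OPTION max_matches=500;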

max_query_time

Sets the maximum search query time, in milliseconds. Must be a non-negative integer. The default value is 0, which means "do not limit". Local search queries will be stopped once that much time has elapsed. Note that if you're performing a search which queries several local indexes, this limit applies to each index separately. Also note that it may slightly increase the query's response time due to the overhead of constantly checking whether it's time to stop the query.
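
For instance, a sketch that stops local searching after roughly 100 milliseconds:

SELECT * FROM books WHERE MATCH('robots') OPTION max_query_time=100;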

max_predicted_time

Integer. Max predicted search time, see predicted_time_costs.

ranker

Any of:

  • proximity_bm25
  • bm25
  • none
  • wordcount
  • proximity
  • matchany
  • fieldmask
  • sph04
  • expr
  • export

Refer to Search results ranking for more details on each ranker.

retry_count

Integer. Distributed retries count.

retry_delay

Integer. Distributed retry delay, msec.

reverse_scan

0 or 1, lets you control the order in which full-scan query processes the rows.

sort_method

  • pq - priority queue, set by default
  • kbuffer - gives faster sorting for already pre-sorted data, e.g. index data sorted by id

The result set is the same in both cases; picking one option or the other may just improve (or worsen!) performance.
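
For instance, when the result set is ordered by id and the index data is itself sorted by id, kbuffer may be the faster choice (a sketch):

SELECT * FROM books WHERE MATCH('robots')
ORDER BY id ASC
OPTION sort_method=kbuffer;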

threads

Limits the maximum number of threads used for processing the current query. Default: no limit (the query can occupy all threads as defined globally). For a batch of queries the option must be attached to the very first query in the batch; it is applied when the working queue is created and is then effective for the whole batch. This option has the same meaning as the option max_threads_per_query, but applies only to the current query or batch of queries.
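
For example, a sketch that keeps a single query from occupying more than one thread:

SELECT * FROM books WHERE MATCH('robots') OPTION threads=1;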

rand_seed

Lets you specify a specific integer seed value for an ORDER BY RAND() query, for example: ... OPTION rand_seed=1234. By default, a new and different seed value is autogenerated for every query.

low_priority

Runs the query with low priority in terms of Linux CPU scheduling. Consider also OPTION threads=1 instead, or use that together with low_priority, as it might be better in some use cases.

expand_keywords

0 or 1, expand keywords with exact forms and/or stars when possible. Refer to expand_keywords for more details.

token_filter

Quoted, colon-separated string of library name, plugin name, and an optional string of settings (library name:plugin name:settings). A query-time token filter gets created on each search for every full-text index involved, and lets you implement a custom tokenizer that produces tokens according to custom rules.

SELECT * FROM index WHERE MATCH ('yes@no') OPTION token_filter='mylib.so:blend:@'

morphology

Setting it to none replaces all query terms with their exact forms if the index was built with index_exact_words enabled. This is useful to prevent stemming or lemmatizing query terms.
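
For example, assuming the index was built with index_exact_words enabled, a sketch that prevents query terms from being stemmed:

SELECT * FROM books WHERE MATCH('running') OPTION morphology=none;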

Highlighting

Highlighting allows you to get highlighted text fragments (called snippets) from documents that contain matching keywords.

SQL's HIGHLIGHT() function, "highlight" property in json queries via HTTP and highlight() function in the PHP client all use built-in document storage for retrieving original field contents (enabled by default).

SELECT HIGHLIGHT() FROM books WHERE MATCH('try');
Response
+----------------------------------------------------------+
| highlight()                                              |
+----------------------------------------------------------+
| Don`t <strong>try</strong> to compete in childishness, said Bliss. |
+----------------------------------------------------------+
1 row in set (0.00 sec)

When using SQL to highlight search results, you get snippets from different fields concatenated into a single string. This is a limitation of the MySQL protocol. You can fine-tune the concatenation separators with the field_separator and snippet_separator options, see below.

When running json queries via HTTP or using the PHP client, there are no such limitations and the result set contains an array of fields which contains arrays of snippets (without the separators).

Note that snippet generation options such as limit, limit_words, limit_snippets are applied to each field separately (by default). You can change this behavior using the limits_per_field option, but it may lead to undesirable results. For example, one of the fields may have matching keywords, but no snippets from this field are included in the result set because they didn't rank as high as the snippets from the other fields in the highlighting engine.

The highlighting algorithm currently favors better snippets (with closer phrase matches), and then snippets with keywords not yet included in the result. Generally, it tries to highlight the best match for the query, and to highlight all the query keywords, as far as the limits allow. If there are no matches in the current field, the beginning of the document, trimmed down according to the limits, will be returned by default. You can also return an empty string instead by setting the allow_empty option to 1.

Highlighting is performed during a so-called post-limit stage, meaning that snippet generation is postponed not just until the entire final result set is ready, but even until after the LIMIT clause is applied. For example, with a LIMIT 20,10 clause, the HIGHLIGHT() function will be called at most 10 times.
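
For instance, in the following sketch HIGHLIGHT() is invoked for at most 10 documents regardless of how many documents match:

SELECT HIGHLIGHT() FROM books WHERE MATCH('robots') LIMIT 20,10;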

Highlighting options

There are several additional optional highlighting options that can be used to fine-tune snippet generation. Most of them are common to SQL, HTTP and PHP client.

before_match

A string to insert before a keyword match. A %SNIPPET_ID% macro can be used in this string. The first match of the macro is replaced with an incrementing snippet number within a current snippet. Numbering starts at 1 by default but can be overridden with start_snippet_id option. %SNIPPET_ID% restarts at the start of every new document. Default is <strong>.
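
For example, a sketch that numbers each snippet using the %SNIPPET_ID% macro:

SELECT HIGHLIGHT({before_match='[%SNIPPET_ID%] <b>', after_match='</b>'})
FROM books WHERE MATCH('one|robots');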

after_match

A string to insert after a keyword match. Default is </strong>.

limit

Maximum snippet size, in symbols (codepoints). Default is 256. Per-field by default, see limits_per_field.

limit_words

Limits the maximum number of words that can be included in the result. Note the limit applies to any words, and not just the matched keywords to highlight. For example, if we are highlighting Mary and a snippet Mary had a little lamb is selected, then it contributes 5 words to this limit, not just 1. Default is 0 (no limit). Per-field by default, see limits_per_field.

limit_snippets

Limits the maximum number of snippets that can be included in the result. Default is 0 (no limit). Per-field by default, see limits_per_field.

limits_per_field

Selects whether limit, limit_words and limit_snippets work as individual limits in every field of the document being highlighted or as global limits for the whole document. Setting this option to 0 means that all combined highlighting results for one document must be within the specified limits. The downside is that you may get several snippets highlighted in one field and none in another if the highlighting engine decides that they are more relevant. Default is 1 (use per-field limits).
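
For example, a sketch that applies the limits to the document as a whole rather than to each field:

SELECT HIGHLIGHT({limits_per_field=0, limit=100})
FROM books WHERE MATCH('one|robots');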

around

How many words to pick around each matching keyword block. Default is 5.

use_boundaries

Whether to additionally break snippets by phrase boundary characters, as configured in index settings with phrase_boundary directive. Default is 0 (don't use boundaries).

weight_order

Whether to sort the extracted snippets in order of relevance (decreasing weight), or in order of appearance in the document (increasing position). Default is 0 (don't use weight order).

force_all_words

Ignores length limit until the result includes all the keywords. Default is 0 (don't force all keywords).

start_snippet_id

Specifies the starting value of %SNIPPET_ID% macro (that gets detected and expanded in before_match, after_match strings). Default is 1.

html_strip_mode

HTML stripping mode setting. String; allowed values are none, strip, index, and retain. Defaults to index, which means that index settings will be used. The values none and strip forcibly skip or apply stripping regardless of index settings; retain keeps HTML markup and protects it from highlighting. The retain mode can only be used when highlighting full documents and thus requires that no snippet size limits are set.

allow_empty

Allows empty string to be returned as highlighting result when no snippets could be generated in the current field (no keywords match, or no snippets fit the limit). By default, the beginning of original text would be returned instead of an empty string. Default is 0 (don't allow empty result).

snippet_boundary

Ensures that snippets do not cross a sentence, paragraph, or zone boundary (when used with an index that has the respective indexing settings enabled). String, allowed values are sentence, paragraph, and zone.

emit_zones

Emits an HTML tag with an enclosing zone name before each snippet. Default is 0 (don't emit zone names).

force_snippets

Whether to force snippet generation even if limits allow to highlight whole text. Default is 0 (don't force snippet generation).

SELECT HIGHLIGHT({limit=50}) FROM books WHERE MATCH('try|gets|down|said');
Response
+---------------------------------------------------------------------------+
| highlight({limit=50})                                                     |
+---------------------------------------------------------------------------+
|  ... , "It <strong>gets</strong> infantile pleasure  ...  to knock it <strong>down</strong>." |
| Don`t <strong>try</strong> to compete in childishness, <strong>said</strong> Bliss.           |
|  ...  a small room. Bander <strong>said</strong>, "Come, half-humans, I ...         |
+---------------------------------------------------------------------------+
3 rows in set (0.00 sec)

Highlighting via SQL

The HIGHLIGHT() function can be used to highlight search results. Here's the syntax:

HIGHLIGHT([options], [field_list], [query] )

By default, it works with no arguments.

SQL
SELECT HIGHLIGHT() FROM books WHERE MATCH('before');
Response
+-----------------------------------------------------------+
| highlight()                                               |
+-----------------------------------------------------------+
| A door opened <strong>before</strong> them, revealing a small room. |
+-----------------------------------------------------------+
1 row in set (0.00 sec)

HIGHLIGHT() fetches all available full-text fields from document storage and highlights them against the given query. It supports field syntax in queries. Field text is separated by field_separator, which can be changed in the options.

SQL
SELECT HIGHLIGHT() FROM books WHERE MATCH('@title one');
Response
+-----------------+
| highlight()     |
+-----------------+
| Book <strong>one</strong> |
+-----------------+
1 row in set (0.00 sec)

Optional first argument in HIGHLIGHT() is the list of options.

SQL
SELECT HIGHLIGHT({before_match='[match]',after_match='[/match]'}) FROM books WHERE MATCH('@title one');
Response
+------------------------------------------------------------+
| highlight({before_match='[match]',after_match='[/match]'}) |
+------------------------------------------------------------+
| Book [match]one[/match]                                    |
+------------------------------------------------------------+
1 row in set (0.00 sec)

Optional second argument is a string containing a field or a comma-separated list of fields. If this argument is present, only the specified fields will be fetched from document storage and highlighted. An empty string as a second argument means "fetch all available fields".

SQL
SELECT HIGHLIGHT({},'title,content') FROM books WHERE MATCH('one|robots');
Response
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| highlight({},'title,content')                                                                                                                                                         |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Book <strong>one</strong> | They followed Bander. The <strong>robots</strong> remained at a polite distance, but their presence was a constantly felt threat.                                             |
| Bander ushered all three into the room. <strong>One</strong> of the <strong>robots</strong> followed as well. Bander gestured the other <strong>robots</strong> away and entered itself. The door closed behind it. |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

Another way to use the second argument is to specify a string attribute or field name without quotes. In this case the supplied string will be highlighted against the provided query, but field syntax will be ignored.

SQL
SELECT HIGHLIGHT({}, title) FROM books WHERE MATCH('one');
Response
+---------------------+
| highlight({},title) |
+---------------------+
| Book <strong>one</strong>     |
| Book five           |
+---------------------+
2 rows in set (0.00 sec)

Optional third argument is the query. It is used to highlight search results against a query different than the one used for searching.

SQL
SELECT HIGHLIGHT({},'title', 'five') FROM books WHERE MATCH('one');
Response
+-------------------------------+
| highlight({},'title', 'five') |
+-------------------------------+
| Book one                      |
| Book <strong>five</strong>              |
+-------------------------------+
2 rows in set (0.00 sec)

While HIGHLIGHT() is designed to work with stored full-text fields and string attributes, it can also be used to highlight arbitrary text. Note that if the query has any field search operators (@title hello @body world), the field part of them is ignored in this case.

SQL
SELECT HIGHLIGHT({},TO_STRING('some text to highlight'), 'highlight') FROM books WHERE MATCH('@title one');
Response
+----------------------------------------------------------------+
| highlight({},TO_STRING('some text to highlight'), 'highlight') |
+----------------------------------------------------------------+
| some text to <strong>highlight</strong>                                  |
+----------------------------------------------------------------+
1 row in set (0.00 sec)

Several options make sense only when generating a single string as a result (not an array of snippets). This only applies to SQL's HIGHLIGHT() function:

snippet_separator

A string to insert between snippets. Default is ' ... '.

field_separator

A string to insert between fields. Default is |.
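
For example, a sketch that changes both separators:

SELECT HIGHLIGHT({snippet_separator=' >>> ', field_separator=' /// '})
FROM books WHERE MATCH('one|robots');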

Another way to highlight text is to use the CALL SNIPPETS statement. It mostly duplicates HIGHLIGHT() functionality, but it can't use built-in document storage. It can, however, load source text from files.

Highlighting via HTTP

To highlight full-text search results in JSON queries via HTTP, field contents have to be stored in document storage (enabled by default). In the example below, the full-text fields content and title are fetched from document storage and highlighted against the query specified in the query clause.

Highlighted snippets are returned in the highlight property of the hits array.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": ["content"]
  }
}
Response
{
  "took":1,
  "timed_out":false,
  "hits":
  {
    "total":1,
    "hits":
    [
      {
        "_id":"5",
        "_score":1602,
        "_source":
        {
          "title":"Book five",
          "content":"Bander ushered all three into the room. One of the robots followed as well. Bander gestured the other robots away and entered itself. The door closed behind it."
        },
        "highlight":
        {
          "content":
          [
            "Bander ushered all three into the room. One of the robots followed as well. Bander gestured the other robots away <strong>and</strong> entered itself. The door closed behind it."
          ]
        }
      }
    ]
  }
}

To highlight all possible fields, pass an empty object as the highlight property.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight": {}
}
Response
{
  "took":1,
  "timed_out":false,
  "hits":
  {
    "total":1,
    "hits":
    [
      {
        "_id":"5",
        "_score":1602,
        "_source":
        {
          "title":"Book five",
          "content":"Bander ushered all three into the room. One of the robots followed as well. Bander gestured the other robots away and entered itself. The door closed behind it."
        },
        "highlight":
        {
          "title":
          [
            "Book five"
          ],
          "content":
          [
            "Bander ushered all three into the room. One of the robots followed as well. Bander gestured the other robots away <strong>and</strong> entered itself. The door closed behind it."
          ]
        }
      }
    ]
  }
}

In addition to common highlighting options, several synonyms are available for JSON queries via HTTP:

fields

fields object contains attribute names with options. It can also be an array of field names (without any options).

encoder

encoder can be set to default or html. When set to html, it retains HTML markup when highlighting. Works similarly to the html_strip_mode=retain option.

highlight_query

highlight_query makes it possible to highlight against a query other than our search query. Syntax is the same as in the main query.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "highlight_query": { "match": { "*":"into three five" } }
   }
}

pre_tags and post_tags

pre_tags and post_tags set the opening and closing tags for highlighted text snippets. They work similarly to the before_match and after_match options. Optional; the defaults are <strong> and </strong>.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "pre_tags": "before_",
    "post_tags": "_after"
   }
}

no_match_size

no_match_size works similarly to the allow_empty option. If set to 0, it acts as allow_empty=1, i.e. it allows an empty string to be returned as the highlighting result when a snippet could not be generated. Otherwise, the beginning of the field will be returned. Optional, default is 1.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "no_match_size": 0
  }
}

order

order sets the sorting order of extracted snippets. If set to "score", it sorts the extracted snippets in order of relevance. Optional. Works similarly to the weight_order option.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "order": "score"
  }
}

fragment_size

fragment_size sets the maximum snippet size in symbols. Can be global or per-field. Per-field options override global options. Optional, default is 256. Works similarly to the limit option.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "fragment_size": 100
  }
}

number_of_fragments

number_of_fragments limits the maximum number of snippets in the result. Just as fragment_size, it can be global or per-field. Optional, default is 0 (no limit). Works similarly to the limit_snippets option.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields": [ "content", "title" ],
    "number_of_fragments": 10
  }
}

limit, limit_words, limit_snippets

Options such as limit, limit_words, and limit_snippets can be set as global or per-field options. Global options are used as per-field limits unless per-field options override them. In the example below, the title field is highlighted with the default limit settings while the content field uses a different limit.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "fields":
    {
      "title": {},
      "content" : { "limit": 50 }
    }
  }
}

limits_per_field

Global limits can also be forced by specifying limits_per_field=0. Setting this option means that all combined highlighting results must be within the specified limits. The downside is that you may get several snippets highlighted in one field and none in another if the highlighting engine decides that they are more relevant.

HTTP
POST /search
{
  "index": "books",
  "query": { "match": { "content": "and first" } },
  "highlight":
  {
    "limits_per_field": 0,
    "fields":
    {
      "content" : { "limit": 50 }
    }
  }
}

CALL SNIPPETS

The CALL SNIPPETS statement builds a snippet from the provided data and query using the specified index settings. It can't access built-in document storage, which is why it's recommended to use the HIGHLIGHT() function instead.

The syntax is:

CALL SNIPPETS(data, index, query[, opt_value AS opt_name[, ...]])

data

data is the source data to extract a snippet from. It can be a single string, or a list of strings enclosed in round brackets (as in the example below).

index

index is the name of the index from which to take the text processing settings.

query

query is the full-text query to build snippets for.

opt_value and opt_name

opt_value and opt_name are snippet generation options.

SQL
CALL SNIPPETS(('this is my document text','this is my another text'), 'forum', 'is text', 5 AS around, 200 AS limit);
Response
+----------------------------------------+
| snippet                                |
+----------------------------------------+
| this <strong>is</strong> my document <strong>text</strong> |
| this <strong>is</strong> my another <strong>text</strong>  |
+----------------------------------------+
2 rows in set (0.02 sec)

Most options are the same as in the HIGHLIGHT() function. There are, however, several options that can only be used with CALL SNIPPETS. The following options can be used to highlight text stored in separate files:

load_files

Whether to handle the first argument as data to extract snippets from (default behavior), or to treat it as file names and load data from the specified files on the server side. Up to max_threads_per_query worker threads per request will be used to parallelize the work when this flag is enabled. Default is 0 (treat the first argument as data). To distribute snippet generation between remote agents, invoke snippet generation on a distributed index that contains only one(!) local agent and several remote agents. The snippets_file_prefix option is used to generate the final file name. E.g. when searchd is configured with snippets_file_prefix = /var/data_ and text.txt is provided as a file name, snippets will be generated from the content of /var/data_text.txt.

load_files_scattered

Works only with distributed snippets generation with remote agents. Source files for snippet generation can be distributed among different agents and the main server will merge all non-erroneous results. E.g. if one agent of the distributed index has file1.txt, another agent has file2.txt and you use CALL SNIPPETS with both of these files, searchd will merge agent results, so you will get results from both file1.txt and file2.txt. Default is 0.

If the load_files option is also enabled, the request will return an error if any of the files is not available anywhere. Otherwise (if load_files is not enabled) it will just return empty strings for all absent files. searchd does not pass this flag to agents, so agents do not generate a critical error if a file does not exist. If you want to be sure that all source files are loaded, set both load_files_scattered and load_files to 1. If the absence of some source files on some agents is not critical, set only load_files_scattered to 1.

SQL
CALL SNIPPETS(('data/doc1.txt','data/doc2.txt'), 'forum', 'is text', 1 AS load_files);
Response
+----------------------------------------+
| snippet                                |
+----------------------------------------+
| this <strong>is</strong> my document <strong>text</strong> |
| this <strong>is</strong> my another <strong>text</strong>  |
+----------------------------------------+
2 rows in set (0.02 sec)

Sorting and ranking

A query returns matches sorted. By default (if nothing is specified) they are sorted by relevance, which is equivalent to "ORDER BY weight() DESC, id ASC".

Currently the following result sorting modes are available:

  • default mode, that sorts by relevance in descending order (best matches first)
  • extended mode, that sorts by combination of columns in ASC/DESC order

Extended mode

ORDER BY weight() DESC, price ASC, id DESC

Extended mode is automatically switched on when you explicitly provide sorting rules by adding an ORDER BY clause.

In the clause you can use:

  • any combination of up to 5 columns, each followed by 'asc' or 'desc'. Functions and expressions are NOT allowed, except for the weight() function, which returns the relevance (rank, or weight) of the match.
  • random(). You can only specify 'order by random()' on its own; nothing else is allowed (e.g. order by id asc, random() is NOT allowed).

HTTP

Sorting by attributes

Query results can be sorted by one or more attributes. Example:

{
  "index":"test",
  "query":
  {
    "match": { "title": "what was" }
  },
  "sort": [ "_score", "id" ]
}

"sort" specifies an array of attributes and/or additional properties. Each element of the array can be an attribute name or "_score" if you want to sort by match weights. In that case sort order defaults to ascending for attributes and descending for _score.

You can also specify sort order explicitly. Example:

"sort":
[
  { "price":"asc" },
  "id"
]
  • asc: sort in ascending order
  • desc: sort in descending order

You can also use another syntax and specify sort order via the order property:

"sort":
[
  { "gid": { "order":"desc" } }
]

Sorting by MVA attributes is also supported in JSON queries. Sorting mode can be set via the mode property. The following modes are supported:

  • min: sort by minimum value
  • max: sort by maximum value

Example:

"sort":
[
  { "attr_mva": { "order":"desc", "mode":"max" } }
]

When sorting on an attribute, match weight (score) calculation is disabled by default (no ranker is used). You can enable weight calculation by setting the track_scores property to true:

{
  "index":"test",
  "track_scores":true,
  "query": { "match": { "title": "what was" } },
  "sort": [ { "gid": { "order":"desc" } } ]
}

Ranking overview

Ranking (also known as weighting) of search results can be defined as a process of computing a so-called relevance (weight) for every given matched document with regards to a given query that matched it. So relevance is in the end just a number attached to every document that estimates how relevant the document is to the query. Search results can then be sorted based on this number and/or some additional parameters, so that the most sought after results would come up higher on the results page.

There is no single standard one-size-fits-all way to rank any document in any scenario. Moreover, there can not ever be such a way, because relevance is subjective. As in, what seems relevant to you might not seem relevant to me. Hence, in general case it's not just hard to compute, it's theoretically impossible.

So ranking in Manticore is configurable. It has a notion of a so-called ranker. A ranker can formally be defined as a function that takes a document and a query as its input and produces a relevance value as output. In layman's terms, a ranker controls exactly how (using which specific algorithm) Manticore will assign weights to documents.

Available built-in rankers

Manticore ships with a number of built-in rankers suited for different purposes. A number of them use two factors: phrase proximity (aka LCS) and BM25. Phrase proximity works on the keyword positions, while BM25 works on the keyword frequencies. Basically, the better the degree of the phrase match between the document body and the query, the higher the phrase proximity (it maxes out when the document contains the entire query as a verbatim quote). And BM25 is higher when the document contains more rare words. We'll save the detailed discussion for later.

Currently implemented rankers are:

  • proximity_bm25, the default ranking mode that uses and combines both phrase proximity and BM25 ranking.
  • bm25, statistical ranking mode which uses BM25 ranking only (similar to most other full-text engines). This mode is faster, but may result in worse quality on queries which contain more than 1 keyword.
  • none, no ranking mode. This mode is obviously the fastest. A weight of 1 is assigned to all matches. This is sometimes called boolean searching that just matches the documents but does not rank them.
  • wordcount, ranking by the keyword occurrences count. This ranker computes the per-field keyword occurrence counts, then multiplies them by field weights, and sums the resulting values.
  • proximity returns raw phrase proximity value as a result. This mode is internally used to emulate SPH_MATCH_ALL queries.
  • matchany returns rank as it was computed in SPH_MATCH_ANY mode earlier, and is internally used to emulate SPH_MATCH_ANY queries.
  • fieldmask returns a 32-bit mask with N-th bit corresponding to N-th fulltext field, numbering from 0. The bit will only be set when the respective field has any keyword occurrences satisfying the query.
  • sph04 is generally based on the default 'proximity_bm25' ranker, but additionally boosts the matches when they occur in the very beginning or the very end of a text field. Thus, if a field equals the exact query, sph04 should rank it higher than a field that contains the exact query but is not equal to it. (For instance, when the query is "Hyde Park", a document entitled "Hyde Park" should be ranked higher than a one entitled "Hyde Park, London" or "The Hyde Park Cafe".)
  • expr lets you specify the ranking formula at run time. It exposes a number of internal text factors and lets you define how the final weight should be computed from those factors. You can find more details about its syntax and a reference of the available factors in a subsection below.

Ranker name is case insensitive. Example:

SELECT ... OPTION ranker=sph04;

Quick summary of the ranking factors

Name | Level | Type | Summary
max_lcs | query | int | maximum possible LCS value for the current query
bm25 | document | int | quick estimate of BM25(1.2, 0) without syntax support
bm25a(k1, b) | document | int | precise BM25() value with configurable K1, B constants and syntax support
bm25f(k1, b, {field=weight, ...}) | document | int | precise BM25F() value with extra configurable field weights
field_mask | document | int | bit mask of matched fields
query_word_count | document | int | number of unique inclusive keywords in a query
doc_word_count | document | int | number of unique keywords matched in the document
lcs | field | int | Longest Common Subsequence between query and document, in words
user_weight | field | int | user field weight
hit_count | field | int | total number of keyword occurrences
word_count | field | int | number of unique matched keywords
tf_idf | field | float | sum(tf*idf) over matched keywords == sum(idf) over occurrences
min_hit_pos | field | int | first matched occurrence position, in words, 1-based
min_best_span_pos | field | int | first maximum LCS span position, in words, 1-based
exact_hit | field | bool | whether query == field
min_idf | field | float | min(idf) over matched keywords
max_idf | field | float | max(idf) over matched keywords
sum_idf | field | float | sum(idf) over matched keywords
exact_order | field | bool | whether all query keywords were a) matched and b) in query order
min_gaps | field | int | minimum number of gaps between the matched keywords over the matching spans
lccs | field | int | Longest Common Contiguous Subsequence between query and document, in words
wlccs | field | float | Weighted Longest Common Contiguous Subsequence, sum(idf) over contiguous keyword spans
atc | field | float | Aggregate Term Closeness, log(1+sum(idf1*idf2*pow(distance, -1.75))) over the best pairs of keywords

Document-level ranking factors

A document-level factor is a numeric value computed by the ranking engine for every matched document with regards to the current query. It differs from a plain document attribute in that the attribute does not depend on the full-text query, while factors might. Those factors can be used anywhere in the ranking expression. Currently implemented document-level factors are listed below, followed by a short usage sketch:

  • bm25 (integer), a document-level BM25 estimate (computed without keyword occurrence filtering).
  • max_lcs (integer), a query-level maximum possible value that the sum(lcs*user_weight) expression can ever take. This can be useful for weight boost scaling. For instance, MATCHANY ranker formula uses this to guarantee that a full phrase match in any field ranks higher than any combination of partial matches in all fields.
  • field_mask (integer), a document-level 32-bit mask of matched fields.
  • query_word_count (integer), the number of unique keywords in a query, adjusted for a number of excluded keywords. For instance, both (one one one one) and (one !two) queries should assign a value of 1 to this factor, because there is just one unique non-excluded keyword.
  • doc_word_count (integer), the number of unique keywords matched in the entire document.
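
Since document-level factors can be used anywhere in the ranking expression, they can be mixed directly with aggregated field-level factors. A sketch using the expression ranker described below (the 10x multiplier is an arbitrary illustration):

SELECT *, weight() FROM books WHERE MATCH('hello world')
OPTION ranker=expr('sum(lcs*user_weight)*1000 + bm25 + 10*doc_word_count');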

Field-level ranking factors

A field-level factor is a numeric value computed by the ranking engine for every matched in-document text field with regards to the current query. As more than one field can be matched by a query, but the final weight needs to be a single integer value, these values need to be folded into a single one. To achieve that, field-level factors can only be used within a field aggregation function; they can not be used anywhere else in the expression. For example, you can not use (lcs+bm25) as your ranking expression, as lcs takes multiple values (one in every matched field). You should use (sum(lcs)+bm25) instead; that expression sums lcs over all matching fields, and then adds bm25 to that per-field sum. Currently implemented field-level factors are:

  • lcs (integer), the length of a maximum verbatim match between the document and the query, counted in words. LCS stands for Longest Common Subsequence (or Subset). Takes a minimum value of 1 when only stray keywords were matched in a field, and a maximum value of query keywords count when the entire query was matched in a field verbatim (in the exact query keywords order). For example, if the query is 'hello world' and the field contains these two words quoted from the query (that is, adjacent to each other, and exactly in the query order), lcs will be 2. For example, if the query is 'hello world program' and the field contains 'hello world', lcs will be 2. Note that any subset of the query keyword works, not just a subset of adjacent keywords. For example, if the query is 'hello world program' and the field contains 'hello (test program)', lcs will be 2 just as well, because both 'hello' and 'program' matched in the same respective positions as they were in the query. Finally, if the query is 'hello world program' and the field contains 'hello world program', lcs will be 3. (Hopefully that is unsurprising at this point.)

  • user_weight (integer), the user specified per-field weight (refer to OPTION field_weights in SQL). The weights default to 1 if not specified explicitly.

  • hit_count (integer), the number of keyword occurrences that matched in the field. Note that a single keyword may occur multiple times. For example, if 'hello' occurs 3 times in a field and 'world' occurs 5 times, hit_count will be 8.

  • word_count (integer), the number of unique keywords matched in the field. For example, if 'hello' and 'world' occur anywhere in a field, word_count will be 2, regardless of how many times both keywords occur.

  • tf_idf (float), the sum of TF*IDF over all the keywords matched in the field. IDF is the Inverse Document Frequency, a floating point value between 0 and 1 that describes how frequent the keyword is (basically, 0 for a keyword that occurs in every document indexed, and 1 for a unique keyword that occurs in just a single document). TF is the Term Frequency, the number of matched keyword occurrences in the field. As a side note, tf_idf is actually computed by summing IDF over all matched occurrences. That is by construction equivalent to summing TF*IDF over all matched keywords.

  • min_hit_pos (integer), the position of the first matched keyword occurrence, counted in words. Indexing begins from position 1.

  • min_best_span_pos (integer), the position of the first maximum LCS occurrences span. For example, assume that our query was 'hello world program' and 'hello world' subphrase was matched twice in the field, in positions 13 and 21. Assume that 'hello' and 'world' additionally occurred elsewhere in the field, but never next to each other and thus never as a subphrase match. In that case, min_best_span_pos will be 13. Note how for the single keyword queries min_best_span_pos will always equal min_hit_pos.

  • exact_hit (boolean), whether a query was an exact match of the entire current field. Used in the SPH04 ranker.

  • min_idf, max_idf, and sum_idf (float). These factors respectively represent the min(idf), max(idf) and sum(idf) over all keywords that were matched in the field.

  • exact_order (boolean). Whether all of the query keywords were matched in the field in the exact query order. For example, '(microsoft office)' query would yield exact_order=1 in a field with the following contents: '(We use Microsoft software in our office.)'. However, the very same query in a '(Our office is Microsoft free.)' field would yield exact_order=0.

  • min_gaps (integer), the minimum number of positional gaps between (just) the keywords matched in the field. Always 0 when less than 2 keywords match; always greater than or equal to 0 otherwise. For example, with a '[big wolf]' query, a '[big bad wolf]' field would yield min_gaps=1; '[big bad hairy wolf]' would yield min_gaps=2; '[the wolf was scary and big]' would yield min_gaps=3; etc. However, a field like '[i heard a wolf howl]' would yield min_gaps=0, because only one keyword would be matching in that field, and, naturally, there would be no gaps between the matched keywords.

    Therefore, this is a rather low-level, "raw" factor that you would most likely want to adjust before actually using for ranking. Specific adjustments depend heavily on your data and the resulting formula, but here are a few ideas you can start with: (a) any min_gaps based boosts could be simply ignored when word_count<2;

    (b) non-trivial min_gaps values (i.e. when word_count>=2) could be clamped with a certain "worst case" constant while trivial values (i.e. when min_gaps=0 and word_count<2) could be replaced by that constant;

    (c) a transfer function like 1/(1+min_gaps) could be applied (so that better, smaller min_gaps values would maximize it and worse, bigger min_gaps values would fall off slowly); and so on.

  • lccs (integer). Longest Common Contiguous Subsequence. A length of the longest subphrase that is common between the query and the document, computed in keywords.

    LCCS factor is rather similar to LCS but more restrictive, in a sense. While LCS could be greater than 1 even when no two query words are matched next to each other, LCCS will only be greater than 1 if there are exact, contiguous query subphrases in the document. For example, (one two three four five) query vs (one hundred three hundred five hundred) document would yield lcs=3, but lccs=1, because even though mutual dispositions of 3 keywords (one, three, five) match between the query and the document, no 2 matching positions are actually next to each other.

    Note that LCCS still does not differentiate between the frequent and rare keywords; for that, see WLCCS.

  • wlccs (float). Weighted Longest Common Contiguous Subsequence. A sum of IDFs of the keywords of the longest subphrase that is common between the query and the document.

    WLCCS is computed very similarly to LCCS, but every "suitable" keyword occurrence increases it by the keyword IDF rather than just by 1 (which is the case with LCS and LCCS). That lets us rank sequences of more rare and important keywords higher than sequences of frequent keywords, even if the latter are longer. For example, a query (Zanzibar bed and breakfast) would yield lccs=1 for a (hotels of Zanzibar) document, but lccs=3 against (London bed and breakfast), even though "Zanzibar" is actually somewhat more rare than the entire "bed and breakfast" phrase. WLCCS factor alleviates that problem by using the keyword frequencies.

  • atc (float). Aggregate Term Closeness. A proximity based measure that grows higher when the document contains more groups of more closely located and more important (rare) query keywords.

    WARNING: you should use ATC with OPTION idf='plain,tfidf_unnormalized' (see below); otherwise you may get unexpected results.

    ATC basically works as follows. For every keyword occurrence in the document, we compute the so called term closeness. For that, we examine all the other closest occurrences of all the query keywords (keyword itself included too) to the left and to the right of the subject occurrence, compute a distance dampening coefficient as k = pow(distance, -1.75) for those occurrences, and sum the dampened IDFs. Thus for every occurrence of every keyword, we get a "closeness" value that describes the "neighbors" of that occurrence. We then multiply those per-occurrence closenesses by their respective subject keyword IDF, sum them all, and finally, compute a logarithm of that sum.

    Or in other words, we process the best (closest) matched keyword pairs in the document, and compute pairwise "closenesses" as the product of their IDFs scaled by the distance coefficient:

      pair_tc = idf(pair_word1) * idf(pair_word2) * pow(pair_distance, -1.75)

    We then sum such closenesses, and compute the final, log-dampened ATC value:

      atc = log(1+sum(pair_tc))

    Note that this final dampening logarithm is exactly the reason you should use OPTION idf=plain, because without it, the expression inside the log() could be negative.

    Having closer keyword occurrences actually contributes much more to ATC than having more frequent keywords. Indeed, when the keywords are right next to each other, distance=1 and k=1; when there is just one word between them, distance=2 and k=0.297; with two words between, distance=3 and k=0.146; and so on. At the same time IDF attenuates somewhat slower. For example, in a 1 million document collection, the IDF values for keywords that match in 10, 100, and 1000 documents would be respectively 0.833, 0.667, and 0.500. So a keyword pair with two rather rare keywords that occur in just 10 documents each but with 2 other words in between would yield pair_tc = 0.101 and thus just barely outweigh a pair with a 100-doc and a 1000-doc keyword with 1 other word between them and pair_tc = 0.099. Moreover, a pair of two unique, 1-doc keywords with 3 words between them would get a pair_tc = 0.088 and lose to a pair of two 1000-doc keywords located right next to each other and yielding a pair_tc = 0.25. So, basically, while ATC does combine both keyword frequency and proximity, it is still somewhat favoring the proximity.
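
Putting the warning above into practice, a sketch that uses ATC in an expression ranker together with the recommended IDF flags:

SELECT * FROM books WHERE MATCH('hello world')
OPTION ranker=expr('sum(atc)*1000 + bm25'), idf='plain,tfidf_unnormalized';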

Ranking factor aggregation functions

A field aggregation function is a single argument function that takes an expression with field-level factors, iterates it over all the matched fields, and computes the final results. Currently implemented field aggregation functions are:

  • sum, sums the argument expression over all matched fields. For instance, sum(1) should return a number of matched fields.
  • top, returns the greatest value of the argument over all matched fields.

Formula expressions for all the built-in rankers

Most of the other rankers can actually be emulated with the expression based ranker. You just need to pass a proper expression. Such emulation is, of course, going to be slower than using the built-in, compiled ranker but still might be of interest if you want to fine-tune your ranking formula starting with one of the existing ones. Also, the formulas define the nitty gritty ranker details in a nicely readable fashion.

  • proximity_bm25 (default ranker) = sum(lcs*user_weight)*1000+bm25
  • bm25 = bm25
  • none = 1
  • wordcount = sum(hit_count*user_weight)
  • proximity = sum(lcs*user_weight)
  • matchany = sum((word_count+(lcs-1)*max_lcs)*user_weight)
  • fieldmask = field_mask
  • sph04 = sum((4*lcs+2*(min_hit_pos==1)+exact_hit)*user_weight)*1000+bm25
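
For example, a sketch that starts from the default proximity_bm25 formula via the expression ranker, which you can then tweak:

SELECT * FROM books WHERE MATCH('hello world')
OPTION ranker=expr('sum(lcs*user_weight)*1000+bm25');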

Configuration of IDF formula

The historically default IDF (Inverse Document Frequency) in Manticore is equivalent to OPTION idf='normalized,tfidf_normalized', and those normalizations may cause several undesired effects.

First, idf=normalized causes keyword penalization. For instance, if you search for 'the | something' and 'the' occurs in more than 50% of the documents, then documents with both keywords 'the' and 'something' will get less weight than documents with just the one keyword 'something'. Using OPTION idf=plain avoids this.

Plain IDF varies in [0, log(N)] range, and keywords are never penalized; while the normalized IDF varies in [-log(N), log(N)] range, and too frequent keywords are penalized.

Second, idf=tfidf_normalized causes IDF drift over queries. Historically, we additionally divided IDF by the query keyword count, so that the entire sum(tf*idf) over all keywords would still fit into the [0,1] range. However, that means that the queries 'word1' and 'word1 | nonmatchingword2' would assign different weights to exactly the same result set, because the IDFs for both word1 and nonmatchingword2 would be divided by 2. OPTION idf='tfidf_unnormalized' fixes that. Note that BM25, BM25A(), BM25F() ranking factors will be scaled accordingly once you disable this normalization.

IDF flags can be mixed; plain and normalized are mutually exclusive; tfidf_unnormalized and tfidf_normalized are mutually exclusive; and unspecified flags in such a mutually exclusive group take their defaults. That means that OPTION idf=plain is equivalent to a complete OPTION idf='plain,tfidf_normalized' specification.