Ranker plugins

Ranker plugins let you implement a custom ranker that receives all the occurrences of the keywords matched in the document, and computes a WEIGHT() value. They can be called as follows:

SELECT id, attr1 FROM test WHERE match('hello') OPTION ranker=myranker('option1=1');
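
Before the ranker can be referenced this way, the plugin library has to be loaded into the server. In SphinxQL this is done with the CREATE PLUGIN statement (the library name below is hypothetical):

CREATE PLUGIN myranker TYPE 'ranker' SONAME 'mylib.so'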

The call workflow is as follows:

  1. XXX_init() gets called once per query per index, in the very beginning. A few query-wide options are passed to it through a SPH_RANKER_INIT structure, including the user options string (in the example just above, "option1=1" is that string).
  2. XXX_update() gets called multiple times per matched document, with every matched keyword occurrence passed as its parameter, a SPH_RANKER_HIT structure. The occurrences within each document are guaranteed to be passed in the order of ascending hit->hit_pos values.
  3. XXX_finalize() gets called once per matched document, once there are no more keyword occurrences. It must return the WEIGHT() value. This is the only mandatory function.
  4. XXX_deinit() gets called once per query, in the very end.
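
To make the workflow concrete, here is a minimal sketch of a ranker plugin that simply counts keyword occurrences and returns that count as WEIGHT(). Only the structures and the call order above are given by this manual; the exact function signatures are an assumption on our part and should be verified against the sphinxudf.h header shipped with your version:

#include "sphinxudf.h"
#include <stdlib.h>

/* called once per query per index; init->options holds the user options
   string ("option1=1" in the example above); returns zero on success */
int myranker_init ( void ** userdata, SPH_RANKER_INIT * init, char * error_message )
{
    *userdata = calloc ( 1, sizeof(int) );       /* per-query hit counter */
    return 0;
}

/* called for every matched keyword occurrence, in ascending hit->hit_pos order */
void myranker_update ( void * userdata, SPH_RANKER_HIT * hit )
{
    ++*(int*)userdata;                           /* just count the occurrences */
}

/* the only mandatory function; must return WEIGHT() for the current document */
unsigned int myranker_finalize ( void * userdata, int match_weight )
{
    int * count = (int*)userdata;
    unsigned int weight = (unsigned int)*count;
    *count = 0;                                  /* reset for the next document */
    return weight;
}

/* called once per query, in the very end */
int myranker_deinit ( void * userdata )
{
    free ( userdata );
    return 0;
}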

Token filter plugins

Token filter plugins let you implement a custom tokenizer that makes tokens according to custom rules. There are two types: an index-time tokenizer and a query-time tokenizer.

In the text processing pipeline, the token filters run after the base tokenizer (which processes the text from a field or query and creates tokens out of it).

Index-time tokenizer

The index-time tokenizer gets created by indexer when indexing source data into an index, or by an RT index when processing INSERT or REPLACE statements.

The plugin is declared as library name:plugin name:optional string of settings. The plugin's init function can accept arbitrary settings passed as a string in the format option1=value1;option2=value2;...

Example:

index_token_filter = my_lib.so:email_process:field=email;split=.io

The call workflow for index-time token filter is as follows:

  1. XXX_init() gets called twice: right after indexer creates the token filter, with an empty fields list, and then again once indexer has received the index schema, with the actual fields list. It must return zero for successful initialization or an error description otherwise.
  2. XXX_begin_document gets called once per document, but only when an RT index processes INSERT/REPLACE. It must return zero for a successful call or an error description otherwise. Additional parameters/settings can be passed to the function using OPTION token_filter_options:
    INSERT INTO rt (id, title) VALUES (1, 'some text [email protected]') OPTION token_filter_options='.io'
  3. XXX_begin_field gets called once for each field, prior to the field being processed with the base tokenizer, with the field number as its parameter.
  4. XXX_push_token gets called once for each new token produced by the base tokenizer, with the source token as its parameter. It must return the token, the count of extra tokens made by the token filter, and the position delta for the token.
  5. XXX_get_extra_token gets called multiple times whenever XXX_push_token reports extra tokens. It must return the token and the position delta for that extra token.
  6. XXX_end_field gets called once, right after the source tokens from the current field have been exhausted.
  7. XXX_deinit gets called at the very end of indexing.

The following functions are mandatory to define: XXX_begin_document, XXX_push_token, and XXX_get_extra_token.
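
For illustration, here is a pass-through skeleton of an index-time token filter; the plugin name matches the email_process declaration example above. The exact signatures are our assumption and should be checked against the sphinxudf.h header of your version:

#include <stddef.h>

/* called twice: first with an empty fields list, then with the actual schema;
   "options" receives the settings string ("field=email;split=.io" above) */
int email_process_init ( void ** userdata, int num_fields, const char ** field_names,
    const char * options, char * error_message )
{
    *userdata = NULL;   /* no per-plugin state needed in this sketch */
    return 0;           /* zero means success */
}

/* RT INSERT/REPLACE only; "options" carries OPTION token_filter_options */
int email_process_begin_document ( void * userdata, const char * options, char * error_message )
{
    return 0;
}

/* receives the number of the field about to be tokenized */
void email_process_begin_field ( void * userdata, int field_index )
{
}

/* must return the token, the count of extra tokens, and the position delta */
char * email_process_push_token ( void * userdata, char * token, int * extra, int * delta )
{
    *extra = 0;     /* this sketch never produces extra tokens */
    *delta = 1;     /* advance the position by one */
    return token;   /* pass the source token through unchanged */
}

/* only called while push_token reports extra tokens, so never in this sketch */
char * email_process_get_extra_token ( void * userdata, int * delta )
{
    return NULL;
}

void email_process_end_field ( void * userdata ) {}
void email_process_deinit ( void * userdata ) {}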

Query-time tokenizer

The query-time tokenizer gets created at search time, each time full-text search is invoked, by every index involved.

The call workflow for query-time token filter is as follows:

  1. XXX_init() gets called once per index, prior to parsing the query, with two parameters: the maximum token length and the string set by the token_filter option
    SELECT * FROM index WHERE MATCH ('test') OPTION token_filter='my_lib.so:query_email_process:io'

    It must return zero for successful initialization or an error description otherwise.

  2. XXX_push_token() gets called once for each new token produced by the base tokenizer, with these parameters: the token produced by the base tokenizer, a pointer to the raw token in the source query string, and the raw token length. It must return the token and the position delta for the token.
  3. XXX_pre_morph() gets called once per token, right before the token gets passed to the morphology processor, with a reference to the token and a stopword flag. It may set the stopword flag to mark the token as a stopword.
  4. XXX_post_morph() gets called once per token, after the token has been processed by the morphology processor, with a reference to the token and a stopword flag. It may set the stopword flag to mark the token as a stopword. It must return a flag; a non-zero value means that the token as it was prior to morphology processing should be used.
  5. XXX_deinit() gets called at the very end of query processing.

The absence of any of these functions is tolerated.
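
Again for illustration, here is a pass-through skeleton of a query-time token filter, with the plugin name taken from the OPTION example above. As before, the signatures are our assumption and should be verified against sphinxudf.h:

#include <stddef.h>

/* called once per index before query parsing; receives the maximum token
   length and the string set by the token_filter option ("io" above) */
int query_email_process_init ( void ** userdata, int max_token_len,
    const char * options, char * error_message )
{
    *userdata = NULL;
    return 0;           /* zero means success */
}

/* receives the tokenizer output plus a pointer to, and the length of, the raw
   token in the source query string; returns the token and the position delta */
char * query_email_process_push_token ( void * userdata, char * token, int * delta,
    const char * raw_token, int raw_len )
{
    *delta = 1;
    return token;       /* pass the token through unchanged */
}

/* may set *stopword = 1 to drop the token before morphology runs */
void query_email_process_pre_morph ( void * userdata, char * token, int * stopword )
{
}

/* non-zero return means: use the token as it was prior to morphology */
int query_email_process_post_morph ( void * userdata, char * token, int * stopword )
{
    return 0;
}

void query_email_process_deinit ( void * userdata )
{
}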

Miscellaneous tools

indextool

indextool is a helper tool used to dump miscellaneous information about a physical index. The general usage is:

indextool <command> [options]

Options effective for all commands:

  • --config <file> (-c <file> for short) overrides the built-in config file names.
  • --quiet (-q for short) keeps indextool quiet; it will not output a banner, etc.
  • --help (-h for short) lists all of the parameters available in your particular build of indextool.
  • -v shows the version information of your particular build of indextool.

The commands are as follows:

  • --checkconfig just loads and verifies the config file, checking that it is valid and free of syntax errors.
  • --buildidf DICTFILE1 [DICTFILE2 ...] --out IDFILE builds an IDF file from one or several dictionary dumps. The additional parameter -skip-uniq will skip unique (df=1) words.
  • --build-infixes INDEXNAME builds infixes for an existing dict=keywords index (upgrades .sph, .spi in place). You can use this option for legacy index files that already use dict=keywords but now need to support infix searching too; updating the index files with indextool may prove easier or faster than regenerating them from scratch with indexer.
  • --dumpheader FILENAME.sph quickly dumps the provided index header file without touching any other index files or even the configuration file. The report provides a breakdown of all the index settings, in particular the entire attribute and field list.
  • --dumpconfig FILENAME.sph dumps the index definition from the given index header file in (almost) compliant sphinx.conf file format.
  • --dumpheader INDEXNAME dumps the index header by index name, looking up the header path in the configuration file.
  • --dumpdict INDEXNAME dumps the index dictionary. The additional -stats switch will add the total number of documents to the dictionary dump; this is required for dictionary files that are used for creation of IDF files.
  • --dumpdocids INDEXNAME dumps document IDs by index name.
  • --dumphitlist INDEXNAME KEYWORD dumps all the hits (occurrences) of a given keyword in a given index, with keyword specified as text.
  • --dumphitlist INDEXNAME --wordid ID dumps all the hits (occurrences) of a given keyword in a given index, with keyword specified as internal numeric ID.
  • --fold INDEXNAME OPTFILE This option is useful to see how the tokenizer actually processes input. You can feed indextool with text from a file, if specified, or from stdin otherwise. The output will contain spaces instead of separators (according to your charset_table settings) and lowercased letters in words.
  • --htmlstrip INDEXNAME filters stdin using HTML stripper settings for a given index, and prints the filtering results to stdout. Note that the settings will be taken from sphinx.conf, and not the index header.
  • --mergeidf NODE1.idf [NODE2.idf ...] --out GLOBAL.idf merges several .idf files into a single one. The additional parameter -skip-uniq will skip unique (df=1) words.
  • --morph INDEXNAME applies morphology to the text given on stdin and prints the result to stdout.
  • --check INDEXNAME checks the index data files for consistency errors that might be introduced either by bugs in indexer and/or hardware faults. --check also works on RT indexes, RAM and disk chunks.
  • --check-disk-chunk CHUNK_NAME checks only a specific disk chunk of an RT index. The argument is the numeric extension of the RT index disk chunk.
  • --strip-path strips the path names from all the file names referenced from the index (stopwords, wordforms, exceptions, etc). This is useful for checking indexes built on another machine with possibly different path layouts.
  • --rotate works only with --check and defines whether to check the index waiting for rotation, i.e. the one with the .new extension. This is useful when you want to check your index before actually using it.
  • --apply-killlists loads and applies kill-lists for all indexes listed in the config file. Changes are saved in .SPM files. Kill-list files (.SPK) are deleted. This can be useful if you want to move the application of kill-lists from server startup to the indexing stage.
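
For example, to validate the configuration and then check an index that has just been rebuilt and is waiting for rotation (the index name below is hypothetical):

indextool --config /path/to/sphinx.conf --checkconfig
indextool --config /path/to/sphinx.conf --check myindex --rotate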

spelldump

spelldump is used to extract the contents of a dictionary file that uses the ispell or MySpell format, which can help build word lists for wordforms; all of the possible forms are pre-built for you.

The general usage is:

spelldump [options] <dictionary> <affix> [result] [locale-name]

The two main parameters are the dictionary's main file and its affix file; usually these are named [language-prefix].dict and [language-prefix].aff, and are available with most common Linux distributions as well as from various places online.

[result] specifies where the dictionary data should be output to, and [locale-name] additionally specifies the locale details you wish to use.

There is an additional option, -c [file], which specifies a file for case conversion details.

Examples of its usage are:

spelldump en.dict en.aff
spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
spelldump ru.dict ru.aff ru.txt .1251

The results file will contain a list of all the words in the dictionary in alphabetical order, output in the format of a wordforms file, which you can use to customize for your specific circumstances. An example of the result file:

zone > zone
zoned > zoned
zoning > zoning

wordbreaker

wordbreaker is used to split compound words, such as those commonly found in URLs, into their component words. For example, this tool can split "lordoftherings" into its four component words, or http://manofsteel.warnerbros.com into "man of steel warner bros". This helps searching, without requiring prefixes or infixes: searching for "sphinx" wouldn't match "sphinxsearch", but if you break the compound word and index the separate components, you'll get a match without the larger index files that prefix and infix indexing require.

Examples of its usage are:

echo manofsteel | bin/wordbreaker -dict dict.txt split
man of steel

The input stream will be separated into words using the -dict dictionary file. If no dictionary is specified, wordbreaker looks in the working folder for a wordbreaker-dict.txt file. (The dictionary should match the language of the compound words.) The split command breaks words from the standard input and outputs the result to the standard output. There are also test and bench commands that let you test the splitting quality and benchmark the splitting functionality.

Wordbreaker needs a dictionary to recognize individual substrings within a string. To differentiate between different guesses, it uses the relative frequency of each word in the dictionary: higher frequency means higher split probability. You can generate such a file using the indexer tool:

indexer --buildstops dict.txt 100000 --buildfreqs myindex -c /path/to/sphinx.conf

which will write the 100,000 most frequent words, along with their counts, from myindex into dict.txt. The output file is a text file, so you can edit it by hand, if need be, to add or remove words.
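
As described above, the dictionary lists each word along with its count; a plausible excerpt might look like this (the layout and the numbers are purely illustrative):

the 3527
of 1912
man 430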