Token filter plugins

Token filter plugins allow you to implement a custom tokenizer that creates tokens according to custom rules. There are two types: index-time token filters and query-time token filters.

In the text processing pipeline, token filters run after the base tokenizer, which processes the text from fields or queries and creates tokens out of it.

Index-time tokenizer

The index-time tokenizer is created by indexer when indexing source data into a table, or by an RT table when processing INSERT or REPLACE statements.

The plugin is declared as library name:plugin name:optional string of settings. The init functions of the plugin can accept arbitrary settings, passed as a string in the format option1=value1;option2=value2;...

Example:

index_token_filter = my_lib.so:email_process:field=email;split=.io

The call workflow for index-time token filter is as follows:

  1. XXX_init() gets called right after indexer creates the token filter, first with an empty fields list and then again after indexer receives the table schema with the actual fields list. It must return zero for successful initialization or an error description otherwise.
  2. XXX_begin_document gets called only for RT table INSERT/REPLACE for every document. It must return zero for a successful call or an error description otherwise. Using OPTION token_filter_options, additional parameters/settings can be passed to the function.
    INSERT INTO rt (id, title) VALUES (1, 'some text user@mailbox.io') OPTION token_filter_options='.io'
  3. XXX_begin_field gets called once for each field prior to processing the field with the base tokenizer, with the field number as its parameter.
  4. XXX_push_token gets called once for each new token produced by the base tokenizer, with the source token as its parameter. It must return the token, count of extra tokens made by the token filter, and delta position for the token.
  5. XXX_get_extra_token gets called multiple times in case XXX_push_token reports extra tokens. It must return the token and delta position for that extra token.
  6. XXX_end_field gets called once right after the source tokens from the current field are processed.
  7. XXX_deinit gets called at the very end of indexing.

The following functions must be defined: XXX_begin_document, XXX_push_token, and XXX_get_extra_token.
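
As an illustration, here is a minimal sketch of what such a plugin might look like in C. The function names follow the workflow above (with my_plugin in place of XXX); the exact prototypes are defined by the plugin header shipped with Manticore, so the signatures below are an assumption and should be checked against that header before building a real plugin.

// Minimal index-time token filter sketch (assumed prototypes).
// Build as a shared library, e.g.: gcc -fPIC -shared -o my_lib.so my_lib.c
#include <stddef.h>

// Called with an empty fields list first, then again with the actual schema.
int my_plugin_init ( void ** userdata, int num_fields, const char ** field_names,
                     const char * options, char * error_message )
{
    (void)userdata; (void)num_fields; (void)field_names; (void)options; (void)error_message;
    return 0; // zero means successful initialization
}

// Called per document on RT INSERT/REPLACE; options carries OPTION token_filter_options.
int my_plugin_begin_document ( void * userdata, const char * options, char * error_message )
{
    (void)userdata; (void)options; (void)error_message;
    return 0;
}

// Called once per field before the base tokenizer processes it.
void my_plugin_begin_field ( void * userdata, int field_index )
{
    (void)userdata; (void)field_index;
}

// Called for every token from the base tokenizer; returns the (possibly rewritten)
// token and reports how many extra tokens follow and the position delta.
char * my_plugin_push_token ( void * userdata, char * token, int * extra, int * delta )
{
    (void)userdata;
    *extra = 0;   // this sketch produces no extra tokens
    *delta = 1;   // advance the position by one, as usual
    return token; // pass the token through unchanged
}

// Called repeatedly while push_token reports extra tokens.
char * my_plugin_get_extra_token ( void * userdata, int * delta )
{
    (void)userdata; (void)delta;
    return NULL;
}

// Called after the current field's tokens have been processed.
void my_plugin_end_field ( void * userdata ) { (void)userdata; }

// Called at the very end of indexing.
void my_plugin_deinit ( void * userdata ) { (void)userdata; }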

Query-time token filter

A query-time tokenizer is created at search time, each time the full-text part of a query is invoked, for every table involved.

The call workflow for query-time token filter is as follows:

  1. XXX_init() gets called once per table prior to parsing the query, with two parameters: the maximum token length and the string set by the token_filter option
    SELECT * FROM index WHERE MATCH ('test') OPTION token_filter='my_lib.so:query_email_process:io'

    It must return zero for successful initialization or an error description otherwise.

  2. XXX_push_token() gets called once for each new token produced by the base tokenizer with parameters: token produced by the base tokenizer, pointer to raw token at source query string, and raw token length. It must return the token and delta position for the token.
  3. XXX_pre_morph() gets called once for the token right before it gets passed to the morphology processor with a reference to the token and stopword flag. It might set the stopword flag to mark the token as a stopword.
  4. XXX_post_morph() gets called once for the token after it is processed by the morphology processor with a reference to the token and stopword flag. It might set the stopword flag to mark the token as a stopword. It must return a flag, the non-zero value of which means to use the token prior to morphology processing.
  5. XXX_deinit() gets called at the very end of query processing.

Any of these functions may be omitted.
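
For the query-time side, a corresponding sketch in C might look roughly like the following. As above, the prototypes are assumptions derived from the parameter descriptions in this section (the plugin name query_email_process matches the SELECT example above) and should be verified against the plugin header shipped with Manticore.

// Query-time token filter sketch (assumed prototypes).
#include <stddef.h>

// Called once per table before query parsing, with the maximum token length
// and the string from OPTION token_filter. Zero means successful initialization.
int query_email_process_init ( void ** userdata, int max_token_len,
                               const char * options, char * error_message )
{
    (void)userdata; (void)max_token_len; (void)options; (void)error_message;
    return 0;
}

// Called for every token from the base tokenizer, together with a pointer to the
// raw token in the source query string and its length; returns the token and
// sets the position delta.
char * query_email_process_push_token ( void * userdata, char * token,
                                         const char * raw_token, int raw_token_len,
                                         int * delta )
{
    (void)userdata; (void)raw_token; (void)raw_token_len;
    *delta = 1;
    return token;
}

// Called right before morphology; may set the stopword flag.
void query_email_process_pre_morph ( void * userdata, char * token, int * stopword )
{
    (void)userdata; (void)token; (void)stopword;
}

// Called right after morphology; may set the stopword flag. A non-zero return
// value tells the engine to use the token as it was before morphology.
int query_email_process_post_morph ( void * userdata, char * token, int * stopword )
{
    (void)userdata; (void)token; (void)stopword;
    return 0;
}

// Called at the very end of query processing.
void query_email_process_deinit ( void * userdata ) { (void)userdata; }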

Miscellaneous tools

indextool

indextool is a helpful utility that extracts various information about a physical table, excluding template or distributed tables. Here's the general syntax for utilizing indextool:

indextool <command> [options]

Options

These options are applicable to all commands:

  • --config <file> (-c <file> for short) lets you override the default configuration file names.
  • --quiet (-q for short) suppresses the output of banners and such by indextool.
  • --help (-h for short) displays all parameters available in your specific build of indextool.
  • -v displays the version information of your specific indextool build.

Commands

Here are the available commands:

  • --checkconfig loads and verifies the config file, checking its validity and for any syntax errors.
  • --buildidf DICTFILE1 [DICTFILE2 ...] --out IDFILE constructs an IDF file from one or more dictionary dumps (refer to --dumpdict). The additional parameter --skip-uniq will omit unique words (df=1).
  • --build-infixes TABLENAME generates infixes for a pre-existing dict=keywords table (updates .sph, .spi in place). Use this option for legacy table files already employing dict=keywords, but now requiring infix search support; updating the table files with indextool may be simpler or quicker than recreating them from scratch with indexer.
  • --dumpheader FILENAME.sph promptly dumps the given table header file without disturbing any other table files or even the config file. The report offers a detailed view of all the table settings, especially the complete attribute and field list.
  • --dumpconfig FILENAME.sph extracts the table definition from the specified table header file in an (almost) manticore.conf file-compliant format.
  • --dumpheader TABLENAME dumps table header by table name while searching for the header path in the config file.
  • --dumpdict TABLENAME dumps the dictionary. An extra -stats switch will add the total document count to the dictionary dump. This is necessary for dictionary files used in IDF file creation.
  • --dumpdocids TABLENAME dumps document IDs by table name.
  • --dumphitlist TABLENAME KEYWORD dumps all instances (occurrences) of a specified keyword in a given table, with the keyword defined as text.
  • --dumphitlist TABLENAME --wordid ID dumps all instances (occurrences) of a specific keyword in a given table, with the keyword represented as an internal numeric ID.
  • --docextract TBL DOCID executes a standard table check pass of the entire dictionary/docs/hits, and gathers all the words and hits associated with the requested document. Subsequently, all the words are arranged according to their fields and positions, and the result is printed, grouped by field.
  • --fold TABLENAME OPTFILE helps you understand how the tokenizer processes input. You can supply indextool with text from a file, if specified, or from stdin otherwise. The output replaces separators with spaces (based on your charset_table settings) and converts letters in words to lowercase.
  • --htmlstrip TABLENAME applies the HTML stripper settings for a specified table to filter stdin, and sends the filtering results to stdout. Be aware that the settings will be fetched from manticore.conf, and not from the table header.
  • --mergeidf NODE1.idf [NODE2.idf ...] --out GLOBAL.idf combines multiple .idf files into a single one. The extra parameter --skip-uniq will ignore unique words (df=1).
  • --morph TABLENAME applies morphology to the given stdin and directs the result to stdout.
  • --check TABLENAME evaluates the table data files for consistency errors that could be caused by bugs in indexer or hardware faults. --check is also functional on RT tables, RAM, and disk chunks. Additional options:
    • --check-id-dups checks for duplicate document ids
    • --check-disk-chunk CHUNK_NAME checks only a specific disk chunk of an RT table. The argument is the numeric extension of the RT table's disk chunk to be checked.
  • --strip-path removes the path names from all file names referred to from the table (stopwords, wordforms, exceptions, etc). This is helpful when verifying tables built on a different machine with possibly varying path layouts.
  • --rotate is only compatible with --check and determines whether to check the table waiting for rotation, i.e., with a .new extension. This is useful when you wish to validate your table before actually putting it into use.
  • --apply-killlists loads and applies kill-lists for all tables listed in the config file. Changes are saved in .SPM files. Kill-list files (.SPK) are removed. This can be handy if you want to shift the application of tables from server startup to indexing stage.
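
As a usage illustration, a routine consistency check and a header dump of a plain table might look like the lines below; the table name mytable is hypothetical, and the configuration path is just an example of an explicitly specified config file.

indextool --check mytable -c /etc/manticoresearch/manticore.conf
indextool --dumpheader mytable -c /etc/manticoresearch/manticore.conf --quiet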

spelldump

The spelldump command is designed to retrieve the contents from a dictionary file that employs the ispell or MySpell format. This can be handy when you need to compile word lists for wordforms, as it generates all possible forms for you.

Here's the general syntax:

spelldump [options] <dictionary> <affix> [result] [locale-name]

The primary parameters are the main file and the affix file of the dictionary. Typically, these are named as [language-prefix].dict and [language-prefix].aff, respectively. You can find these files in most standard Linux distributions or from numerous online sources.

The [result] parameter is where the extracted dictionary data will be stored, and [locale-name] is the parameter used to specify the locale details of your choice.

There's an optional -c [file] option as well. This option allows you to specify a file for case conversion details.

Here are some usage examples:

spelldump en.dict en.aff
spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
spelldump ru.dict ru.aff ru.txt .1251

The resulting file will list all the words from the dictionary, arranged alphabetically and formatted like a wordforms file. You can then modify this file as per your specific requirements. Here's a sample of what the output file might look like:

zone > zone
zoned > zoned
zoning > zoning

wordbreaker

The wordbreaker tool is designed to deconstruct compound words, a common feature in URLs, into their individual components. For instance, it can dissect "lordoftherings" into four separate words or break down http://manofsteel.warnerbros.com into "man of steel warner bros". This ability enhances search functionality by eliminating the need for prefixes or infixes. To illustrate, a search for "sphinx" wouldn't yield "sphinxsearch" in the results. However, if you apply wordbreaker to disassemble the compound word and index the detached elements, a search will be successful without the file size expansion associated with prefix or infix usage in full-text indexing.

Here are some examples of how to use wordbreaker:

echo manofsteel | bin/wordbreaker -dict dict.txt split
man of steel

The dictionary file given with -dict is used to separate the input stream into individual words. If no dictionary file is specified, wordbreaker looks for a file named wordbreaker-dict.txt in the current working directory. (Ensure that the dictionary file matches the language of the compound words you're working with.) The split command breaks words from standard input and sends the results to standard output. The test and bench commands are also available to assess the splitting quality and measure the performance of the splitting function, respectively.

Wordbreaker uses a dictionary to identify individual substrings within a given string. To distinguish between multiple potential splits, it considers the relative frequency of each word in the dictionary. A higher frequency indicates a higher likelihood for a word split. To generate a file of this nature, you can use the indexer tool:

indexer --buildstops dict.txt 100000 --buildfreqs myindex -c /path/to/manticore.conf

which will produce a text file named dict.txt that contains the 100,000 most frequently occurring words from myindex, along with their respective counts. Since this output file is a simple text document, you have the flexibility to manually edit it whenever needed. Feel free to add or remove words as required.

OpenAPI specification

The Manticore Search API is documented using the OpenAPI specification, which can be used to generate client SDKs. The machine-readable YAML file is available at https://raw.githubusercontent.com/manticoresoftware/openapi/master/manticore.yml

You can also view the specification visualized in the online Swagger Editor.
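
For example, one way to generate a client SDK from that YAML file is with the open-source openapi-generator tool; this is an assumption about your toolchain (any OpenAPI-compatible generator should work), and the target language and output directory below are placeholders.

openapi-generator-cli generate \
    -i https://raw.githubusercontent.com/manticoresoftware/openapi/master/manticore.yml \
    -g typescript-fetch \
    -o ./manticore-client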