Morphology preprocessors can be applied to words during indexing to normalize different forms of the same word and improve segmentation. For example, an English stemmer can normalize "dogs" and "dog" to "dog", resulting in identical search results for both keywords.
Manticore has four built-in morphology preprocessors:
- Lemmatizer: reduces a word to its root or lemma. For example, "running" can be reduced to "run" and "octopi" can be reduced to "octopus". Note that some words may have multiple corresponding root forms. For example, "dove" can be either the past tense of "dive" or a noun meaning a bird, as in "A white dove flew over the cuckoo's nest." In this case, a lemmatizer can generate all the possible root forms.
- Stemmer: reduces a word to its stem by removing or replacing certain known suffixes. The resulting stem may not necessarily be a valid word. For example, the Porter English stemmer reduces "running" to "run", "business" to "busi" (not a valid word), and does not reduce "octopi" at all.
- Phonetic algorithms: replace words with phonetic codes, so that different words that sound alike receive the same code and match each other.
- Word breaking algorithms: split text into words. Currently available only for Chinese.
morphology = morphology1[, morphology2, ...]
The morphology directive specifies a list of morphology preprocessors to apply to the words being indexed. This is an optional setting, with the default being no preprocessor applied.
Manticore comes with built-in morphological preprocessors for:
- English, Russian, and German lemmatizers
- English, Russian, Arabic, and Czech stemmers
- SoundEx and MetaPhone phonetic algorithms
- Chinese word breaking algorithm
- Snowball (libstemmer) stemmers for more than 15 other languages
Lemmatizers require dictionary .pak files that can be downloaded from the Manticore website. The dictionaries need to be put in the directory specified by lemmatizer_base. Additionally, the lemmatizer_cache setting can be used to speed up lemmatization by spending more RAM on an uncompressed dictionary cache.
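A minimal configuration sketch (the directory path here is illustrative) pointing the server at the downloaded dictionaries:
common {
    lemmatizer_base = /usr/local/share/manticore/dicts
}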
Chinese segmentation can be done using ICU or Jieba. Both libraries provide more accurate segmentation than n-grams, but are slightly slower. The charset_table must include all Chinese characters, which can be done by using the cont, cjk, or chinese character sets. When you set morphology=icu_chinese or morphology=jieba_chinese, the documents are first pre-processed by ICU or Jieba. Then the tokenizer processes the result according to the charset_table, and finally any other processors from the morphology option are applied. Only the parts of the text that contain Chinese are passed to ICU/Jieba for segmentation; the other parts can be modified by other means, such as different morphologies or the charset_table.
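For example, a table prepared for Chinese with ICU segmentation might be created along these lines (a sketch; the table name and column are illustrative):
CREATE TABLE products_zh(title text) charset_table = 'chinese' morphology = 'icu_chinese'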
Built-in English and Russian stemmers are faster than their libstemmer counterparts, but may produce slightly different results.
The Soundex implementation matches that of MySQL. The Metaphone implementation is based on the Double Metaphone algorithm and indexes the primary code.
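As a quick illustration, assuming a hypothetical table phonetic_demo created with morphology = 'soundex', CALL KEYWORDS shows how similar-sounding keywords collapse to the same code:
CALL KEYWORDS('smith smyth', 'phonetic_demo');
Both keywords should normalize to the same SOUNDEX code, so a search for one will match documents containing the other.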
To use the morphology option, specify one or more of the built-in options, including:
- none: do not perform any morphology processing
- lemmatize_ru - apply Russian lemmatizer and pick a single root form
- lemmatize_uk - apply Ukrainian lemmatizer and pick a single root form (install it first on CentOS or Ubuntu/Debian). For the lemmatizer to work correctly, make sure the specific Ukrainian characters are preserved in your charset_table, since by default they are not. To do so, override them like this: charset_table='non_cont,U+0406->U+0456,U+0456,U+0407->U+0457,U+0457,U+0490->U+0491,U+0491'. Here is an interactive course on how to install and use the Ukrainian lemmatizer.
- lemmatize_en - apply English lemmatizer and pick a single root form
- lemmatize_de - apply German lemmatizer and pick a single root form
- lemmatize_ru_all - apply Russian lemmatizer and index all possible root forms
- lemmatize_uk_all - apply Ukrainian lemmatizer and index all possible root forms. Find the installation links above and take care of the charset_table.
- lemmatize_en_all - apply English lemmatizer and index all possible root forms
- lemmatize_de_all - apply German lemmatizer and index all possible root forms
- stem_en - apply Porter's English stemmer
- stem_ru - apply Porter's Russian stemmer
- stem_enru - apply Porter's English and Russian stemmers
- stem_cz - apply Czech stemmer
- stem_ar - apply Arabic stemmer
- soundex - replace keywords with their SOUNDEX code
- metaphone - replace keywords with their METAPHONE code
- icu_chinese - apply Chinese text segmentation using ICU
- jieba_chinese - apply Chinese text segmentation using Jieba
- libstemmer_* - refer to the list of supported languages for details
Multiple stemmers can be specified, separated by commas. They will be applied to incoming words in the order they are listed, and processing will stop once one of the stemmers modifies the word. Additionally, when the wordforms feature is enabled, the word will be looked up in the word forms dictionary first. If there is a matching entry in the dictionary, stemmers will not be applied at all. Wordforms can be used to implement stemming exceptions.
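For example, wordforms can protect a word that a stemmer would otherwise distort, such as "business", which the Porter English stemmer reduces to "busi". A sketch, assuming a hypothetical file /usr/local/manticore/data/wordforms.txt containing the single mapping business > business:
CREATE TABLE products(title text, price float) morphology = 'stem_en' wordforms = '/usr/local/manticore/data/wordforms.txt'
Because the wordforms lookup happens first, "business" will be indexed intact instead of being stemmed.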
CREATE TABLE products(title text, price float) morphology = 'stem_en, libstemmer_sv'
morphology_skip_fields = field1[, field2, ...]
A list of fields for which morphology preprocessing should be skipped. Optional; default is empty (apply preprocessors to all fields).
CREATE TABLE products(title text, name text, price float) morphology_skip_fields = 'name' morphology = 'stem_en'
min_stemming_len = length
Minimum word length at which to enable stemming. Optional, default is 1 (stem everything).
Stemmers are not perfect and might sometimes produce undesired results. For instance, running the keyword "gps" through the Porter stemmer for English results in "gp", which is not really the intent. The min_stemming_len feature lets you suppress stemming based on the source word length, i.e., to avoid stemming too-short words. Keywords shorter than the given threshold will not be stemmed. Note that keywords that are exactly as long as the threshold will be stemmed, so to avoid stemming 3-character keywords, you should specify 4 as the value. For more fine-grained control, refer to the wordforms feature.
CREATE TABLE products(title text, price float) min_stemming_len = '4' morphology = 'stem_en'
index_exact_words = {0|1}
This option allows indexing of the original keywords along with their morphologically modified versions. However, original keywords that are remapped by wordforms and exceptions cannot be indexed. The default value is 0, meaning the feature is disabled by default.
This allows the use of the exact form operator in the query language. Enabling this feature will increase the full-text index size and indexing time, but will not impact search performance.
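For example, with a table like the one below, the exact form operator = matches the original keyword rather than its stemmed variant (the keyword is illustrative):
SELECT * FROM products WHERE MATCH('=running');
Without index_exact_words = 1, only the stem "run" would be stored, and the original form "running" would not be present in the index.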
CREATE TABLE products(title text, price float) index_exact_words = '1' morphology = 'stem_en'
jieba_hmm = {0|1}
Enable or disable HMM in the Jieba segmentation tool. Optional; the default is 1.
In Jieba, the HMM (Hidden Markov Model) option refers to an algorithm used for word segmentation. Specifically, it allows Jieba to perform Chinese word segmentation by recognizing unknown words, especially those not present in its dictionary.
Jieba primarily uses a dictionary-based method for segmenting known words, but when the HMM option is enabled, it applies a statistical model to identify probable word boundaries for words or phrases that are not in its dictionary. This is particularly useful for segmenting new or rare words, names, and slang.
In summary, the jieba_hmm option helps improve segmentation accuracy at the expense of indexing performance. It must be used with morphology = jieba_chinese; see Chinese, Japanese, Korean (CJK), and Thai languages.
CREATE TABLE products(title text, price float) morphology = 'jieba_chinese' jieba_hmm = '0'
jieba_mode = {accurate|full|search}
Jieba segmentation mode. Optional; the default is accurate.
In accurate mode, Jieba splits the sentence into the most precise words using dictionary matching. This mode focuses on precision, ensuring that the segmentation is as accurate as possible.
In full mode, Jieba tries to split the sentence into every possible word combination, aiming to include all potential words. This mode focuses on maximizing recall, meaning it identifies as many words as possible, even if some of them overlap or are less commonly used. It returns all the words found in its dictionary.
In search mode, Jieba breaks the text into both whole words and smaller parts, combining precise segmentation with extra detail by providing overlapping word fragments. This mode balances precision and recall, making it useful for search engines.
jieba_mode should be used with morphology = jieba_chinese. See Chinese, Japanese, Korean (CJK), and Thai languages.
CREATE TABLE products(title text, price float) morphology = 'jieba_chinese' jieba_mode = 'full'
jieba_user_dict_path = path/to/user/dict/file
Path to the Jieba user dictionary. Optional.
Jieba, a Chinese text segmentation library, uses dictionary files to assist with word segmentation. The format of these dictionary files is as follows: each line contains a word, split into three parts separated by spaces — the word itself, word frequency, and part of speech (POS) tag. The word frequency and POS tag are optional and can be omitted. The dictionary file must be UTF-8 encoded.
Example:
创新办 3 i
云计算 5
凱特琳 nz
台中
jieba_user_dict_path should be used with morphology = jieba_chinese. For more details, see Chinese, Japanese, Korean (CJK), and Thai languages.
CREATE TABLE products(title text, price float) morphology = 'jieba_chinese' jieba_user_dict_path = '/usr/local/manticore/data/user-dict.txt'
html_strip = {0|1}
This option determines whether HTML markup should be stripped from the incoming full-text data. The default value is 0, which disables stripping. To enable stripping, set the value to 1.
Both HTML tags and entities are considered markup and will be processed.
HTML tags are removed, while the contents between them (e.g. everything between <p> and </p>) are left intact. You can choose to keep and index tag attributes (e.g. the HREF attribute in an A tag or ALT in an IMG tag). Several well-known inline tags, such as A, B, I, S, U, BASEFONT, BIG, EM, FONT, IMG, LABEL, SMALL, SPAN, STRIKE, STRONG, SUB, SUP, and TT, are completely removed. All other tags are treated as block-level and are replaced with whitespace. For example, the text te<strong>st</strong> will be indexed as a single keyword 'test', while te<p>st</p> will be indexed as two keywords 'te' and 'st'.
HTML entities are decoded and replaced with their corresponding UTF-8 characters. The stripper supports both numeric forms (e.g. &#239;) and text forms (e.g. &oacute; or &nbsp;) of entities, and supports all entities specified by the HTML4 standard.
The stripper is designed to work with properly formed HTML and XHTML, but may produce unexpected results on malformed input (such as HTML with stray <'s or unclosed >'s).
Please note that only the tags themselves, as well as HTML comments, are stripped. To strip the contents of the tags, including embedded scripts, see the html_remove_elements option. There are no restrictions on tag names, meaning that everything that looks like a valid tag start, end, or comment will be stripped.
CREATE TABLE products(title text, price float) html_strip = '1'
html_index_attrs = img=alt,title; a=title;
The html_index_attrs option allows you to specify which HTML markup attributes should be indexed even though other HTML markup is stripped. The default value is empty, meaning no attributes will be indexed. The format of the option is a per-tag enumeration of indexable attributes, as demonstrated in the example above. The contents of the specified attributes will be retained and indexed, providing a way to extract additional information from your full-text data.
CREATE TABLE products(title text, price float) html_index_attrs = 'img=alt,title; a=title;' html_strip = '1'
html_remove_elements = element1[, element2, ...]
A list of HTML elements whose contents, along with the elements themselves, will be stripped. Optional, the default is an empty string (do not strip contents of any elements).
This option allows you to remove the contents of elements, meaning everything between the opening and closing tags. It is useful for removing embedded scripts, CSS, etc. The short tag form for empty elements (e.g. <br />) is properly supported, and the text following such a tag will not be removed.
The value is a comma-separated list of element (tag) names, the contents of which should be removed. Tag names are case-insensitive.
CREATE TABLE products(title text, price float) html_remove_elements = 'style, script' html_strip = '1'
index_sp = {0|1}
Controls detection and indexing of sentence and paragraph boundaries. Optional, default is 0 (no detection or indexing).
This directive enables the detection and indexing of sentence and paragraph boundaries, making it possible for the SENTENCE and PARAGRAPH operators to work. Sentence boundary detection is based on plain text analysis and only requires setting index_sp = 1 to enable it. Paragraph detection, however, relies on HTML markup and occurs during the [HTML stripping process](../../Creating_a_table/NLP_and_tokenization/Advanced_HTML_tokenization.md#html_strip). As such, to index paragraph boundaries, both the index_sp directive and the html_strip directive must be set to 1.
The following rules are used to determine sentence boundaries:
- Question marks (?) and exclamation marks (!) always indicate a sentence boundary.
- Trailing dots (.) indicate a sentence boundary, except in the following cases:
  - When followed by a letter. This is considered part of an abbreviation (e.g. "S.T.A.L.K.E.R." or "Goldman Sachs S.p.A.").
  - When followed by a comma. This is considered an abbreviation followed by a comma (e.g. "Telecom Italia S.p.A., founded in 1994").
  - When followed by a space and a lowercase letter. This is considered an abbreviation within a sentence (e.g. "News Corp. announced in February").
  - When preceded by a space and an uppercase letter, and followed by a space. This is considered a middle initial (e.g. "John D. Doe").
Paragraph boundaries are detected at every block-level HTML tag, including: ADDRESS, BLOCKQUOTE, CAPTION, CENTER, DD, DIV, DL, DT, H1, H2, H3, H4, H5, LI, MENU, OL, P, PRE, TABLE, TBODY, TD, TFOOT, TH, THEAD, TR, and UL.
Both sentences and paragraphs increment the keyword position counter by 1.
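For example, once boundaries are indexed as in the example below, the SENTENCE operator matches only when its arguments occur within the same sentence (the keywords are illustrative):
SELECT * FROM products WHERE MATCH('wireless SENTENCE charger');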
CREATE TABLE products(title text, price float) index_sp = '1' html_strip = '1'
index_zones = h*, th, title
A list of HTML/XML zones within a field to be indexed. The default is an empty string (no zones will be indexed).
A "zone" is defined as everything between an opening and a matching closing tag, and all spans sharing the same tag name are referred to as a "zone." For example, everything between <H1>
and </H1>
in a document field belongs to the H1 zone.
The index_zones directive enables zone indexing, but the HTML stripper must also be enabled (by setting html_strip = 1). The value of index_zones should be a comma-separated list of tag names and wildcards (ending with a star) to be indexed as zones.
Zones can be nested and overlap, as long as every opening tag has a matching closing tag. Zones can also be used for matching with the ZONE operator, as described in the extended_query_syntax.
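For example, with the title zone indexed as in the example below, the ZONE operator limits matching to that zone (the keyword is illustrative):
SELECT * FROM products WHERE MATCH('ZONE:(title) hello');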
CREATE TABLE products(title text, price float) index_zones = 'h, th, title' html_strip = '1'
Manticore allows for the creation of distributed tables, which act like regular plain or real-time tables, but are actually a collection of child tables used for searching. When a query is sent to a distributed table, it is distributed among all tables in the collection. The server then collects and processes the responses to sort and recalculate values of aggregates, if necessary.
From the client's perspective, it appears as if they are querying a single table.
Distributed tables can be composed of any combination of tables, including:
- Local storage tables (plain table and Real-Time)
- Remote tables
- A combination of local and remote tables
- Percolate tables (local, remote, or a combination)
- Single local and multiple remote tables, or any other combination
Mixing percolate and template tables with plain and real-time tables is not recommended.
A distributed table is defined as type 'distributed' in the configuration file, or via the SQL statement CREATE TABLE:
table foo {
type = distributed
local = bar
local = bar1, bar2
agent = 127.0.0.1:9312:baz
agent = host1|host2:tbl
agent = host1:9301:tbl1|host2:tbl2 [ha_strategy=random retry_count=10]
...
}
CREATE TABLE distributed_index type='distributed' local='local_index' agent='127.0.0.1:9312:remote_table'
The essence of a distributed table lies in its list of child tables, to which it points. There are two types of child tables in a distributed table:
- Local tables: These are tables served within the same server as the distributed table. To enumerate them, use the local = syntax. You can list several local tables using multiple local = lines, or combine them into a single comma-separated list.
- Remote tables: These are tables served anywhere outside the server. To enumerate them, use the agent = syntax. Each line represents one endpoint, or agent. Each agent can have multiple external locations and options defining how it should work. More details here. Note that the server does not have any information about the type of table it is working with, which may lead to errors if, for example, you issue a CALL PQ to a remote table 'foo' that is not a percolate table.
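Once defined, a distributed table is queried like any other table. A minimal sketch using the distributed_index created above (the keyword is illustrative):
SELECT * FROM distributed_index WHERE MATCH('keyword');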