One of the greatest features of Manticore Search is the ability to combine full-text searching with geo location. For example, a retailer can offer a search where a user looks for a product and the result set tells which shop closest to them has the product in stock, so the user can go to the store and pick it up. A travel site can limit results to a certain area and sort them by the distance from a point ('search museums near a hotel', for example).
To perform geo searching, a document needs to contain pairs of latitude/longitude coordinates. The coordinates can be stored as float attributes. If a document has multiple locations, it may be convenient to use a JSON attribute to store the coordinate pairs (see the sketch after the configuration example below).
table myrt
{
...
rt_attr_float = lat
rt_attr_float = lon
...
}
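If you go the JSON route, a single attribute can hold several coordinate pairs per document. A minimal sketch (the attribute name and the field layout inside the JSON are illustrative, not a required format):
table myrt
{
...
# one JSON attribute can store several coordinate pairs per document,
# e.g. {"office": {"lat": 0.659, "lon": -1.2918}, "warehouse": {"lat": 0.661, "lon": -1.2930}}
rt_attr_json = locations
...
}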
The coordinates can be stored as degrees or radians.
To find the distance between two points, the GEODIST() function can be used. GEODIST requires two pairs of coordinates as its first four parameters.
The 5th parameter, in a simplified JSON format, configures certain aspects of the function. By default, GEODIST expects coordinates to be in radians, but in=degrees can be added to accept degrees on input. The coordinates you pass to the function must be in the same unit (degrees or radians) as the ones stored in the table, otherwise the results will be misleading.
The calculated distance is in meters by default, but with the out option it can be converted to kilometers, feet or miles. Lastly, by default a calculation method called adaptive is used. An alternative method based on the haversine algorithm is available; however, it is slower and less precise.
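A sketch of selecting the algorithm explicitly (the method option key is an assumption here, since only the algorithm names are mentioned above):
SELECT *, GEODIST(40.7643929, -73.9997683, lat, lon, {in=degrees, out=miles, method=haversine}) AS distance FROM myindex ORDER BY distance ASC;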
The result of the function - the distance - can be used in the ORDER BY clause to sort the results:
SELECT *,GEODIST(40.7643929, -73.9997683, lat,lon, {in=degrees, out=miles}) AS distance FROM myindex WHERE MATCH('...') ORDER BY distance ASC, WEIGHT() DESC;
Or to limit the results to a radial area around the point:
SELECT *,GEODIST(40.7643929, -73.9997683, lat,lon, {in=degrees, out=miles}) AS distance FROM myindex WHERE MATCH('...') AND distance < 1000 ORDER BY WEIGHT(), distance ASC;
Another geo search functionality is the ability to check whether a location belongs to an area. A special function constructs a polygon object, which is then used by another function that tests whether a set of coordinates is inside it or not.
For creating the polygon two functions are available:
- GEOPOLY2D() - creates a polygon that takes into account the Earth's curvature
- POLY2D() - creates a simple polygon in plain space
POLY2D can be used for geo searches if the area has sides shorter than 500 km (for polygons with 3-4 sides; for polygons with more sides, lower values should be considered). For areas with longer sides the usage of GEOPOLY2D is mandatory to keep the results accurate. GEOPOLY2D also expects coordinates as latitude/longitude pairs in degrees; using radians will produce results in plain space (like POLY2D).
CONTAINS() takes as input a polygon and a set of coordinates and outputs 1 if the point is inside the polygon or 0 otherwise.
SELECT *,CONTAINS(GEOPOLY2D(40.76439, -73.9997, 42.21211, -73.999, 42.21211, -76.123, 40.76439, -76.123), 41.5445, -74.973) AS inside FROM myindex WHERE MATCH('...') AND inside=1;
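For a smaller area (sides well under 500 km), the plain-space POLY2D can be used the same way; a sketch reusing the coordinates above:
SELECT *,CONTAINS(POLY2D(40.76439, -73.9997, 42.21211, -73.999, 42.21211, -76.123, 40.76439, -76.123), 41.5445, -74.973) AS inside FROM myindex WHERE MATCH('...') AND inside=1;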
Percolate queries are also known as Persistent queries, Prospective search, document routing, search in reverse and inverse search.
The normal way of doing searches is to store documents and perform search queries against them. However, there are cases when we want to apply a query to an incoming new document to signal a match. There are some scenarios where this is wanted. For example, a monitoring system doesn't just collect data, but is also expected to notify the user about different events. That can be reaching some threshold for a metric or a certain value appearing in the monitored data. Another similar case is news aggregation. You can notify the user about any fresh news, but the user might want to be notified only about certain categories or topics. Going further, they might be interested only in certain "keywords".
This is where a traditional search is not a good fit, since it would mean performing the desired search over the entire collection, which gets multiplied by the number of users, and we end up with lots of queries running over the entire collection, which can add a lot of extra load. The idea explained in this section is to instead store the queries and test them against an incoming new document or a batch of documents.
Google Alerts, AlertHN, Bloomberg Terminal and other systems that let their users subscribe to something use a similar technology.
- See percolate for how to create a PQ table.
- See Adding rules to a percolate table to learn how to add percolate rules (also known as PQ rules). Here let's just give a quick example.
The key thing you need to remember about percolate queries is that you already have your search queries in the table. What you need to provide is documents to check if any of them match any of the stored rules.
You can perform a percolate query via SQL or JSON interfaces as well as using programming language clients. The SQL way gives more flexibility, while the HTTP one is simpler and provides most of what you need. The below table can help you understand the differences.
Desired behaviour | SQL | HTTP | PHP |
---|---|---|---|
Provide a single document | CALL PQ('tbl', '{doc1}') | query.percolate.document{doc1} | $client->pq()->search([$percolate]) |
Provide a single document (alternative) | CALL PQ('tbl', 'doc1', 0 as docs_json) | - | - |
Provide multiple documents | CALL PQ('tbl', ('doc1', 'doc2'), 0 as docs_json) | query.percolate.documents[{doc1}, {doc2}] | $client->pq()->search([$percolate]) |
Provide multiple documents (alternative) | CALL PQ('tbl', ('{doc1}', '{doc2}')) | - | - |
Provide multiple documents (alternative) | CALL PQ('tbl', '[{doc1}, {doc2}]') | - | - |
Return matching document ids | 0/1 as docs (disabled by default) | Enabled by default | Enabled by default |
Use document's own id to show in the result | 'id field' as docs_id (disabled by default) | Not available | Not available |
Treat input documents as JSON | 1 as docs_json (1 by default) | Enabled by default | Enabled by default |
Treat input documents as plain text | 0 as docs_json (1 by default) | Not available | Not available |
Sparsed distribution mode | default | default | default |
Sharded distribution mode | sharded as mode | Not available | Not available |
Return all info about matching query | 1 as query (0 by default) | Enabled by default | Enabled by default |
Skip invalid JSON | 1 as skip_bad_json (0 by default) | Not available | Not available |
Extended info in SHOW META | 1 as verbose (0 by default) | Not available | Not available |
Define the number which will be added to document ids if no docs_id field is provided (makes sense mostly in distributed PQ modes) | 1 as shift (0 by default) | Not available | Not available |
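For illustration, a percolate request over HTTP following the query.percolate.document structure from the table might look like the sketch below (the endpoint path and the document fields are assumptions and may differ between versions):
POST /pq/tbl/search
{
  "query": {
    "percolate": {
      "document": { "title": "some text to match against the stored rules" }
    }
  }
}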
To demonstrate how it works, here are a few examples. Let's create a PQ table with 2 fields:
- title (text)
- color (string)
and 3 rules in it:
- Just full-text. Query: @title bag
- Full-text and filtering. Query: @title shoes. Filters: color='red'
- Full-text and more complex filtering. Query: @title shoes. Filters: color IN('blue', 'green')
- SQL
CREATE TABLE products(title text, color string) type='pq';
INSERT INTO products(query) values('@title bag');
INSERT INTO products(query,filters) values('@title shoes', 'color=\'red\'');
INSERT INTO products(query,filters) values('@title shoes', 'color in (\'blue\', \'green\')');
select * from products;
+---------------------+--------------+------+---------------------------+
| id | query | tags | filters |
+---------------------+--------------+------+---------------------------+
| 1657852401006149635 | @title shoes | | color IN ('blue, 'green') |
| 1657852401006149636 | @title shoes | | color='red' |
| 1657852401006149637 | @title bag | | |
+---------------------+--------------+------+---------------------------+
The first document doesn't match any rules. It could match the first two, but they require additional filters.
The second document matches one rule. Note that CALL PQ by default expects a document to be a JSON, but if you use 0 as docs_json you can pass a plain string instead.
- SQL
CALL PQ('products', 'Beautiful shoes', 0 as docs_json);
CALL PQ('products', 'What a nice bag', 0 as docs_json);
CALL PQ('products', '{"title": "What a nice bag"}');
+---------------------+
| id |
+---------------------+
| 1657852401006149637 |
+---------------------+
+---------------------+
| id |
+---------------------+
| 1657852401006149637 |
+---------------------+
- SQL
CALL PQ('products', '{"title": "What a nice bag"}', 1 as query);
+---------------------+------------+------+---------+
| id | query | tags | filters |
+---------------------+------------+------+---------+
| 1657852401006149637 | @title bag | | |
+---------------------+------------+------+---------+
Note that via CALL PQ you can provide multiple documents in different ways:
- as an array of plain documents in round brackets ('doc1', 'doc2'). This requires 0 as docs_json
- as an array of JSONs in round brackets ('{doc1}', '{doc2}')
- or as a standard JSON array '[{doc1}, {doc2}]'
- SQL
CALL PQ('products', ('nice pair of shoes', 'beautiful bag'), 1 as query, 0 as docs_json);
CALL PQ('products', ('{"title": "nice pair of shoes", "color": "red"}', '{"title": "beautiful bag"}'), 1 as query);
CALL PQ('products', '[{"title": "nice pair of shoes", "color": "blue"}, {"title": "beautiful bag"}]', 1 as query);
+---------------------+------------+------+---------+
| id | query | tags | filters |
+---------------------+------------+------+---------+
| 1657852401006149637 | @title bag | | |
+---------------------+------------+------+---------+
+---------------------+--------------+------+-------------+
| id | query | tags | filters |
+---------------------+--------------+------+-------------+
| 1657852401006149636 | @title shoes | | color='red' |
| 1657852401006149637 | @title bag | | |
+---------------------+--------------+------+-------------+
+---------------------+--------------+------+---------------------------+
| id | query | tags | filters |
+---------------------+--------------+------+---------------------------+
| 1657852401006149635 | @title shoes | | color IN ('blue, 'green') |
| 1657852401006149637 | @title bag | | |
+---------------------+--------------+------+---------------------------+
- SQL
CALL PQ('products', '[{"title": "nice pair of shoes", "color": "blue"}, {"title": "beautiful bag"}]', 1 as query, 1 as docs);
+---------------------+-----------+--------------+------+---------------------------+
| id | documents | query | tags | filters |
+---------------------+-----------+--------------+------+---------------------------+
| 1657852401006149635 | 1 | @title shoes | | color IN ('blue, 'green') |
| 1657852401006149637 | 2 | @title bag | | |
+---------------------+-----------+--------------+------+---------------------------+
By default, matching document ids correspond to their relative numbers in the list you provide. But in some cases each document already has its own id. For this case there's an option 'id field name' as docs_id for CALL PQ.
Note that if the id cannot be found by the provided field name, the PQ rule will not be shown in the results.
This option is only available for CALL PQ via SQL.
- SQL
CALL PQ('products', '[{"id": 123, "title": "nice pair of shoes", "color": "blue"}, {"id": 456, "title": "beautiful bag"}]', 1 as query, 'id' as docs_id, 1 as docs);
+---------------------+-----------+--------------+------+---------------------------+
| id | documents | query | tags | filters |
+---------------------+-----------+--------------+------+---------------------------+
| 1657852401006149664 | 456 | @title bag | | |
| 1657852401006149666 | 123 | @title shoes | | color IN ('blue, 'green') |
+---------------------+-----------+--------------+------+---------------------------+
If you provide documents as separate JSONs, there is an option for CALL PQ to skip invalid JSONs. In the example below note that in the 2nd and 3rd queries the 2nd JSON is invalid. Without 1 as skip_bad_json the 2nd query fails; adding it in the 3rd query allows avoiding that. This option is not available via JSON over HTTP, as the whole JSON query must always be valid when sent via the HTTP protocol.
- SQL
CALL PQ('products', ('{"title": "nice pair of shoes", "color": "blue"}', '{"title": "beautiful bag"}'));
CALL PQ('products', ('{"title": "nice pair of shoes", "color": "blue"}', '{"title": "beautiful bag}'));
CALL PQ('products', ('{"title": "nice pair of shoes", "color": "blue"}', '{"title": "beautiful bag}'), 1 as skip_bad_json);
+---------------------+
| id |
+---------------------+
| 1657852401006149635 |
| 1657852401006149637 |
+---------------------+
ERROR 1064 (42000): Bad JSON objects in strings: 2
+---------------------+
| id |
+---------------------+
| 1657852401006149635 |
+---------------------+
Percolate queries were made with high throughput and big data volumes in mind, so there are a few things you can do to optimize performance if you are looking for lower latency and higher throughput.
There are two modes of distribution of a percolate table and, accordingly, two ways a percolate query can work against it:
- Sparsed. Default. When it is good: many documents, PQ tables are mirrored. The batch of documents you pass will be split into parts according to the number of agents, so each of the nodes will receive and process only a part of the documents from your request. It is beneficial when your set of documents is quite big, but the set of queries stored in the PQ table is quite small. Assuming that all the hosts are mirrors, Manticore will split your set of documents and distribute the chunks among the mirrors. Once the agents are done with the queries, it will collect and merge all the results and return the final result set as if it came from one solid table. You can use replication to help the process.
- Sharded. When it is good: many PQ rules, the rules are split among PQ tables. The whole set of documents will be broadcast to all tables of the distributed PQ table without any initial split of the documents. It is beneficial when you push a relatively small set of documents, but the number of stored queries is huge. So in this case it is more appropriate to store just a part of the PQ rules on each node and then merge the results returned from the nodes that process one and the same set of documents against different sets of PQ rules. This mode has to be set explicitly, since, first of all, it implies multiplication of network payload and, secondly, it expects tables with different PQ rules, which replication cannot do out of the box.
Let's assume you have a table pq_d2 which is defined as:
table pq_d2
{
type = distributed
agent = 127.0.0.1:6712:pq
agent = 127.0.0.1:6712:ptitle
}
Each of 'pq' and 'ptitle' contains:
- SQL
SELECT * FROM pq;
+------+-------------+------+-------------------+
| id | query | tags | filters |
+------+-------------+------+-------------------+
| 1 | filter test | | gid>=10 |
| 2 | angry | | gid>=10 OR gid<=3 |
+------+-------------+------+-------------------+
2 rows in set (0.01 sec)
And you fire CALL PQ to the distributed table with a couple of docs.
- SQL
CALL PQ ('pq_d2', ('{"title":"angry test", "gid":3 }', '{"title":"filter test doc2", "gid":13}'), 1 AS docs);
+------+-----------+
| id | documents |
+------+-----------+
| 1 | 2 |
| 2 | 1 |
+------+-----------+
That was an example of the default sparsed mode. To demonstrate the sharded mode let's create a distributed PQ table consisting of 2 local PQ tables and add 2 documents to "products1" and 1 document to "products2":
create table products1(title text, color string) type='pq';
create table products2(title text, color string) type='pq';
create table products_distributed type='distributed' local='products1' local='products2';
INSERT INTO products1(query) values('@title bag');
INSERT INTO products1(query,filters) values('@title shoes', 'color=\'red\'');
INSERT INTO products2(query,filters) values('@title shoes', 'color in (\'blue\', \'green\')');
Now if you add 'sharded' as mode to CALL PQ it will send the documents to all the agents' tables (in this case just local tables, but they can be remote to utilize external hardware). This mode is not available via the JSON interface.
- SQL
CALL PQ('products_distributed', ('{"title": "nice pair of shoes", "color": "blue"}', '{"title": "beautiful bag"}'), 'sharded' as mode, 1 as query);
+---------------------+--------------+------+---------------------------+
| id | query | tags | filters |
+---------------------+--------------+------+---------------------------+
| 1657852401006149639 | @title bag | | |
| 1657852401006149643 | @title shoes | | color IN ('blue, 'green') |
+---------------------+--------------+------+---------------------------+
Note that the syntax of agent mirrors in the configuration (when several hosts are assigned to one agent line, separated with |) has nothing to do with the CALL PQ query mode, so each agent always represents one node regardless of the number of HA mirrors specified for this agent.
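For example, an agent line with two HA mirrors (host/port values here are illustrative) is still treated as a single node by CALL PQ:
agent = 127.0.0.1:6712:pq|127.0.0.1:6713:pq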
In some cases you might want to get more details about the performance of a percolate query. For that purpose there is the option 1 as verbose, which is available only via SQL and allows saving more performance metrics. You can see them via the SHOW META query, which you can run after CALL PQ. See SHOW META for more info.
- 1 as verbose
CALL PQ('products', ('{"title": "nice pair of shoes", "color": "blue"}', '{"title": "beautiful bag"}'), 1 as verbose); show meta;
+---------------------+
| id |
+---------------------+
| 1657852401006149644 |
| 1657852401006149646 |
+---------------------+
+-------------------------+-----------+
| Name | Value |
+-------------------------+-----------+
| Total | 0.000 sec |
| Setup | 0.000 sec |
| Queries matched | 2 |
| Queries failed | 0 |
| Document matched | 2 |
| Total queries stored | 3 |
| Term only queries | 3 |
| Fast rejected queries | 0 |
| Time per query | 27, 10 |
| Time of matched queries | 37 |
+-------------------------+-----------+
Autocomplete (or word completion) is a feature in which an application predicts the rest of a word a user is typing. On websites it's used in search boxes, where a user starts to type a word and a dropdown with suggestions pops up so the user can select the ending from the list.
There are a few ways you can do autocomplete in Manticore:
To autocomplete a sentence you can use infixed search. You can find endings of a document's field by providing its beginning and:
- using full-text operators * to match anything it substitutes
- and optionally ^ to start from the beginning of the field
- and perhaps "" for phrase matching
- and optionally highlight the results so you don't have to fetch them in full to your application
There is an article about it in our blog and an interactive course. A quick example is:
- Let's assume you have a document: My cat loves my dog. The cat (Felis catus) is a domestic species of small carnivorous mammal.
- Then you can use ^, "" and * so that as the user is typing you make queries like: ^"m*", ^"my *", ^"my c*", ^"my ca*" and so on
- It will find the document, and if you also do highlighting you will get something like: <strong>My cat</strong> loves my dog. The cat ( ...
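Putting it together, a minimal sketch in SQL (the table name articles and the field content are assumptions; infix indexing needs to be enabled, e.g. via min_infix_len, so the wildcard can expand word parts):
CREATE TABLE articles(content text) min_infix_len='2';
INSERT INTO articles(content) VALUES('My cat loves my dog. The cat (Felis catus) is a domestic species of small carnivorous mammal.');
SELECT id, HIGHLIGHT() FROM articles WHERE MATCH('^"my ca*"');
As the user keeps typing, you simply re-run the last query with the longer prefix and show the highlighted snippet in the dropdown.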
In some cases all you need is just to autocomplete a single word or a couple of words. In this case you can use CALL KEYWORDS.
CALL KEYWORDS is available via the SQL interface and provides a way to check how keywords are tokenized or to retrieve the tokenized forms of particular keywords. If the table enables infixes, it allows you to quickly find possible endings for given keywords, which makes it suitable for autocomplete.
This is a good alternative to general infixed search as it provides higher performance, since all it needs to work is the table's dictionary, not the documents themselves.
CALL KEYWORDS(text, table [, options])
The CALL KEYWORDS statement splits text into keywords. It returns tokenized and normalized forms of the keywords and, optionally, keyword statistics. It also returns the position of each keyword in the query and all forms of tokenized keywords in case the table enables lemmatizers.
Parameter | Description |
---|---|
text | Text to break down to keywords |
table | Name of the table from which to take the text processing settings |
0/1 as stats | Show statistics of keywords, default is 0 |
0/1 as fold_wildcards | Fold wildcards, default is 0 |
0/1 as fold_lemmas | Fold morphological lemmas, default is 0 |
0/1 as fold_blended | Fold blended words, default is 0 |
N as expansion_limit | Override expansion_limit defined in the server configuration, default is 0 (use value from the configuration) |
docs/hits as sort_mode | Sort output results by either 'docs' or 'hits'. Default no sorting |
The examples below show how it works, assuming the user is trying to get an autocomplete for "my cat ...". So on the application side all you need to do is suggest to the user the endings from the column "normalized" for each new word. It often makes sense to sort by hits or docs using 'hits' as sort_mode or 'docs' as sort_mode.
- Examples
MySQL [(none)]> CALL KEYWORDS('m*', 't', 1 as stats);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | m* | my | 1 | 2 |
| 1 | m* | mammal | 1 | 1 |
+------+-----------+------------+------+------+
MySQL [(none)]> CALL KEYWORDS('my*', 't', 1 as stats);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | my* | my | 1 | 2 |
+------+-----------+------------+------+------+
MySQL [(none)]> CALL KEYWORDS('c*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+-------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+-------------+------+------+
| 1 | c* | cat | 1 | 2 |
| 1 | c* | carnivorous | 1 | 1 |
| 1 | c* | catus | 1 | 1 |
+------+-----------+-------------+------+------+
MySQL [(none)]> CALL KEYWORDS('ca*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+-------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+-------------+------+------+
| 1 | ca* | cat | 1 | 2 |
| 1 | ca* | carnivorous | 1 | 1 |
| 1 | ca* | catus | 1 | 1 |
+------+-----------+-------------+------+------+
MySQL [(none)]> CALL KEYWORDS('cat*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | cat* | cat | 1 | 2 |
| 1 | cat* | catus | 1 | 1 |
+------+-----------+------------+------+------+
There is a nice trick to improve the above algorithm - use bigram_index. When it is enabled for the table, the table indexes not just single words, but also each pair of adjacent words as a separate token.
This allows predicting not just the current word's ending, but the next word too, which is especially beneficial for autocomplete.
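A minimal sketch of enabling it when creating a table (the value all is an assumption based on common bigram_index settings; min_infix_len is kept so wildcard expansion still works):
CREATE TABLE t(content text) min_infix_len='2' bigram_index='all';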
- Examples
MySQL [(none)]> CALL KEYWORDS('m*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | m* | my | 1 | 2 |
| 1 | m* | mammal | 1 | 1 |
| 1 | m* | my cat | 1 | 1 |
| 1 | m* | my dog | 1 | 1 |
+------+-----------+------------+------+------+
MySQL [(none)]> CALL KEYWORDS('my*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | my* | my | 1 | 2 |
| 1 | my* | my cat | 1 | 1 |
| 1 | my* | my dog | 1 | 1 |
+------+-----------+------------+------+------+
MySQL [(none)]> CALL KEYWORDS('c*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+--------------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+--------------------+------+------+
| 1 | c* | cat | 1 | 2 |
| 1 | c* | carnivorous | 1 | 1 |
| 1 | c* | carnivorous mammal | 1 | 1 |
| 1 | c* | cat felis | 1 | 1 |
| 1 | c* | cat loves | 1 | 1 |
| 1 | c* | catus | 1 | 1 |
| 1 | c* | catus is | 1 | 1 |
+------+-----------+--------------------+------+------+
MySQL [(none)]> CALL KEYWORDS('ca*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+--------------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+--------------------+------+------+
| 1 | ca* | cat | 1 | 2 |
| 1 | ca* | carnivorous | 1 | 1 |
| 1 | ca* | carnivorous mammal | 1 | 1 |
| 1 | ca* | cat felis | 1 | 1 |
| 1 | ca* | cat loves | 1 | 1 |
| 1 | ca* | catus | 1 | 1 |
| 1 | ca* | catus is | 1 | 1 |
+------+-----------+--------------------+------+------+
MySQL [(none)]> CALL KEYWORDS('cat*', 't', 1 as stats, 'hits' as sort_mode);
+------+-----------+------------+------+------+
| qpos | tokenized | normalized | docs | hits |
+------+-----------+------------+------+------+
| 1 | cat* | cat | 1 | 2 |
| 1 | cat* | cat felis | 1 | 1 |
| 1 | cat* | cat loves | 1 | 1 |
| 1 | cat* | catus | 1 | 1 |
| 1 | cat* | catus is | 1 | 1 |
+------+-----------+------------+------+------+
CALL KEYWORDS supports distributed tables, so no matter how big your data set is, you can benefit from using it.
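For instance, a hypothetical distributed table combining two local tables (the table names below are illustrative) can be queried in exactly the same way:
CREATE TABLE t_dist type='distributed' local='t1' local='t2';
CALL KEYWORDS('ca*', 't_dist', 1 as stats, 'hits' as sort_mode);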