NOTE: The integration with Filebeat requires Manticore Buddy. If it doesn't work, make sure Buddy is installed.
Filebeat is a lightweight shipper for forwarding and centralizing log data. Once installed as an agent, it monitors the log files or locations you specify, collects log events, and forwards them for indexing, usually to Elasticsearch or Logstash.
Manticore now also supports the use of Filebeat as a processing pipeline. This allows the collected and transformed data to be sent to Manticore just as it would be sent to Elasticsearch. Currently, all Filebeat versions >= 7.10 are supported.
Below is a Filebeat config to work with our example dpkg log:
filebeat.inputs:
- type: filestream
  id: example
  paths:
    - /var/log/dpkg.log

output.elasticsearch:
  hosts: ["http://localhost:9308"]
  index: "dpkg_log"
  allow_older_versions: true

setup.ilm:
  enabled: false

setup.template:
  name: "dpkg_log"
  pattern: "dpkg_log"
Note that Filebeat versions higher than 8.10 have the output compression feature enabled by default. That is why the `compression_level: 0` option must be added to the configuration file to provide compatibility with Manticore:
filebeat.inputs:
- type: filestream
  id: example
  paths:
    - /var/log/dpkg.log

output.elasticsearch:
  hosts: ["http://localhost:9308"]
  index: "dpkg_log"
  allow_older_versions: true
  compression_level: 0

setup.ilm:
  enabled: false

setup.template:
  name: "dpkg_log"
  pattern: "dpkg_log"
Once you run Filebeat with this configuration, log data will be sent to Manticore and properly indexed. Here is the resulting schema of the table created by Manticore and an example of the inserted document:
mysql> DESCRIBE dpkg_log;
+------------+--------+----------------+
| Field      | Type   | Properties     |
+------------+--------+----------------+
| id         | bigint |                |
| @timestamp | text   | indexed stored |
| message    | text   | indexed stored |
| log        | json   |                |
| input      | json   |                |
| ecs        | json   |                |
| host       | json   |                |
| agent      | json   |                |
+------------+--------+----------------+
mysql> SELECT * FROM dpkg_log LIMIT 1\G
*************************** 1. row ***************************
id: 7280000849080753116
@timestamp: 2023-06-16T09:27:38.792Z
message: 2023-04-12 02:06:08 status half-installed libhogweed5:amd64 3.5.1+really3.5.1-2
input: {"type":"filestream"}
ecs: {"version":"1.6.0"}
host: {"name":"logstash-db848f65f-lnlf9"}
agent: {"ephemeral_id":"587c2ebc-e7e2-4e27-b772-19c611115996","id":"2e3d985b-3610-4b8b-aa3b-2e45804edd2c","name":"logstash-db848f65f-lnlf9","type":"filebeat","version":"7.10.0","hostname":"logstash-db848f65f-lnlf9"}
log: {"offset":80,"file":{"path":"/var/log/dpkg.log"}}
NOTE: this functionality requires Manticore Buddy. If it doesn't work, make sure Buddy is installed.
Manticore Search can seamlessly consume messages from a Kafka broker, allowing for real-time data indexing and search.
To get started, you need to:
- Define the source: Specify the Kafka topic from which Manticore Search will read messages. This setup includes details like the broker’s host, port, and topic name.
- Set up the destination table: Choose a Manticore real-time table to store the incoming Kafka data.
- Create a materialized view: Set up a materialized view (`mv`) to handle data transformation and mapping from Kafka to the destination table in Manticore Search. Here, you'll define field mappings, data transformations, and any filters or conditions for the incoming data stream.
The `source` configuration allows you to define the broker, topic list, consumer group, and the message structure. Define the schema using Manticore field types like `int`, `float`, `text`, `json`, etc.
CREATE SOURCE <source name> [(column type, ...)] [source_options]
All schema keys are case-insensitive, so `Products`, `products`, and `PrOdUcTs` are treated the same. They are all converted to lowercase.
- SQL
CREATE SOURCE kafka
(id bigint, term text, abbrev text, GlossDef json)
type='kafka'
broker_list='kafka:9092'
topic_list='my-data'
consumer_group='manticore'
num_consumers='2'
batch=50
Query OK, 2 rows affected (0.02 sec)
Option | Accepted Values | Description |
---|---|---|
`type` | kafka | Sets the source type. Currently, only `kafka` is supported |
`broker_list` | host:port [, ...] | Specifies Kafka broker URLs |
`topic_list` | string [, ...] | Lists Kafka topics to consume from |
`consumer_group` | string | Defines the Kafka consumer group; defaults to `manticore` |
`num_consumers` | int | Number of consumers to handle messages |
`batch` | int | Number of messages to process before moving on. Default is `100`; otherwise, remaining messages are processed on timeout |
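Based on the accepted values above, several brokers and topics can presumably be listed comma-separated. A sketch (the extra host and topic names below are hypothetical):

CREATE SOURCE kafka_multi
(id bigint, term text, abbrev text, GlossDef json)
type='kafka'
broker_list='kafka1:9092,kafka2:9092'
topic_list='my-data,my-other-data'
consumer_group='manticore'
num_consumers='2'
batch=50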
The destination table is a regular real-time table where the results of Kafka message processing are stored. This table should be defined to match the schema requirements of the incoming data and optimized for the query performance needs of your application. Read more about creating real-time tables here.
- SQL
CREATE TABLE destination_kafka
(id bigint, name text, short_name text, received_at text, size multi);
Query OK, 0 rows affected (0.02 sec)
A materialized view enables data transformation from Kafka messages. You can rename fields, apply Manticore Search functions, and perform sorting, grouping, and other data operations.
A materialized view acts as a query that moves data from the Kafka source to the destination table, letting you use Manticore Search syntax to customize these queries. Make sure that the fields in the `SELECT` match those in the source.
CREATE MATERIALIZED VIEW <materialized view name>
TO <destination table name> AS
SELECT [column|function [as <new name>], ...] FROM <source name>
- SQL
CREATE MATERIALIZED VIEW view_table
TO destination_kafka AS
SELECT id, term as name, abbrev as short_name,
UTC_TIMESTAMP() as received_at, GlossDef.size as size FROM kafka
Query OK, 2 rows affected (0.02 sec)
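Once messages have been consumed, the destination table can be queried like any other real-time table; for example (the returned values depend on the data in your Kafka topic):

SELECT name, short_name, received_at, size FROM destination_kafka LIMIT 3;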
Data is transferred from Kafka to Manticore Search in batches, which are cleared after each run. For calculations across batches, such as AVG, use caution, as these may not work as expected due to batch-by-batch processing.
Here's a mapping table based on the examples above:
Kafka | Source | Buffer | MV | Destination |
---|---|---|---|---|
id | id | id | id | id |
term | term | term | term as name | name |
unnecessary key | - | - | | |
abbrev | abbrev | abbrev | abbrev as short_name | short_name |
- | - | - | `UTC_TIMESTAMP()` as received_at | received_at |
GlossDef | GlossDef | GlossDef | GlossDef.size as size | size |
To view sources and materialized views in Manticore Search, use these commands:
- `SHOW SOURCES`: Lists all configured sources.
- `SHOW MVS`: Lists all materialized views.
- `SHOW MV view_table`: Shows detailed information on a specific materialized view.
- SQL
SHOW SOURCES
+-------+
| name  |
+-------+
| kafka |
+-------+
- SQL
SHOW SOURCE kafka;
+--------+---------------------------------------------------------+
| Source | Create Table |
+--------+---------------------------------------------------------+
| kafka | CREATE SOURCE kafka |
| | (id bigint, term text, abbrev text, GlossDef json) |
| | type='kafka' |
| | broker_list='kafka:9092' |
| | topic_list='my-data' |
| | consumer_group='manticore' |
| | num_consumers='2' |
| | batch=50 |
+--------+---------------------------------------------------------+
- SQL
SHOW MVS
+------------+
| name       |
+------------+
| view_table |
+------------+
- SQL
SHOW MV view_table
+------------+--------------------------------------------------------------------------------------------------------+-----------+
| View | Create Table | suspended |
+------------+--------------------------------------------------------------------------------------------------------+-----------+
| view_table | CREATE MATERIALIZED VIEW view_table TO destination_kafka AS | 0 |
| | SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size | |
| | FROM kafka | |
+------------+--------------------------------------------------------------------------------------------------------+-----------+
You can suspend data consumption by altering materialized views. If you remove the `source` without deleting the MV, it automatically suspends. After recreating the source, unsuspend the MV manually using the `ALTER` command.
Currently, only materialized views can be altered. To change `source` parameters, drop and recreate the source.
- SQL
ALTER MATERIALIZED VIEW view_table suspended=1
Query OK (0.02 sec)
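To resume consumption later (for example, after recreating the source), the same statement can presumably be issued with the flag set back to 0:

ALTER MATERIALIZED VIEW view_table suspended=0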
Kafka offsets are committed after each batch or when processing times out. If the process stops unexpectedly during a materialized view query, you may see duplicate entries. To avoid this, include an `id` field in your schema, allowing Manticore Search to prevent duplicates in the table.
- Worker initialization: After configuring a source and materialized view, Manticore Search sets up a dedicated worker to handle data ingestion from Kafka.
- Message mapping: Messages are mapped according to the source configuration schema, transforming them into a structured format.
- Batching: Messages are grouped into batches for efficient processing. Batch size can be adjusted to suit your performance and latency needs.
- Buffering: Mapped data batches are stored in a buffer table for efficient bulk operations.
- Materialized view processing: The view logic is applied to data in the buffer table, performing any transformations or filtering.
- Data transfer: Processed data is then transferred to the destination real-time table.
- Cleanup: The buffer table is cleared after each batch, ensuring it’s ready for the next set of data.
NOTE: The integration with DBeaver requires Manticore Buddy. If it doesn't work, make sure Buddy is installed.
DBeaver is a SQL client software application and a database administration tool. For MySQL databases, it uses the JDBC application programming interface to interact with them via a JDBC driver.
Manticore allows you to use DBeaver to work with data stored in Manticore tables the same way as if it were stored in a MySQL database.
To start working with Manticore in DBeaver, follow these steps:
- Choose the `New database connection` option in DBeaver's UI
- Choose `SQL` -> `MySQL` as DBeaver's database driver
- Set the `Server host` and `Port` options corresponding to the host and port of your Manticore instance (keep the `database` field empty)
- Set `root/<empty password>` as authentication credentials
Since Manticore does not fully support MySQL, only a part of DBeaver's functionality is available when working with Manticore.
You will be able to:
- View, create, delete, and rename tables
- Add and drop table columns
- Insert, delete, and update column data
You will not be able to:
- Use database integrity check mechanisms (`MyISAM` will be set as the only storage engine available)
- Use MySQL procedures, triggers, events, etc.
- Manage database users
- Set other database administration options
Some MySQL data types are not currently supported by Manticore and, therefore, cannot be used when creating a new table with DBeaver. Also, a few of the supported data types are converted to the most similar Manticore types with type precision being ignored in such conversion. Below is the list of supported MySQL data types as well as the Manticore types they are mapped to:
- `BIGINT UNSIGNED` => `bigint`
- `BOOL` => `boolean`
- `DATE`, `DATETIME`, `TIMESTAMP` => `timestamp`
- `FLOAT` => `float`
- `INT` => `int`
- `INT UNSIGNED`, `SMALLINT UNSIGNED`, `TINYINT UNSIGNED`, `BIT` => `uint`
- `JSON` => `json`
- `TEXT`, `LONGTEXT`, `MEDIUMTEXT`, `TINYTEXT`, `BLOB`, `LONGBLOB`, `MEDIUMBLOB`, `TINYBLOB` => `text`
- `VARCHAR`, `LONG VARCHAR`, `BINARY`, `CHAR`, `VARBINARY`, `LONG VARBINARY` => `string`
You can find more details about Manticore data types here.
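As an illustration of this mapping, a table created from DBeaver with MySQL types (the table and column names below are hypothetical) ends up with the corresponding Manticore types from the list above:

CREATE TABLE products (id BIGINT UNSIGNED, title TEXT, price FLOAT, meta JSON);
-- in Manticore: id => bigint, title => text, price => float, meta => json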
Manticore is able to handle the `DATE`, `DATETIME`, and `TIMESTAMP` data types; however, this requires Manticore Buddy to be enabled. Otherwise, an attempt to operate with one of these types will result in an error.
Note that the `TIME` type is not supported.
- DBeaver's `Preferences` -> `Connections` -> `Client identification` option must not be turned off or overridden. To work correctly with DBeaver, Manticore needs to distinguish its requests from others. For this, it uses client notification info sent by DBeaver in request headers. Disabling client notification will break that detection and, therefore, Manticore's correct functionality.
- When trying to update data in your table for the first time, you'll see the `No unique key` popup message and will be asked to define a custom unique key. When you get this message, perform the following steps:
  - Choose the `Custom Unique Key` option
  - Choose only the `id` column in the columns list
  - Press `Ok`

After that, you'll be able to update your data safely.
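For instance, once `id` is set as the unique key, an edit made in DBeaver's data editor is issued as an ordinary UPDATE keyed on `id`. A sketch with hypothetical table and column names:

UPDATE products SET price = 10.5 WHERE id = 1;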