Deleting documents is only supported in RT mode for the following table types:
- Real-time tables
- Percolate tables
- Distributed tables that only contain RT tables as local or remote agents.
You can delete existing documents from a table based on either their ID or certain conditions. Bulk deletion is also available for removing multiple documents at once.
Documents can be deleted via both the SQL and JSON interfaces.
For SQL, the response to a successful operation indicates the number of rows deleted.
For JSON, the json/delete endpoint is used. The server responds with a JSON object indicating whether the operation was successful and the number of rows deleted.
To delete all documents from a table, it is recommended to use table truncation instead of deletion, as it is a much faster operation.
In this example, we delete all documents that match the full-text query 'test document' from the table named test:
- SQL
- JSON
- PHP
- Python
- javascript
- Java
- C#
- TypeScript
- Go
mysql> SELECT * FROM TEST;
+------+------+-------------+------+
| id | gid | mva1 | mva2 |
+------+------+-------------+------+
| 100 | 1000 | 100,201 | 100 |
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 103 | 1003 | 103,204 | 103 |
| 104 | 1004 | 104,204,205 | 104 |
| 105 | 1005 | 105,206 | 105 |
| 106 | 1006 | 106,207 | 106 |
| 107 | 1007 | 107,208 | 107 |
+------+------+-------------+------+
8 rows in set (0.00 sec)
mysql> DELETE FROM TEST WHERE MATCH ('test document');
Query OK, 2 rows affected (0.00 sec)
mysql> SELECT * FROM TEST;
+------+------+-------------+------+
| id | gid | mva1 | mva2 |
+------+------+-------------+------+
| 100 | 1000 | 100,201 | 100 |
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 103 | 1003 | 103,204 | 103 |
| 104 | 1004 | 104,204,205 | 104 |
| 105 | 1005 | 105,206 | 105 |
+------+------+-------------+------+
6 rows in set (0.00 sec)
{
  "_index": "test",
  "deleted": 2
}
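The JSON response above comes from the json/delete endpoint. A request producing it might look roughly like the sketch below, assuming the endpoint accepts the same table and query fields used in the /bulk example later in this section (in some versions the field is named index instead of table):
POST /delete
{
  "table": "test",
  "query": {
    "match": { "*": "test document" }
  }
}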
Here we delete the document with id equal to 1 from the table named test:
- SQL
- JSON
- PHP
- Python
- javascript
- Java
- C#
- TypeScript
- Go
mysql> DELETE FROM TEST WHERE id=1;
Query OK, 1 rows affected (0.00 sec)
{
"_index": "test",
"_id": 1,
"found": true,
"result": "deleted"
}
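A corresponding JSON request for deleting by ID might look like this sketch, with the same assumptions about the endpoint and field names as in the previous JSON example:
POST /delete
{
  "table": "test",
  "id": 1
}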
Here, documents whose id matches one of the listed values are deleted from the table named test:
Note that the delete forms with id=N or id IN (X,Y) are the fastest, as they delete documents without performing a search.
Also note that the response contains only the id of the first deleted document in the corresponding _id field.
- SQL
- JSON
- PHP
DELETE FROM TEST WHERE id IN (1,2);
Query OK, 2 rows affected (0.00 sec)
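Via the JSON interface, one way to delete several specific IDs in a single call is to send multiple delete statements through the /bulk endpoint described later in this section; a sketch:
POST /bulk
{ "delete" : { "table" : "test", "id" : 1 } }
{ "delete" : { "table" : "test", "id" : 2 } }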
Manticore SQL allows you to use complex conditions in the DELETE statement.
For example, here we delete documents that match the full-text query 'test document' and have either an mva1 value greater than 206 or an mva1 value of 100 or 103, from the table named test:
- SQL
DELETE FROM TEST WHERE MATCH ('test document') AND ( mva1>206 or mva1 in (100, 103) );
SELECT * FROM TEST;
Query OK, 4 rows affected (0.00 sec)
+------+------+-------------+------+
| id | gid | mva1 | mva2 |
+------+------+-------------+------+
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 104 | 1004 | 104,204,205 | 104 |
| 105 | 1005 | 105,206 | 105 |
+------+------+-------------+------+
4 rows in set (0.00 sec)
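A JSON equivalent of such a compound condition could be sketched with a bool query, assuming the delete endpoint accepts the bool, range, and in constructs of Manticore's JSON query language:
POST /delete
{
  "table": "test",
  "query": {
    "bool": {
      "must": [
        { "match": { "*": "test document" } },
        {
          "bool": {
            "should": [
              { "range": { "mva1": { "gt": 206 } } },
              { "in": { "mva1": [100, 103] } }
            ]
          }
        }
      ]
    }
  }
}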
Here is an example of deleting documents in the table test, which belongs to the cluster named cluster. Note that we must provide the cluster name property along with the table property to delete a row from a table within a replication cluster:
- SQL
- JSON
- PHP
- Python
- javascript
- Java
- C#
- TypeScript
- Go
delete from cluster:test where id=100;
Array(
[_index] => test
[_id] => 100
[found] => true
[result] => deleted
)
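Through the JSON interface, the cluster name is passed as a separate property of the request; a sketch, with the same field-name assumptions as the earlier JSON examples:
POST /delete
{
  "cluster": "cluster",
  "table": "test",
  "id": 100
}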
You can also perform multiple delete operations in a single call using the /bulk endpoint. This endpoint only works with data that has Content-Type set to application/x-ndjson. The data should be formatted as newline-delimited JSON (NDJSON). Essentially, this means that each line should contain exactly one JSON statement and end with a newline \n and, possibly, a \r.
- JSON
- PHP
- Python
- javascript
- Java
- C#
- TypeScript
- Go
POST /bulk
{ "delete" : { "table" : "test", "id" : 1 } }
{ "delete" : { "table" : "test", "query": { "equals": { "int_data" : 20 } } } }
{
  "items": [
    {
      "bulk": {
        "_index": "test",
        "_id": 0,
        "created": 0,
        "deleted": 2,
        "updated": 0,
        "result": "created",
        "status": 201
      }
    }
  ],
  "errors": false
}
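For reference, the raw HTTP request for the call above could look roughly like this; the Host value assumes Manticore's default HTTP port 9308:
POST /bulk HTTP/1.1
Host: localhost:9308
Content-Type: application/x-ndjson

{ "delete" : { "table" : "test", "id" : 1 } }
{ "delete" : { "table" : "test", "query": { "equals": { "int_data" : 20 } } } }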
Manticore supports basic transactions for deleting and inserting data into real-time and percolate tables, except when attempting to write to a distributed table which includes a real-time or percolate table. Each change to a table is first saved in an internal changeset and then actually committed to the table. By default, each command is wrapped in an individual automatic transaction, making it transparent: you simply 'insert' something and can see the inserted result after it completes, without worrying about transactions. However, this behavior can be explicitly managed by starting and committing transactions manually.
Transactions are supported for the following commands:
- INSERT
- REPLACE
- DELETE
Transactions are not supported for:
- UPDATE (which is different from REPLACE, as it performs an in-place attribute update)
- ALTER - for updating table schema
- TRUNCATE - for emptying a real-time table
- ATTACH - for attaching a plain table to a real-time table
- CREATE - for creating a table
- DROP - for deleting a table
Please note that transactions in Manticore do not aim to provide isolation. The purpose of transactions in Manticore is to allow you to accumulate multiple writes and execute them all at once upon commit, or to roll them all back if necessary. Transactions are integrated with binary log for durability and consistency.
SET AUTOCOMMIT = {0 | 1}
SET AUTOCOMMIT controls the autocommit mode in the active session. AUTOCOMMIT is set to 1 by default. With the default setting, you don't have to worry about transactions, as every statement that makes any changes to any table is implicitly wrapped in a separate transaction. Setting it to 0 allows you to manage transactions manually, meaning they will not be visible until you explicitly commit them.
Transactions are limited to a single real-time or percolate table and are also limited in size. They are atomic, consistent, overly isolated, and durable. Overly isolated means that the changes are not only invisible to concurrent transactions but even to the current session itself.
START TRANSACTION | BEGIN
COMMIT
ROLLBACK
The BEGIN statement (or its START TRANSACTION alias) forcibly commits any pending transaction, if present, and starts a new one.
The COMMIT statement commits the current transaction, making all its changes permanent.
The ROLLBACK statement rolls back the current transaction, canceling all its changes.
When using one of the /bulk JSON endpoints (bulk insert, bulk replace, bulk delete), you can force a batch of documents to be committed by adding an empty line after them.
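For example, in the NDJSON sketch below the empty line after the second statement forces the first two inserts to be committed as one batch before the third is processed (the doc fields here are made up purely for illustration):
POST /bulk
{ "insert" : { "table" : "test", "id" : 11, "doc" : { "title" : "first" } } }
{ "insert" : { "table" : "test", "id" : 12, "doc" : { "title" : "second" } } }

{ "insert" : { "table" : "test", "id" : 13, "doc" : { "title" : "third" } } }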
insert into indexrt (id, content, title, channel_id, published) values (1, 'aa', 'blabla', 1, 10);
Query OK, 1 rows affected (0.00 sec)
select * from indexrt where id=1;
+------+------------+-----------+--------+
| id | channel_id | published | title |
+------+------------+-----------+--------+
| 1 | 1 | 10 | blabla |
+------+------------+-----------+--------+
1 row in set (0.00 sec)
The inserted value is immediately visible in the following 'select' statement.
set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
insert into indexrt (id, content, title, channel_id, published) values (3, 'aa', 'bb', 1, 1);
Query OK, 1 row affected (0.00 sec)
insert into indexrt (id, content, title, channel_id, published) values (4, 'aa', 'bb', 1, 1);
Query OK, 1 row affected (0.00 sec)
select * from indexrt where id=3;
Empty set (0.01 sec)
select * from indexrt where id=4;
Empty set (0.00 sec)
In this case, changes are NOT automatically committed. As a result, the insertions are not visible, even in the same session, since they have not been committed. Also, despite the absence of a BEGIN statement, a transaction is implicitly started.
To make the changes visible, you need to commit the transaction:
commit;
Query OK, 0 rows affected (0.00 sec)
select * from indexrt where id=4;
+------+------------+-----------+-------+
| id | channel_id | published | title |
+------+------------+-----------+-------+
| 4 | 1 | 1 | bb |
+------+------------+-----------+-------+
1 row in set (0.00 sec)
select * from indexrt where id=3;
+------+------------+-----------+-------+
| id | channel_id | published | title |
+------+------------+-----------+-------+
| 3 | 1 | 1 | bb |
+------+------------+-----------+-------+
1 row in set (0.00 sec)
After the commit statement, the insertions are visible in the table.
By using BEGIN and COMMIT, you can define the bounds of a transaction explicitly, so there's no need to worry about autocommit in this case.
begin;
Query OK, 0 rows affected (0.00 sec)
insert into indexrt (id, content, title, channel_id, published) values (2, 'aa', 'bb', 1, 1);
Query OK, 1 row affected (0.00 sec)
select * from indexrt where id=2;
Empty set (0.01 sec)
commit;
Query OK, 0 rows affected (0.01 sec)
select * from indexrt where id=2;
+------+------------+-----------+-------+
| id | channel_id | published | title |
+------+------------+-----------+-------+
| 2 | 1 | 1 | bb |
+------+------------+-----------+-------+
1 row in set (0.01 sec)