Manticore Search is a highly distributed system and provides all the components you need to build a highly available and scalable search database:
- Distributed tables for sharding
- Mirroring for high availability
- Load balancing for high scalability
- Replication for data safety

Manticore Search is extremely flexible in how you set up your cluster: there are no limitations, and the design is up to you. Just learn the tools mentioned above and use them to achieve your goal.
To add another node to a cluster, just start another Manticore instance and make sure it's accessible by the other nodes in your cluster. Then use a distributed table to connect the nodes and replication for data safety.
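As a minimal sketch, the configuration of such an additional node could look like the following; the ports and paths here are only assumptions and should be adjusted to your environment:

```ini
searchd {
    # binary API port, used by other nodes that reference this one as an agent
    listen = 9312
    # SQL port for clients speaking the MySQL protocol
    listen = 9306:mysql41
    log = /var/log/manticore/searchd.log
    pid_file = /var/run/manticore/searchd.pid
}
```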
Please read the article about distributed tables for a general overview. Here we focus on using a distributed table as the basis for creating a cluster of Manticore instances.
Here we have split the data over 4 servers, each serving one of the shards. If one of the servers fails, our distributed table will still work, but we would miss the results from the failed shard.
```ini
table mydist {
    type = distributed
    agent = box1:9312:shard1
    agent = box2:9312:shard2
    agent = box3:9312:shard3
    agent = box4:9312:shard4
}
```
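If you want the master to give up on a slow or unreachable shard sooner, per-table agent timeouts can be added; the sketch below reuses the same table, and the millisecond values are only illustrative assumptions:

```ini
table mydist {
    type = distributed
    agent = box1:9312:shard1
    agent = box2:9312:shard2
    agent = box3:9312:shard3
    agent = box4:9312:shard4
    # illustrative timeouts in milliseconds; tune them for your network
    agent_connect_timeout = 1000
    agent_query_timeout = 3000
}
```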
Now that we've added mirrors, each shard is found on 2 servers. By default, the master (the searchd instance with the distributed table) randomly picks one of the mirrors.

The mode used for picking mirrors can be set with `ha_strategy`. In addition to the default `random` mode there's also `ha_strategy = roundrobin`.

More interesting strategies are those based on latency-weighted probabilities: `noerrors` and `nodeads`. Not only do they take out mirrors with issues, but they also monitor response times and balance the load. If a mirror responds more slowly (for example, due to some operations running on it), it will receive fewer requests. When the mirror recovers and provides better response times, it will get more requests.
```ini
table mydist {
    type = distributed
    agent = box1:9312|box5:9312:shard1
    agent = box2:9312|box6:9312:shard2
    agent = box3:9312|box7:9312:shard3
    agent = box4:9312|box8:9312:shard4
}
```
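Switching to one of the latency-based strategies is a single setting in the searchd section; picking `nodeads` below is just an example:

```ini
searchd {
    listen = 9312
    # balance across mirrors based on latency, skipping dead ones; 'random' is the default
    ha_strategy = nodeads
}
```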