I have a two-node Elasticsearch cluster configured with 5 shards and 2 replicas. After indexing a database table with ~2M records, 124 segments were generated under the corresponding index's data directory. This number seems too big, and I am afraid it could easily hit the hard limit on open files (nofile) as more indices are added.
Is there a way to reduce the number of segments per index? Thanks
You have control over the merge policy in Elasticsearch. The merge settings can be updated dynamically using the Update Index Settings API. For example, you can reduce index.merge.policy.segments_per_tier to a value below the default of 10, which lowers the number of segments allowed on each tier and, as a result, the total number of segments.
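As a rough sketch, the setting can be lowered with a request along these lines (your-index is a placeholder; on Elasticsearch 6.x and later you would also need to pass -H 'Content-Type: application/json', and the availability of this setting as a dynamic index setting depends on your version):

$ curl -XPUT 'http://localhost:9200/your-index/_settings' -d '{
    "index.merge.policy.segments_per_tier": 5
}'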
You can also force a merge manually using the Optimize API. For example:
$ curl -XPOST 'http://localhost:9200/your-index/_optimize?max_num_segments=1'
You can limit the maximum number of segments per shard using the Force Merge API (the replacement for _optimize in newer Elasticsearch versions) with the following command, which applies to all indices:
$ curl -XPOST 'http://localhost:9200/_forcemerge?max_num_segments=5'
On Windows, you will need to remove the single quotes.
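If you only want to merge a single index rather than the whole cluster, you can scope the request to that index (your-index is a placeholder for your actual index name):

$ curl -XPOST 'http://localhost:9200/your-index/_forcemerge?max_num_segments=5'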