ES 2.2 Disappearing Elastic Index on a random basis

Chromaniac

Well-known member
I am running ElasticSearch on an independent droplet on DO with 1 CPU and 2GB RAM. I have tried RAM allocations between 512MB and 1GB; nothing seems to be helping. The search index basically disappears at random, leading to errors in the back end, which stop when a new post is made, I assume because a fresh index is started automatically at that point. Most of the time I only find out about the index disappearing after I run a search and it shows a practically empty result page!

Re-indexing my entire board takes around 15 minutes, so I have been managing it so far. But the problem seems to have become much worse in recent days (Beta 6): earlier it used to crash maybe once a week, now the index is disappearing 2-3 times every day.

I was just wondering what information I can provide here (config file contents etc.) the next time it crashes that could help in finding a solution for this issue.

The forum itself is running on a separate droplet in the same datacenter, so performance is generally great. Similar-threads data is cached, so that also appears fine on existing content. But a non-functional search until I reindex is quite an annoyance!

PS: I understand that running beta software on a live forum is not a good idea. I only started using Enhanced Search on the 2.2 beta on my board, so I have no data on whether it would have been stable on 2.1.x with my current configuration of the ElasticSearch droplet. Also, Andy's ES information add-on shows strange data for my install after crashes.
 
Disk space should not be an issue because it is a dedicated droplet with 20GB+ of storage not being used for anything else!

Code:
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        966M     0  966M   0% /dev
tmpfs           994M     0  994M   0% /dev/shm
tmpfs           994M  105M  889M  11% /run
tmpfs           994M     0  994M   0% /sys/fs/cgroup
/dev/vda1        25G  4.8G   21G  20% /
tmpfs           199M     0  199M   0% /run/user/1001

But I will check this when it crashes next time around!
 
I was just wondering what information I can provide here (config file contents etc.) the next time it crashes that could help in finding a solution for this issue.

What does your ES log say prior to you re-indexing?
What's your startup config for ES, memory allowance etc.? Do you have free RAM, or is it using swap space?
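
You can get a quick picture with something like this on the ES droplet (assuming Elasticsearch is installed as a systemd service; adjust the unit name if yours differs):

Code:
free -m                                    # free RAM vs. swap actually in use
sudo systemctl status elasticsearch        # is the service still running?
sudo journalctl -u elasticsearch -n 100    # last 100 lines of service output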
 
I can see two log files sorted by date: one is named gc.log and the other is named clustername.log. Which one should I post here after the next crash?

ES config is simple:

Code:
cluster.name: search
node.name: search
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: search

For JVM memory allocation I have tried values between 512MB and 1GB. Setting it at 1GB does result in short-lived warnings from DO that I am using more than 80% of RAM, but it has not really crossed 85% to the best of my knowledge.
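
For reference, this is roughly how the heap is set (a sketch; the jvm.options.d path assumes Elasticsearch 7.7+ from the official package, otherwise the same two lines go directly in /etc/elasticsearch/jvm.options):

Code:
# /etc/elasticsearch/jvm.options.d/heap.options
# Keep min and max heap identical so the JVM never resizes the heap at runtime.
# 1g on a 2GB droplet leaves roughly half the RAM for the OS and Lucene's file cache.
-Xms1g
-Xmx1g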
 
Code:
[2020-09-16T00:39:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [ibfsearch] triggering scheduled [ML] maintenance tasks
[2020-09-16T00:39:00,100][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [ibfsearch] Deleting expired data
[2020-09-16T00:39:00,250][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [ibfsearch] Completed deletion of expired ML data
[2020-09-16T00:39:00,251][INFO ][o.e.x.m.MlDailyMaintenanceService] [ibfsearch] Successfully completed [ML] maintenance tasks
[2020-09-16T01:30:00,007][INFO ][o.e.x.s.SnapshotRetentionTask] [ibfsearch] starting SLM retention snapshot cleanup task
[2020-09-16T01:30:00,008][INFO ][o.e.x.s.SnapshotRetentionTask] [ibfsearch] there are no repositories to fetch, SLM retention snapshot cleanup task com$
[2020-09-16T08:38:54,135][INFO ][o.e.c.m.MetadataDeleteIndexService] [ibfsearch] [testindex/is1M1JIWSxem2RAm17T3Mw] deleting index
[2020-09-16T08:47:32,426][INFO ][o.e.c.m.MetadataCreateIndexService] [ibfsearch] [testindex] creating index, cause [auto(bulk api)], templates [], shar$
[2020-09-16T08:47:33,134][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/2DC2q4iiQCiu2Kg93ZX61Q] create_mapping [_doc]
[2020-09-16T09:40:25,315][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/2DC2q4iiQCiu2Kg93ZX61Q] update_mapping [_doc]
[2020-09-16T09:40:29,983][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/2DC2q4iiQCiu2Kg93ZX61Q] update_mapping [_doc]
[2020-09-16T14:42:22,086][INFO ][o.e.c.m.MetadataDeleteIndexService] [ibfsearch] [testindex/2DC2q4iiQCiu2Kg93ZX61Q] deleting index
[2020-09-16T14:42:22,349][INFO ][o.e.c.m.MetadataCreateIndexService] [ibfsearch] [read_me] creating index, cause [api], templates [], shards [1]/[1], m$
[2020-09-16T14:42:22,648][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [read_me/PQzy_Vj_TTO1-5dwlVkRsw] create_mapping [doc]
[2020-09-16T14:44:35,902][INFO ][o.e.c.m.MetadataCreateIndexService] [ibfsearch] [testindex] creating index, cause [auto(bulk api)], templates [], shar$
[2020-09-16T14:44:36,103][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/_nBIjtYlQ0eMHWX_MSK5RA] create_mapping [_doc]
[2020-09-16T14:52:20,509][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/_nBIjtYlQ0eMHWX_MSK5RA] update_mapping [_doc]
[2020-09-16T16:47:41,392][INFO ][o.e.c.m.MetadataMappingService] [ibfsearch] [testindex/_nBIjtYlQ0eMHWX_MSK5RA] update_mapping [_doc]
 
Haha yes, it is supposed to be locked to my forum's droplet; that's how it was configured by my friend who helps me with server management. I have pinged him about this meow issue and will update if this is indeed the reason and if we are able to fix it!
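
If it turns out the instance really is reachable from the internet (the read_me index appearing in the log right after testindex was deleted looks like an outside bot rather than a crash), this is the kind of lockdown we are looking at; the IPs below are just placeholders for the private addresses of the two droplets:

Code:
# elasticsearch.yml: bind to the droplet's private IP instead of 0.0.0.0
network.host: 10.0.0.5

# or firewall port 9200 so only the forum droplet can reach it
sudo ufw allow from 10.0.0.6 to any port 9200 proto tcp
sudo ufw deny 9200/tcp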
 