Elasticsearch 5

What is everyone using to determine heap size with 5.1.x?
64-100MB of RAM per 1 million posts with SSDs works for me. That's a total of ~2GB for my ~26 million posts/conversations, but you also want at least a few spare gigabytes that can be used for file caching.

In other words, modern ES is vastly better than the old versions, and it really works well with SSDs.
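A rough worked example of that rule of thumb (just the arithmetic, not a hard limit):
Code:
# ~64-100MB of heap per 1 million posts on SSDs
26M posts -> 26 x 64-100MB ~= 1.7-2.6GB heap  (hence the ~2GB above)
48M posts -> 48 x 64-100MB ~= 3.1-4.8GB heap
# on top of the heap, leave a few spare GB of RAM for the OS file cache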
 

OK. I will set it to around 7GB for 48M posts and see how things fare.
 

So 512MB is much more than needed for this?
Code:
Version: 5.1.1
Documents: 323,617 (189.9 MB)
Index Updates: 951 (0.0014 seconds average)
Searches: 607,416 (0.0002 seconds average)
Fetches: 607,566 (0.0000 seconds average)

How much would you suggest, @Xon?
Thank you.
 
I wouldn't go lower than 256MB per Elasticsearch instance, but 512MB wouldn't hurt.

Just check the Elasticsearch logs and see if it is logging long pauses for garbage collection (the warning triggers on >0.1 seconds of garbage collection time). I've only seen that during bulk updates, never during normal operation. If it keeps coming up, give it more RAM.
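A quick way to spot those warnings on a stock RPM install (the log path and the JvmGcMonitorService logger name are assumptions for a default setup; adjust to yours):
Code:
# list recent GC-pause warnings from the JVM GC monitor
grep "JvmGcMonitorService" /var/log/elasticsearch/elasticsearch.log | tail -n 20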
 
I set it to 512MB from the beginning, so I will leave it as is.
I noticed that 1-2 times a month, for unknown reasons, Elasticsearch stops working and I have to restart it from the CLI.
I could not find anything about that in my logs, only that Groovy scripts are deprecated, but nothing related to the sudden stop of Elasticsearch.
This is from the deprecation log:
Code:
[2016-12-24T13:53:24,295][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead

This is from the normal log:
Code:
[2016-12-24T13:53:23,392][INFO ][o.e.n.Node               ] [] initializing ...
[2016-12-24T13:53:23,455][INFO ][o.e.e.NodeEnvironment    ] [-QxWHO2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [50.3gb], net total_space [99.9gb], spins? [unknown], types [rootfs]
[2016-12-24T13:53:23,456][INFO ][o.e.e.NodeEnvironment    ] [-QxWHO2] heap size [494.9mb], compressed ordinary object pointers [true]
[2016-12-24T13:53:23,463][INFO ][o.e.n.Node               ] node name [-QxWHO2] derived from node ID [-QxWHO2jTfWe2CKpqDO7-w]; set [node.name] to override
[2016-12-24T13:53:23,465][INFO ][o.e.n.Node               ] version[5.1.1], pid[3882], build[5395e21/2016-12-06T12:36:15.409Z], OS[Linux/3.10.0-514.2.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b15]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [aggs-matrix-stats]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [ingest-common]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [lang-expression]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [lang-groovy]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [lang-mustache]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [lang-painless]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [percolator]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [reindex]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [transport-netty3]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] loaded module [transport-netty4]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService     ] [-QxWHO2] no plugins loaded
[2016-12-24T13:53:24,314][INFO ][o.e.s.ScriptService      ] [-QxWHO2] compiling script file [/etc/elasticsearch/scripts/xf-date-weighted.groovy]
[2016-12-24T13:53:25,839][INFO ][o.e.n.Node               ] initialized
[2016-12-24T13:53:25,839][INFO ][o.e.n.Node               ] [-QxWHO2] starting ...
[2016-12-24T13:53:26,009][INFO ][o.e.t.TransportService   ] [-QxWHO2] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-12-24T13:53:29,077][INFO ][o.e.c.s.ClusterService   ] [-QxWHO2] new_master {-QxWHO2}{-QxWHO2jTfWe2CKpqDO7-w}{nBDVh4oWQ7ybf1Zcoq5v7w}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-12-24T13:53:29,092][INFO ][o.e.h.HttpServer         ] [-QxWHO2] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-12-24T13:53:29,092][INFO ][o.e.n.Node               ] [-QxWHO2] started
[2016-12-24T13:53:29,256][INFO ][o.e.g.GatewayService     ] [-QxWHO2] recovered [1] indices into cluster_state
[2016-12-24T13:53:29,838][INFO ][o.e.c.r.a.AllocationService] [-QxWHO2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[elastictvor][0]] ...]).
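Since nothing shows up in the ES log itself, on a systemd install like this one (CentOS 7, per the kernel string in the log) the service journal sometimes records why it went down; a sketch:
Code:
# check how and when the elasticsearch service last stopped (assumes the RPM's systemd unit)
sudo systemctl status elasticsearch
sudo journalctl -u elasticsearch --since yesterday | tail -n 50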
 
I noticed that 1-2 times a month, for unknown reasons, Elasticsearch stops working
Have you disabled all swapping? Read the post above, as that is what causes ES to become unstable and then stop.

Ensure you meet these config requirements: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/system-config.html

I had the same issue with ES, so I uninstalled it for a while and went back to the default MySQL search whilst testing ES on the side to identify the issue. Make sure you have these settings correctly applied.
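For anyone following along, the main items from that page boil down to roughly this on a CentOS 7 RPM install (a sketch; make the sysctl changes permanent in /etc/sysctl.conf):
Code:
sudo swapoff -a                          # disable swap for this boot
sudo sysctl -w vm.max_map_count=262144   # minimum ES 5 expects
sudo sysctl -w vm.swappiness=1           # if you keep swap enabled at all

# and in /etc/elasticsearch/elasticsearch.yml, to lock the heap in RAM:
# bootstrap.memory_lock: true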
 
Just an FYI, but the memory settings for Elasticsearch 5 are stored in /etc/elasticsearch/jvm.options under:
Code:
-Xms
-Xmx

The defaults are 2GB (written as 2g).
 

Yep.
Both should be set to the same value.

Set how much RAM you want to give Elasticsearch; 512 megabytes is used in the example below (both Xms and Xmx must have the same value):
Code:
nano /etc/elasticsearch/jvm.options
Code:
-Xms512m
-Xmx512m
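After editing jvm.options the service needs a restart before the new heap applies; one way to confirm it was picked up (treat the exact commands as a sketch):
Code:
sudo systemctl restart elasticsearch
# confirm the configured heap size
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.max'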
 
Yeah... but I only have 1.1M posts, and setting those to 256m, 512m, 1g, 2g, 3g and 4g all still left ES crashing periodically. Technically, going by the comments above, that shouldn't happen.

I'm going to try what Elastic recommends to begin with, which is to put ES on its own server so nothing else interferes with it, and connect to it remotely. A project for later today...
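If you do split ES onto its own box, the node has to listen on something other than loopback; in elasticsearch.yml that is roughly the following (the IP is a placeholder, and ES has no authentication by default, so firewall it to the web server only):
Code:
# /etc/elasticsearch/elasticsearch.yml on the dedicated server
network.host: 192.168.1.50   # placeholder private IP
http.port: 9200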
 
Strange...
Elasticsearch stops on my server about once a day, at roughly the same time every day.
Nothing in the logs shows an error.
I tried raising the memory for ES, but the same thing happens every night.
I also looked through the cron list to see if something was interrupting ES, but found nothing.

This started a few weeks ago.
I do not know whether it is because of ES 5.1.x, the new XenForo Enhanced Search, Java, or something else entirely.

The only errors I found in the logs related to XenForo and ES are these:

Code:
[2017-01-07T12:15:03,305][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2017-01-07T12:15:56,807][WARN ][o.e.d.i.m.TypeParsers    ] Expected a boolean for property [store] but got [yes]
[2017-01-07T12:15:56,807][WARN ][o.e.d.i.m.TypeParsers    ] Expected a boolean for property [store] but got [yes]
[2017-01-07T12:15:56,809][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [title]
[2017-01-07T12:15:56,809][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [message]
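When a Java process dies at about the same time every night with nothing in its own log, it is worth checking whether the kernel's OOM killer took it out (a sketch; /var/log/messages is the CentOS default syslog):
Code:
# look for out-of-memory kills around the time ES disappears
dmesg -T | grep -iE "out of memory|killed process" | tail
sudo grep -iE "oom-killer|killed process" /var/log/messages | tail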
 
Same type of issue I was having. cPanel suspected it was server overload, yet the server graphs don't show any overloading on my server; even so, ES5 got into a routine of stopping.

I haven't had an issue since shifting ES to Amazon. Not a single stoppage... and it's near free.
 

Nice, but the newest version they have is ES v2.3.

Elasticsearch stopped at around 3:49.
Could anybody find something that makes ES stop?

This is the cron log from that time:

Code:
Jan  9 03:43:14 upcloud run-parts(/etc/cron.daily)[22494]: starting maldet
Jan  9 03:44:01 upcloud CROND[23106]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:44:01 upcloud CROND[23107]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan  9 03:45:01 upcloud CROND[23213]: (root) CMD (/usr/bin/chown -R nginx:nginx /home/nginx/domains)
Jan  9 03:45:01 upcloud CROND[23214]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:46:01 upcloud CROND[23288]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan  9 03:46:01 upcloud CROND[23289]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:47:02 upcloud CROND[23401]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:48:01 upcloud CROND[23461]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:48:01 upcloud CROND[23462]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan  9 03:48:40 upcloud run-parts(/etc/cron.daily)[24668]: finished maldet
Jan  9 03:48:40 upcloud run-parts(/etc/cron.daily)[22494]: starting man-db.cron
Jan  9 03:48:47 upcloud run-parts(/etc/cron.daily)[25101]: finished man-db.cron
Jan  9 03:48:47 upcloud run-parts(/etc/cron.daily)[22494]: starting mlocate
Jan  9 03:48:57 upcloud run-parts(/etc/cron.daily)[25375]: finished mlocate
Jan  9 03:48:57 upcloud anacron[18514]: Job `cron.daily' terminated
Jan  9 03:48:57 upcloud anacron[18514]: Normal exit (1 job run)
Jan  9 03:49:01 upcloud CROND[25426]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:50:01 upcloud CROND[25709]: (root) CMD (/usr/bin/chown -R nginx:nginx /home/nginx/domains)
Jan  9 03:50:01 upcloud CROND[25712]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
Jan  9 03:50:01 upcloud CROND[25713]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan  9 03:50:01 upcloud CROND[25710]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
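Nothing in that cron log touches ES directly, but cron.daily (maldet, mlocate) finishing right before 03:49 could point at memory or I/O pressure; if the sysstat package is installed, this shows memory use around that window:
Code:
# memory utilisation between 03:40 and 03:55 from today's sar data
sar -r -s 03:40:00 -e 03:55:00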
 