Elasticsearch v5.0 is out.
Anyone installed?
https://www.elastic.co/guide/en/elasticsearch/reference/5.0/setup-upgrade.html
Quote: "What is everyone using to determine heap size with 5.1.x?"
64-100MB of RAM per 1 million posts with SSDs works for me. Total of ~2GB for 26 million posts/conversations or so, but you want at least a few gigabytes left over that can be used for file caching.
Aka modern ES is vastly better than old versions, and really works well with SSDs.
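As a rough worked example of that rule of thumb (using the midpoint of the 64-100MB range; these numbers are estimates, not hard limits):
Code:
# back-of-envelope heap estimate: posts (in millions) x MB per million posts
posts_millions=26
mb_per_million=75                                 # midpoint of the 64-100MB range above
echo "$(( posts_millions * mb_per_million ))MB"   # 1950MB -> round up to a 2g heap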
Quote: "64-100MB of RAM per 1 million posts with SSDs works for me."
That sounds awesome... compared with the initial statement I read of about 1GB per million posts. Argh.
Code:
Version: 5.1.1
Documents: 323,617 (189.9 MB)
Index Updates: 951 (0.0014 seconds average)
Searches: 607,416 (0.0002 seconds average)
Fetches: 607,566 (0.0000 seconds average)
How much is your suggestion @Xon? So 512MB is much more than needed for this?

Quote: "So 512MB is much more than needed for this?"
I wouldn't go less than 256MB per Elasticsearch instance, but 512MB wouldn't hurt.
Thank you.
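If you want to pull the same document and size numbers straight from ES rather than from the control panel (default localhost port assumed):
Code:
curl -s 'localhost:9200/_stats/docs,store?pretty'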
Quote: "I wouldn't go less than 256MB per Elasticsearch instance, but 512MB wouldn't hurt."
I set it to 512MB from the beginning, so I will leave it as is.
Just check the Elasticsearch logs and see if it is logging long pauses for garbage collection (the warning triggers on >0.1 seconds of garbage collection time). I've only seen that during bulk updates, never during normal operation. If it keeps coming up, give it more RAM.
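Those pauses show up in the main ES log. A quick way to look for recent garbage collection entries, assuming an RPM install with the default cluster name (the log file is named after the cluster):
Code:
grep -i 'gc' /var/log/elasticsearch/elasticsearch.log | tail -n 20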
Code:
[2016-12-24T13:53:23,392][INFO ][o.e.n.Node ] [] initializing ...
[2016-12-24T13:53:23,455][INFO ][o.e.e.NodeEnvironment ] [-QxWHO2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [50.3gb], net total_space [99.9gb], spins? [unknown], types [rootfs]
[2016-12-24T13:53:23,456][INFO ][o.e.e.NodeEnvironment ] [-QxWHO2] heap size [494.9mb], compressed ordinary object pointers [true]
[2016-12-24T13:53:23,463][INFO ][o.e.n.Node ] node name [-QxWHO2] derived from node ID [-QxWHO2jTfWe2CKpqDO7-w]; set [node.name] to override
[2016-12-24T13:53:23,465][INFO ][o.e.n.Node ] version[5.1.1], pid[3882], build[5395e21/2016-12-06T12:36:15.409Z], OS[Linux/3.10.0-514.2.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b15]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [aggs-matrix-stats]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [ingest-common]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [lang-expression]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [lang-groovy]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [lang-mustache]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [lang-painless]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [percolator]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [reindex]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [transport-netty3]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] loaded module [transport-netty4]
[2016-12-24T13:53:24,161][INFO ][o.e.p.PluginsService ] [-QxWHO2] no plugins loaded
[2016-12-24T13:53:24,295][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2016-12-24T13:53:24,314][INFO ][o.e.s.ScriptService ] [-QxWHO2] compiling script file [/etc/elasticsearch/scripts/xf-date-weighted.groovy]
[2016-12-24T13:53:25,839][INFO ][o.e.n.Node ] initialized
[2016-12-24T13:53:25,839][INFO ][o.e.n.Node ] [-QxWHO2] starting ...
[2016-12-24T13:53:26,009][INFO ][o.e.t.TransportService ] [-QxWHO2] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-12-24T13:53:29,077][INFO ][o.e.c.s.ClusterService ] [-QxWHO2] new_master {-QxWHO2}{-QxWHO2jTfWe2CKpqDO7-w}{nBDVh4oWQ7ybf1Zcoq5v7w}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-12-24T13:53:29,092][INFO ][o.e.h.HttpServer ] [-QxWHO2] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-12-24T13:53:29,092][INFO ][o.e.n.Node ] [-QxWHO2] started
[2016-12-24T13:53:29,256][INFO ][o.e.g.GatewayService ] [-QxWHO2] recovered [1] indices into cluster_state
[2016-12-24T13:53:29,838][INFO ][o.e.c.r.a.AllocationService] [-QxWHO2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[elastictvor][0]] ...]).
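That last line is the one you want to see. You can check the cluster state the same way at any time, assuming the default localhost HTTP port:
Code:
curl -s 'localhost:9200/_cluster/health?pretty'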
Quote: "I noticed that 1-2 times a month, for an unknown reason, Elasticsearch stops working."
Have you disabled all swapping? Read the above post, as that is what causes ES to become unstable and thus stop.
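If you are not sure whether swap is in play, a quick way to check and turn it off, sketched for a typical Linux box (adjust for your distro, and make it permanent in /etc/fstab if it helps):
Code:
free -m                  # a non-zero Swap row means swap is available
swapoff -a               # disable all swap until the next reboot
sysctl vm.swappiness=1   # or keep swap but strongly discourage its use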
Quote: "Have you disabled all swapping?"
By adding this at the end of the file /etc/elasticsearch/elasticsearch.yml?
bootstrap.memory_lock: true
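That setting only works if the elasticsearch user is actually allowed to lock memory (ulimit / systemd LimitMEMLOCK). One way to confirm it took effect after a restart, assuming the default port; it should report "mlockall": true:
Code:
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'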
Just an FYI, but the memory settings for Elasticsearch 5 are stored in /etc/elasticsearch/jvm.options under:
Code:
-Xms
-Xmx
The default for both is 2GB (2g).
Set how much RAM you want to give Elasticsearch; 512 megabytes is used in the example below (both Xms and Xmx must have the same value).
Code:
nano /etc/elasticsearch/jvm.options
Code:
-Xms512m
-Xmx512m
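After editing jvm.options and restarting the service, you can confirm the node actually picked up the new heap ceiling. A quick check via the cat API, assuming the default localhost port:
Code:
curl -s 'localhost:9200/_cat/nodes?h=name,heap.max'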
Quote: "Just an FYI, but the memory settings for Elasticsearch 5 are stored in /etc/elasticsearch/jvm.options"
Yer... but I only have 1.1m posts, and setting those to 256m, 512m, 1g, 2g, 3g and 4g all still crashes ES periodically. Technically, from the above comments, that shouldn't happen.
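When ES keeps dying without writing anything useful to its own log, the service manager's journal sometimes records the exit. A minimal check, assuming a systemd-based install with the default unit name:
Code:
journalctl -u elasticsearch -n 50 --no-pager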
Same type of issue I was having. cPanel suspected it was server overload, yet the server graphs don't show any overloading on my server; ES5 still got into a routine of stopping.
I haven't had an issue since shifting ES to Amazon. Not a single stoppage... and it's near free.
Strange...
Elasticsearch stops on my server once a day, at about the same time every day.
Nothing in the logs shows an error.
I tried raising the memory for ES, but the same thing happens every night.
I also looked at the cron list to see if something was interrupting ES, but found nothing.
This started a few weeks ago.
I do not know if it is because of ES 5.1.x, the new XenForo Enhanced Search, Java, or something else.
The only errors I found regarding XenForo and ES in the logs are these:
Code:
[2017-01-07T12:15:03,305][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2017-01-07T12:15:56,807][WARN ][o.e.d.i.m.TypeParsers ] Expected a boolean for property [store] but got [yes]
[2017-01-07T12:15:56,807][WARN ][o.e.d.i.m.TypeParsers ] Expected a boolean for property [store] but got [yes]
[2017-01-07T12:15:56,809][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [title]
[2017-01-07T12:15:56,809][WARN ][o.e.d.i.m.StringFieldMapper$TypeParser] The [string] field is deprecated, please use [text] or [keyword] instead on [message]
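Those are only deprecation warnings from the index mapping, not fatal errors, so they are unlikely to be what is stopping ES. If you want to see which fields still use the deprecated [string] type, you can dump the mapping (index name taken from the startup log above; substitute your own):
Code:
curl -s 'localhost:9200/elastictvor/_mapping?pretty'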
Code:
Jan 9 03:43:14 upcloud run-parts(/etc/cron.daily)[22494]: starting maldet
Jan 9 03:44:01 upcloud CROND[23106]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:44:01 upcloud CROND[23107]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan 9 03:45:01 upcloud CROND[23213]: (root) CMD (/usr/bin/chown -R nginx:nginx /home/nginx/domains)
Jan 9 03:45:01 upcloud CROND[23214]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:46:01 upcloud CROND[23288]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan 9 03:46:01 upcloud CROND[23289]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:47:02 upcloud CROND[23401]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:48:01 upcloud CROND[23461]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:48:01 upcloud CROND[23462]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
Jan 9 03:48:40 upcloud run-parts(/etc/cron.daily)[24668]: finished maldet
Jan 9 03:48:40 upcloud run-parts(/etc/cron.daily)[22494]: starting man-db.cron
Jan 9 03:48:47 upcloud run-parts(/etc/cron.daily)[25101]: finished man-db.cron
Jan 9 03:48:47 upcloud run-parts(/etc/cron.daily)[22494]: starting mlocate
Jan 9 03:48:57 upcloud run-parts(/etc/cron.daily)[25375]: finished mlocate
Jan 9 03:48:57 upcloud anacron[18514]: Job `cron.daily' terminated
Jan 9 03:48:57 upcloud anacron[18514]: Normal exit (1 job run)
Jan 9 03:49:01 upcloud CROND[25426]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:50:01 upcloud CROND[25709]: (root) CMD (/usr/bin/chown -R nginx:nginx /home/nginx/domains)
Jan 9 03:50:01 upcloud CROND[25712]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
Jan 9 03:50:01 upcloud CROND[25713]: (nixstats) CMD (bash /opt/nixstats/nixstats.sh > /dev/null 2>&1)
Jan 9 03:50:01 upcloud CROND[25710]: (root) CMD (/root/tools/pushover-script.sh >/dev/null 2>&1)
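Given the stoppages line up with the cron.daily window (maldet and friends), one thing worth ruling out is the kernel OOM killer picking off the largest process on the box, which is usually Java. A quick check, assuming the standard CentOS/RHEL log location:
Code:
# any OOM-killer activity is recorded by the kernel
grep -iE 'out of memory|killed process' /var/log/messages | tail
dmesg | grep -i oom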