zackw
Member
We recently changed VPSes because CentOS went away; we ended up on AlmaLinux 8.
I set everything back up as best I could to get the sites working again, but I know more can be done to make indexing better.
Elasticsearch would crash all the time with OOM, and I set up Monit to auto-restart it, but lately it's been nuts, restarting dozens of times a day.
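For reference, the Monit stanza is roughly this (a sketch from memory; the pidfile path and port 9200 are what the stock RPM install uses, adjust as needed):

  check process elasticsearch with pidfile /var/run/elasticsearch/elasticsearch.pid
    start program = "/usr/bin/systemctl start elasticsearch"
    stop program = "/usr/bin/systemctl stop elasticsearch"
    if failed port 9200 protocol http for 3 cycles then restart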
The server is 4 vCPU and 8GB RAM. It has no swap file because apparently that's not allowed by this vendor.
I just now looked up the versions: it's running ES 7.17.25 and openjdk 22.0.1, on default settings.
I know the ES default is to set the heap to half the total RAM, or 4GB; in my case it was 4GB, however it determined that. I've already dropped the heap to 3GB in a custom jvm options file, just to get the system working again, but total usage is still around 7GB, the cache is using the rest, and RAM is pretty much still maxed out.
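The override is just a drop-in file, roughly like this (the jvm.options.d directory is the standard RPM layout; the file name is my own, adjust paths to your install):

  # /etc/elasticsearch/jvm.options.d/heap.options
  # pin min and max heap to the same value, per ES guidance
  -Xms3g
  -Xmx3g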
For context, all services run on this box: it's a full WHM/cPanel server hosting 4 websites. One of them is XenForo and is really the only one with traffic; the other sites don't get meaningful traffic.
XF is the only one using ES, with one index. It reports 660,000+ documents at just over 400MB. Searches average 19 milliseconds. The Allocated memory stat has been reporting about 68KB all day while I've been working on this. My guess is that with about 500 active users we're getting 300+ searches an hour, though I'm not really sure on that one; it was 1,500 in the last couple of hours.
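Those numbers are from the XF admin panel, by the way; the same stats can be pulled straight from ES with something like this (assuming the default local port):

  curl -s 'localhost:9200/_cat/indices?v&h=index,docs.count,store.size'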
-----------
Now, here are my questions as I try to sort this out:
1) All the documentation says that out of the box ES needs half the RAM, or 4GB, period. I've not seen any official statement that it can run with less than 4GB as a minimum. Is this true? I mean, if there's no index at all and Elasticsearch is simply installed, does it still need that much RAM just to exist? What if I'm indexing 10 documents totaling 5MB? This doesn't make sense to me.
2) Is it wrong to think that the heap size should be based more on the size of the actual indexes? My total index is less than 500MB, so why would I need 4GB of RAM? What is the rest of the overhead even doing? I found exactly one person on the internet suggesting the heap could be set to about 20% of the index size, which for my ~400MB index would work out to something like an 80MB heap. Applying the Pareto rule, about 80% of searches hit 20% of the index, so I guess I don't need much more than that.
3) Would it be wise to update ES and openjdk to the latest versions? I know I'd have to completely uninstall ES to do this. Are there any downsides, like the later versions requiring even more RAM?
4) What other optimizations can I do to reduce RAM needs, given that I can't use a swap file? I've read about changing malloc behavior as one option (a sketch of what I found is below this list), or tweaking other Java and ES variables.
5) Am I screwed, and do I just need to upgrade the server to get even more RAM? It seems crazy that I'd need more than 8GB for a forum with 655k posts, 60k members, and a 400MB index.
6) What other tools could I use to manage this? It gets old and ugly trying to do everything from the command line.
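Going back to question 4: the malloc change I've read about is capping glibc's arena count via a systemd drop-in. Untested on my end, just a sketch of what I found (the unit name matches the standard RPM install):

  # /etc/systemd/system/elasticsearch.service.d/malloc.conf
  [Service]
  Environment=MALLOC_ARENA_MAX=4

From what I understand it would need a systemctl daemon-reload and a restart of elasticsearch to take effect.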