zackw
Member
Current versions are 2.2.2 for the plugin and 7.10.2 for Elasticsearch itself.
Normally everything runs fine with no log entries at all. But from about Feb 14 to March 9 (today) it has accumulated over 70,200 log entries.
Almost all of them look like this:
Code:
XFES\Elasticsearch\ConnectException: cURL error 7: Failed to connect to localhost port 9200 after 0 ms: Couldn't connect to server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) src/addons/XFES/Elasticsearch/Api.php:405
Every so often it might be this one:
Code:
XFES\Elasticsearch\ConnectException: Elasticsearch indexing error (queued): cURL error 7: Failed to connect to localhost port 9200 after 0 ms: Couldn't connect to server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) src/addons/XFES/Elasticsearch/Api.php:405
And:
Code:
XFES\Elasticsearch\ConnectException: Similar thread cache rebuild failure: cURL error 7: Failed to connect to localhost port 9200 after 0 ms: Couldn't connect to server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) src/addons/XFES/Elasticsearch/Api.php:405
Whatever it is, it seems to be a cURL connection issue. But when I SSH in, I can run cURL commands like this:
Bash:
curl http://localhost:9200/_cluster/health?pretty
It works.
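So Elasticsearch is reachable whenever I test it by hand; the failures must be intermittent. For reference, these are the kinds of checks I can run over SSH. I'm assuming a systemd-based install where the service unit is named elasticsearch; adjust for your setup:

Bash:
# Is the Elasticsearch service actually up right now?
# (assumes a systemd unit named "elasticsearch")
systemctl status elasticsearch

# Last few hundred service log lines -- restarts or OOM kills
# around the error spikes should show up here
journalctl -u elasticsearch -n 200 --no-pager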
I couldn't find where to download the whole log for archiving purposes. I want to clear it out so I can start fresh, but I don't want to lose it all either. I can't find where XF stores the log, what file it's in, or where to download it.

I also looked through the settings but couldn't find a way to be emailed/notified when new errors happen. I don't log in to the backend very often, so sometimes errors go unnoticed for days or weeks because there's no notification. How could I monitor that, especially if XF isn't storing the log in a physical file?
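If it turns out the log only lives in the database rather than in a file (my guess is a table named something like xf_error_log), this is roughly the dump I had in mind for archiving it before clearing, with my own database name and user substituted in:

Bash:
# Dump just the server error log table to a file I can archive,
# then clear the log from the ACP afterwards.
# (The table name xf_error_log and the credentials are guesses/placeholders.)
mysqldump -u forum_user -p forum_database xf_error_log > xf_error_log_backup.sql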