SneakyDave
Well-known member
At the moment, what you want to do isn't reliably possible. But I'll try to come up with something soon that will support that. Thanks.
Server Error Log
Error Info
ErrorException: Fatal Error: Maximum execution time of 120 seconds exceeded - library/SolidMean/ForumBackup/PHPSSH2/SFTP.php:128
Generated By: Unknown Account, 37 minutes ago
Stack Trace
#0 [internal function]: XenForo_Application::handleFatalError()
#1 {main}
Request State
array(3) {
["url"] => string(39) "https://www.pijanitvor.com/deferred.php"
["_GET"] => array(0) {
}
["_POST"] => array(3) {
["_xfRequestUri"] => string(20) "/threads/mravi.1346/"
["_xfNoRedirect"] => string(1) "1"
["_xfResponseType"] => string(4) "json"
}
}
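The fatal error above is PHP hitting its own `max_execution_time` limit. Raising that limit (globally in php.ini, or per pool via `php_admin_value` in the FPM pool file) is the first thing to check; the value below is only illustrative and should be sized to how long your backup actually runs:

```ini
; php.ini (or php_admin_value[max_execution_time] in the FPM pool file)
; 400 is illustrative -- match it to your backup's real run time.
max_execution_time = 400
```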
If you are changing the correct file, maybe there is an Nginx timeout that needs to be changed too. I'll try to see if that is the case.
Here is the phpinfo attachment.
@Sunka Nginx passes PHP requests to PHP-FPM, which executes the PHP code and returns the rendered output. PHP-FPM has its own timeout by default, but I forget what it is.
Edit: Just saw you said nginx is returning a 504. That's nginx and/or PHP timing out without a doubt; the execution time isn't long enough. Please see this link for some tricks on fixing it.
Edit 2 (some parameters for Nginx are included at the link above):
Please note that in the spoiler I've marked the changes that I recommend in bold and changed the settings to match what you had above (400s)
proxy_connect_timeout 240s;
proxy_send_timeout 240s;
proxy_read_timeout 240s;
fastcgi_send_timeout 240s;
fastcgi_read_timeout 240s;
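For context, the fastcgi timeout directives belong in the server/location block that hands PHP requests to PHP-FPM. A minimal sketch, assuming the default `127.0.0.1:9000` listener from the pool config below:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
    # Give long-running scripts like the backup time to finish.
    fastcgi_send_timeout 240s;
    fastcgi_read_timeout 240s;
}
```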
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
log_level = warning
pid = /var/run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/www-error.log
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
include=/usr/local/nginx/conf/phpfpmd/*.conf
[www]
user = nginx
group = nginx
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
;listen.backlog = -1
;listen = /tmp/php5-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
pm = dynamic
pm.max_children = 8
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
pm.max_requests = 200
; PHP 5.3.9 setting
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 10s;
rlimit_files = 65536
rlimit_core = 0
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
; Default Value: 0
;request_slowlog_timeout = 0
slowlog = /var/log/php-fpm/www-slow.log
pm.status_path = /phpstatus
ping.path = /phpping
ping.response = pong
; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other extensions to
; execute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
security.limit_extensions = .php .php3 .php4 .php5
; catch_workers_output = yes
php_admin_value[error_log] = /var/log/php-fpm/www-php.error.log
php_admin_value[disable_functions] = shell_exec
php_admin_value[memory_limit] = 256M
request_terminate_timeout is disabled in php-fpm.conf
-snip-
Did you try the listed changes I posted in the spoiler above in your Nginx site config file? If so, would you mind posting your nginx file for this site for us? If you don't want to do it publicly, I can look at it in private.
For very large data sets, there's no guarantee PHP won't time out or crash, leaving you with incomplete backups. There are workarounds to mitigate this, but as your data set grows even larger, you'll end up needing a non-PHP-based solution for backups.
max_execution_time = 600
fastcgi_connect_timeout 120;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
Oh, you timed out on the FTP transfer, not the actual backup? If so, measure your FTP transfer speed, as slow transfers could be the cause.
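One rough way to check whether the transfer leg is the bottleneck is to time an upload and work out the throughput. The host name and file below are placeholders; substitute your own:

```shell
# Time your own transfer first, e.g.:
#   time scp backup.tar.gz backup@backup.example.com:/backups/

# Print an approximate rate in MB/s given a size in bytes and a
# duration in whole seconds (integer math is enough for an estimate).
throughput_mbs() {
    local bytes="$1" seconds="$2"
    echo "$(( bytes / seconds / 1024 / 1024 )) MB/s"
}

# Example: a 10 GB backup that took 500 seconds to upload.
throughput_mbs $((10 * 1024 * 1024 * 1024)) 500   # prints "20 MB/s"
```

If the number comes out well below what your link should sustain, the fix belongs on the network/SFTP side rather than in PHP timeouts.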
You probably need a standalone backup script run via cron, if you're getting a lot of timeouts. They would be more stable and consistent. Let me know if you want to go that route. I had something like that on the backburner for those that can't trust the backup process running as a XenForo addon. I think it has the same features, except closing the forum.
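A cron-driven script of that kind might look roughly like the sketch below. This is not the add-on's actual code; the database name, paths, and remote host are all placeholders:

```shell
#!/usr/bin/env bash
# Sketch of a standalone cron-driven backup, as an alternative to running
# the backup inside XenForo where PHP execution limits apply.
set -euo pipefail

DB_NAME="xenforo"
BACKUP_DIR="/tmp"
REMOTE="backup@backup.example.com:/backups"

# Timestamped archive name, e.g. xenforo-20240101-0300.sql.gz
backup_name() {
    echo "${DB_NAME}-$(date +%Y%m%d-%H%M).sql.gz"
}

run_backup() {
    local file="${BACKUP_DIR}/$(backup_name)"
    # --single-transaction takes a consistent InnoDB snapshot without
    # locking tables; gzip keeps the transfer small.
    mysqldump --single-transaction "${DB_NAME}" | gzip > "${file}"
    scp "${file}" "${REMOTE}/"
}

# Only do the real work when invoked with "run", e.g. from crontab:
#   0 3 * * * /usr/local/bin/forum-backup.sh run
# Otherwise just show the name the next backup would use.
if [[ "${1:-}" == "run" ]]; then
    run_backup
else
    echo "next backup: $(backup_name)"
fi
```

Because cron runs this outside PHP entirely, `max_execution_time` and the FPM/nginx timeouts discussed above never come into play.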
Very happy to hear that.
- Added options to indicate how many copies of the database and code backups to keep on the remote host.
- Added options to choose which local backup to delete (database or code) to make it more flexible, especially in the situation where the DropBox extension is also in use.
- Rewrote a lot of the code to make it more consistent with the DropBox extension.
NOTE: This version, going forward, is only compatible with ForumBackup 1.2.00 and above.
Thanks for this.
@Sunka, if you still use the DropBox and SFTP extensions, they now both have options letting you choose which local backup file to remove when the process finishes.
Replaced the scp_send function with the SFTP streaming function for better results with transfers. Fixed a small bug in the event that the remote directory doesn't include a forward slash.