[SolidMean] ForumBackup SFTP Transfer [Deleted]

At the moment, what you want to do isn't reliably possible. But I'll try to come up with something soon that will support that. Thanks.
 
I do not know whether this is connected with this (SFTP) addon or with your main forum backup addon.

Code:
Server Error Log
Error Info
ErrorException: Fatal Error: Maximum execution time of 120 seconds exceeded - library/SolidMean/ForumBackup/PHPSSH2/SFTP.php:128
Generated By: Unknown Account, 37 minutes ago
Stack Trace

#0 [internal function]: XenForo_Application::handleFatalError()
#1 {main}

Request State

array(3) {
  ["url"] => string(39) "https://www.pijanitvor.com/deferred.php"
  ["_GET"] => array(0) {
  }
  ["_POST"] => array(3) {
    ["_xfRequestUri"] => string(20) "/threads/mravi.1346/"
    ["_xfNoRedirect"] => string(1) "1"
    ["_xfResponseType"] => string(4) "json"
  }
}

But I can see that the files backup exists and it was transferred to the backup server, although the sizes are not the same (9.75 GB vs 9.82 GB).
Also, php.ini is set to max_execution_time = 240, so I do not know why the error says "Maximum execution time of 120 seconds exceeded".

Edit: I tried setting the time to 400 and ran the cron manually, and after exactly 180 seconds it showed me an Nginx 504 Gateway Timeout page. So somewhere there is another variable limiting the maximum execution time to 180 seconds which I need to change. But where?
 
It should be the PHP max_execution_time that you need to set. I suspect that the PHP config file you are changing is not the one Nginx is using.

If you create a phpinfo() file, and browse to that location, it should tell you which config file to change.
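For example, a small file like this (the info.php name is just an example, and you should delete the file afterwards since it exposes server details) will show the "Loaded Configuration File" path and the max_execution_time value that is actually in effect:

Code:
<?php
// info.php - upload it to the web root, browse to it, then delete it.
// phpinfo() prints the full configuration, including which php.ini was loaded.
phpinfo();

// The two values relevant here can also be read directly:
echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
echo 'max_execution_time: ' . ini_get('max_execution_time') . "\n";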

If you are changing the correct file, maybe there is an Nginx timeout that needs to be changed too. I'll try to see if that is the case.
 
If you are changing the correct file, maybe there is an Nginx timeout that needs to be changed too. I'll try to see if that is the case.

@Sunka Nginx hands PHP requests off to PHP-FPM, which executes the PHP code and renders it. PHP-FPM has its own default timeout, but I forget what it is.

Edit: Just saw you said Nginx is returning a 504. That's Nginx and/or PHP timing out and not having a long enough execution time, without a doubt. Please see this link for some tricks on fixing it.

Edit 2 (some parameters for Nginx are included at the link above). The settings below are the changes that I recommend, adjusted for the execution times you mentioned above:
Code:
proxy_connect_timeout 240s;
proxy_send_timeout 240s;
proxy_read_timeout 240s;
fastcgi_send_timeout 240s;
fastcgi_read_timeout 240s;
 
If you are changing the correct file, maybe there is an Nginx timeout that needs to be changed too. I'll try to see if that is the case.
Here is the phpinfo attachment:


@Sunka Nginx hands PHP requests off to PHP-FPM, which executes the PHP code and renders it. PHP-FPM has its own default timeout, but I forget what it is.

Edit: Just saw you said Nginx is returning a 504. That's Nginx and/or PHP timing out and not having a long enough execution time, without a doubt. Please see this link for some tricks on fixing it.

Edit 2 (some parameters for Nginx are included at the link above). The settings below are the changes that I recommend, adjusted for the execution times you mentioned above:
Code:
proxy_connect_timeout 240s;
proxy_send_timeout 240s;
proxy_read_timeout 240s;
fastcgi_send_timeout 240s;
fastcgi_read_timeout 240s;

request_terminate_timeout is disabled in php-fpm.conf

php-fpm.conf
Code:
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
log_level = warning
pid = /var/run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/www-error.log
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
include=/usr/local/nginx/conf/phpfpmd/*.conf

[www]
user = nginx
group = nginx

listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
;listen.backlog = -1

;listen = /tmp/php5-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0666

pm = dynamic
pm.max_children = 8
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
pm.max_requests = 200

; PHP 5.3.9 setting
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 10s;

rlimit_files = 65536
rlimit_core = 0

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
; Default Value: 0
;request_slowlog_timeout = 0
slowlog = /var/log/php-fpm/www-slow.log

pm.status_path = /phpstatus
ping.path = /phpping
ping.response = pong

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other extensions to
; execute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
security.limit_extensions = .php .php3 .php4 .php5

; catch_workers_output = yes
php_admin_value[error_log] = /var/log/php-fpm/www-php.error.log
php_admin_value[disable_functions] = shell_exec
php_admin_value[memory_limit] = 256M
 


request_terminate_timeout is disabled in php-fpm.conf
-snip-

Did you try the changes I listed above in your Nginx site config file? If so, would you mind posting your Nginx config file for this site for us? If you don't want to do it publicly, I can look at it in private.
 
Did you try the changes I listed above in your Nginx site config file? If so, would you mind posting your Nginx config file for this site for us? If you don't want to do it publicly, I can look at it in private.

There was already an error with the first part of the addon (the pure pigz backup of the files), so only in one case did it get as far as the SFTP transfer, and even then only partly.
I had a conversation with @eva2000 & @RoldanLT on the Centmin Mod forum, because I use the Centmin Mod installation script for installing nginx, MariaDB, etc.

As eva2000 said in that thread:
For very large data sets, there's no guarantee PHP doesn't time out or crash and you end up with incomplete backups. There are workarounds you can use to further fix this. But as your data set gets even larger, you'll end up in a position of needing a non-PHP based solution for backups.

But I do not know of any such solution, so I am tied to a PHP script for backups.

I still use the @SneakyDave addon to back up files and database, and to transfer only the database backup from the local server to Dropbox. But for transferring the files backup (10 GB) it is not a good solution, because I get a 504 error page from Nginx. I worked around that by just making the backup with this addon and then starting rsnapshot from the remote server.


I ended up with these changes and settings:
/etc/centminmod/php.d/zzz_customphp.ini

Code:
max_execution_time = 600

/usr/local/nginx/conf/php.conf

Code:
fastcgi_connect_timeout 120;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;

I first tried a value of 300 (5 minutes), but it was not enough. The backup ended successfully after 6 minutes and 35 seconds.
 
Oh, you timed out on the FTP transfer, not the actual backup? If so, measure your FTP transfer speed, as you could have slow transfers.

Several times the error showed up before the FTP transfer, in the middle of the backup (pigzing).
I think the FTP speed is OK, because 14 minutes is enough for rsnapshot to pull 11 GB from the local server to the remote server.
 
You probably need a standalone backup script run via cron if you're getting a lot of timeouts. It would be more stable and consistent. Let me know if you want to go that route. I had something like that on the back burner for those that can't trust the backup process running as a XenForo addon. I think it has the same features, except closing the forum.
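Roughly, it would be something along the lines of this sketch. Because it runs from the command line via cron, the web server timeouts don't apply. Everything in it (paths, database name, remote host, retention period) is a placeholder, not the final script:

Code:
<?php
// backup.php - run from cron, e.g. "0 3 * * * php /path/to/backup.php"
// All paths, credentials and hostnames below are placeholders.

$stamp   = date('Ymd-His');
$workDir = '/backup';                               // local staging directory
$sqlFile = "$workDir/forum-db-$stamp.sql.gz";
$tarFile = "$workDir/forum-files-$stamp.tar.gz";

// 1. Dump the database and compress it.
exec('mysqldump --single-transaction -u backupuser -pPASSWORD forumdb'
    . ' | gzip > ' . escapeshellarg($sqlFile), $out, $rc);
if ($rc !== 0) { exit("Database dump failed\n"); }

// 2. Archive the forum files.
exec('tar -czf ' . escapeshellarg($tarFile) . ' /var/www/forum', $out, $rc);
if ($rc !== 0) { exit("File archive failed\n"); }

// 3. Push both archives to the remote host (scp shown for simplicity).
foreach (array($sqlFile, $tarFile) as $file) {
    exec('scp ' . escapeshellarg($file) . ' backup@remote.example.com:/backups/', $out, $rc);
    if ($rc !== 0) { exit("Transfer of $file failed\n"); }
}

// 4. Remove local copies older than 5 days.
foreach (glob("$workDir/forum-*") as $old) {
    if (filemtime($old) < time() - 5 * 86400) {
        unlink($old);
    }
}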
 
You probably need a standalone backup script run via cron if you're getting a lot of timeouts. It would be more stable and consistent. Let me know if you want to go that route. I had something like that on the back burner for those that can't trust the backup process running as a XenForo addon. I think it has the same features, except closing the forum.

I'd actually be interested in this as well.
 
You probably need a standalone backup script run via cron if you're getting a lot of timeouts. It would be more stable and consistent. Let me know if you want to go that route. I had something like that on the back burner for those that can't trust the backup process running as a XenForo addon. I think it has the same features, except closing the forum.
Very happy to hear that (y)
 
@Sunka I plan to do releases for SFTP and the DropBox addon so that you can choose separately to delete the local database backup, and/or the local file system backup. I think that should resolve your first problem.

I haven't had a chance to come up with a standalone (outside of XenForo) script to do these backups, but if/when I do, I'll let you know.
 
SneakyDave updated [SolidMean] ForumBackup SFTP Transfer with a new update entry:

12/27/2015: Version 1.0.10 (REQUIRES ForumBackup 1.2.00 or higher)

- Added options to indicate how many copies of the database and code backups to keep on the remote host.
- Added options to choose which local backup to delete (database or code) to make it more flexible, especially in the situation where the DropBox extension is also in use.
- Rewrote a lot of the code to make it more consistent with the DropBox extension.

NOTE: This version, going forward, is only compatible with ForumBackup 1.2.00 and above.

Read the rest of this update entry...
 
@Sunka if you still use the DropBox and SFTP extensions, they now both have options letting you choose which local backup file to remove when the process finishes.

Also, for anybody who uses both the DropBox and SFTP extensions of this addon and DOESN'T want to keep local copies of the backups on the server: make sure to uncheck the "Delete local database backup file?" and "Delete local code backup file?" options in the SFTP settings, but leave them checked in the DropBox settings. If you don't, you might receive an fopen() type of error message when the DropBox code runs and finds that there is no longer a backup file to upload.
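For illustration only (this is not the actual extension code), the error happens because by the time the DropBox step runs, the file it wants to fopen() has already been deleted by the SFTP step, which a simple existence check would avoid:

Code:
<?php
// Simplified illustration only - not the actual DropBox extension code.
// If the SFTP step already deleted the local backup, there is nothing
// left to open, so fopen() fails.
$backupFile = '/backup/forum-db.sql.gz';   // placeholder path

if (!is_file($backupFile)) {
    echo "Local backup no longer exists, skipping upload.\n";
} else {
    $handle = fopen($backupFile, 'rb');
    // ... stream the file to the remote service, then close it ...
    fclose($handle);
}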
 
@Sunka if you still use the DropBox and SFTP extensions, they now both have options letting you choose which local backup file to remove when the process finishes.
Thanks for this.
I am using the Dropbox extension, but not SFTP, because I cannot make it work for the 10 GB forum files backup.

No matter what settings I set up in the Nginx and PHP config files, it throws an error, sometimes during the zipping and sometimes during the transfer to the remote server.
As an alternative, I am using a cron job to gzip the whole forum folder and then rsnapshot to pull that gzipped folder to the remote server and rotate it there as a 5-day backup, plus another cron job to delete backups older than 5 hours on my local server.
 
They provide FTP, but not SFTP support? I don't know if I'd use a host like that, just a personal opinion.

That being said, I don't think I'll add non-secure FTP support.
 