External backup

KSA

We are running a 1TB dedicated server, of which 600GB is already consumed. This is making it difficult for the periodic backup to run due to low disk space, and I was wondering if there is a possibility to take a periodic backup and store it directly on an external server via API or remote connection. Is there something out there that would allow direct communication with the server to make an automatic backup, let's say every 24 hours?
 
Have you looked into cloud backup solutions? We don't use one yet at work (though I think our parent company is starting to pilot backing up to Azure), but a lot of backup products have cloud offerings or options now.
 
Move your attachment data to AWS or similar; there's a plugin to do it.

It's much cheaper to host it there, and your backups become much smaller.
Good luck restoring a 600GB file vs. a 2GB one in a DR scenario.
 
I use s3cmd to sync a group of daily archive backups to a private AWS S3 bucket, but one could potentially use it to sync a directory (that isn't zipped or archived) to S3. It wouldn't be an effective full-recovery solution, but it wouldn't fill your drive with temp files.
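For what it's worth, the sync itself can be a cron-driven one-liner. A minimal sketch, assuming the daily archives land in /backups, the bucket is s3://example-backups (both hypothetical names), and s3cmd --configure has already been run:

# push the local archive directory to the bucket; --delete-removed drops remote copies of archives deleted locally
s3cmd sync --delete-removed /backups/ s3://example-backups/daily/

# crontab entry to run the same command every day at 03:00
0 3 * * * s3cmd sync --delete-removed /backups/ s3://example-backups/daily/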

Is your tmp directory located somewhere with ample space for a single-archive backup? Maybe it just needs to be changed.

df /tmp

Does your host offer second-drive add-ons or network-attached storage that you could use? A 2nd drive could be used for /tmp and for creating backups, then you could sync the archives to AWS or similar.
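A rough sketch of the second-drive idea, assuming the extra disk shows up as /dev/sdb1, gets mounted at /mnt/backup, and the site lives under /var/www (all hypothetical names):

# format and mount the second drive, then give it tmp and archive directories
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/backup
mkdir -p /mnt/backup/tmp /mnt/backup/archives

# point the backup job's temp space at the new drive (for tools that honour TMPDIR) and write the archive there too
TMPDIR=/mnt/backup/tmp tar czf /mnt/backup/archives/site-$(date +%F).tar.gz /var/www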
 
I've been using ZFS snapshots and then shipping the snapshots to a remote site using zrepl very effectively. This basically gives free incremental backups, compression and a flexible retention policy.
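For anyone who hasn't used it, this is roughly what zrepl automates. A minimal manual sketch, assuming a local dataset tank/www and a remote box reachable as backuphost (hypothetical names):

# initial full replication to the remote pool
zfs snapshot tank/www@2024-06-01
zfs send tank/www@2024-06-01 | ssh backuphost zfs receive -F backup/www

# later runs only ship the delta since the previous snapshot
zfs snapshot tank/www@2024-06-02
zfs send -i tank/www@2024-06-01 tank/www@2024-06-02 | ssh backuphost zfs receive backup/www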

I do create weekly mysqldumb backups from a ZFS snapshot rather than daring to let it run against a live database instance.
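One way the dump-from-snapshot step can look, sketched on the assumption that the MySQL datadir sits on its own dataset tank/mysql and archives go to /backups (hypothetical names): clone the snapshot, run a throwaway mysqld against the clone, dump from it, then tear it all down.

zfs snapshot tank/mysql@weekly
zfs clone tank/mysql@weekly tank/mysql-dump

# start a temporary instance on the clone; --skip-networking avoids clashing with the live server
mysqld_safe --datadir=/tank/mysql-dump --socket=/tmp/dump.sock --skip-networking &

# once it has finished InnoDB recovery, dump everything and shut it down
mysqldump --socket=/tmp/dump.sock --all-databases | gzip > /backups/weekly-$(date +%F).sql.gz
mysqladmin --socket=/tmp/dump.sock shutdown
zfs destroy tank/mysql-dump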
 

The issue is that right now there is no way to take a backup on the server and then ship it somewhere. That is why I need something that communicates with the server to initiate a backup from outside the server.
 
LOL @ mysqldumb
That was, believe it or not, a typo.

I ship entire snapshots (ZFS snapshots are atomic) as the primary form of backup, rather than a more traditional backup. I've actually had to restore from these before.

Since changing filesystems is really hard, I'd recommend moving files/attachments off to another provider. XenForo does have support for that, but you need to be careful that it doesn't ship the entire contents of internal_data off, as the code cache stuff needs to be local. This is actually something of a design flaw.
 