What's your Backup Solution?

How fast is a restore from rdiffweb? Is it pretty quick?

Pretty much instant for small files/folders

For any 'serious' restoration jobs, you'd use rdiff-backup --restore-as-of manually
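For example (a minimal sketch; the repository and target paths here are just placeholders):

Code:
# restore the backup's state from 3 days ago into a scratch directory
rdiff-backup --restore-as-of 3D /backups/www/forum /tmp/forum-3-days-ago

# pull the latest mirror of a single file from a remote repository over SSH
rdiff-backup --restore-as-of now user@backuphost::/backups/www/forum/config.php /tmp/config.php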

Was it easy to set up, Dark?

Very. The command-line args are almost all optional and similar to tools like rsync.
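To give a rough idea of how little is needed (a sketch only; the host and paths are placeholders):

Code:
# back up the web root to a remote rdiff-backup repository over SSH
rdiff-backup /var/www user@backuphost::/backups/www

# optionally prune increments older than three months (--force allows removing more than one)
rdiff-backup --remove-older-than 3M --force user@backuphost::/backups/www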

Reading the documentation right now; rdiff-backup + rdiffweb seems like a solid solution. https://clientarea.ramnode.com/cart.php?gid=11 is $20 a year, so if he can keep months of backups in roughly twice the size of a normal tar.gz, 50GB is way more than enough for me at least. I still need to get the flow down for the database backups, but this direction seems like the right one for me.

Database backups are one of the caveats: you ideally need to store the dumps as uncompressed .sql, or the backup size/time snowballs.
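A minimal sketch of what that looks like (database name and paths are made up; --single-transaction assumes InnoDB tables): dump to plain .sql so rdiff-backup only has to store the lines that changed, rather than a completely different compressed blob every night.

Code:
# plain-text dump, no gzip
mysqldump --single-transaction forum_db > /backups/sql/forum_db.sql

# the dump directory then goes into the regular rdiff-backup run
rdiff-backup /backups/sql user@backuphost::/backups/sql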
 
Any tip on how to do that? I have a DiskStation DS213 and would like to back up my server to it.
I have to clear up something: I have that NAS at home, but we also use a QNAP rackmount NAS as a fileserver (our forum is pretty large and uses a lot of attachments too). Between those two QNAPs you can rsync quite easily.

You can use any machine for rsync, though. Just set up rsync on your NAS (use port 873, the rsync daemon default, and set up a user with a password).
If that's done, you can build some rsync scripts on your web/db server (just an example):
Code:
rsync -avzr /yourwebserver/directory --password-file=/storage/rsyncd.secrets user@your_nas_ip::dest_dir/
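The ::dest_dir part of that command points at an rsync daemon "module" on the NAS. The QNAP/Synology GUI normally generates this for you, but the module definition boils down to something like this (names and paths are illustrative only):

Code:
# /etc/rsyncd.conf on the NAS
[dest_dir]
    path = /share/backups/webserver
    auth users = user
    secrets file = /etc/rsyncd.secrets   # lines of the form  user:password
    read only = no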
 
Any tip on how to do that? I have a DiskStation DS213 and would like to back up my server to it.

Does it have rsync, or can it be installed? If so, you could automate a mysql dump (+ gzip) on your server and then have your NAS rsync your entire server 10-15 minutes later.
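As a rough sketch (hostnames, credentials and paths are placeholders, and the exact crontab format on the NAS may differ), that could be two cron entries, one on each machine:

Code:
# on the web server (crontab -e): dump and gzip at 03:00
# (mysqldump credentials assumed to come from ~/.my.cnf)
0 3 * * * mysqldump --single-transaction forum_db | gzip > /var/backups/forum_db.sql.gz

# on the NAS (its own crontab): pull the backup directory 15 minutes later
# (assumes passwordless SSH keys are already set up)
15 3 * * * rsync -avz -e ssh backupuser@webserver:/var/backups/ /volume1/backups/webserver/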
 
I have a storage VPS (500GB) where I rsync the site daily/weekly/monthly and keep 30 rolling days' worth of database dumps.

Code:
matt@storage:/srv/samba/mattshare$ du -sh *
1.6G    Databases
21G     Z22SE
21G     Z22SE_MONTHLY
21G     Z22SE_WEEKLY

I then have a script on there which mirrors to a NAS in my house.
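As a sketch of how the 30-day rotation and the mirror step might be scripted (the NAS address is a placeholder; the share path is the one shown above):

Code:
# prune database dumps older than 30 days on the storage VPS
find /srv/samba/mattshare/Databases -name '*.sql.gz' -mtime +30 -delete

# mirror the whole share to the NAS at home
rsync -avz --delete /srv/samba/mattshare/ user@home-nas:/volume1/backups/mattshare/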
 
Even with the --lock-tables option?
Mydumper locks tables by default. However, sometimes mydumper cannot finish the backup job (don't ask me why, no idea :)) and the site would be down until I check it in the morning. So from my experience, I prefer to put up a maintenance page, wait for 30 seconds, then start the dump. Once the backup has finished locally, I remove the maintenance page and start transferring everything to my NAS. It works flawlessly with this approach.
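Something along these lines, as a sketch only (the maintenance-page mechanism, credentials and paths are all placeholders; mydumper's table locking is left at its default):

Code:
# put up the maintenance page (here a flag file the web server checks) and let traffic drain
touch /var/www/forum/maintenance.flag
sleep 30

# dump locally with mydumper, then bring the site back
mydumper --user backup --password 'secret' --database forum_db --outputdir /var/backups/mydumper
rm /var/www/forum/maintenance.flag

# only now transfer everything to the NAS
rsync -avz /var/backups/mydumper/ user@nas:/volume1/backups/forum/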
 
Mydumper locks tables by default. However, sometimes mydumper cannot finish the backup job (don't ask me why, no idea :)) and the site would be down until I check it in the morning. So from my experience, I prefer to put up a maintenance page, wait for 30 seconds, then start the dump. Once the backup has finished locally, I remove the maintenance page and start transferring everything to my NAS. It works flawlessly with this approach.

Is mydumper a third-party utility, or part of mysqldump?
 
Mydumper locks tables by default. However, sometimes mydumper cannot finish the backup job (don't ask me why, no idea :)) and the site would be down until I check it in the morning. So from my experience, I prefer to put up a maintenance page, wait for 30 seconds, then start the dump. Once the backup has finished locally, I remove the maintenance page and start transferring everything to my NAS. It works flawlessly with this approach.
Are you using mydumper with MariaDB 5.2, 5.5, or 10.0.x?
 
No need to shut down your forum to do a mysql dump.
I'd rather not attempt to export the tables while the members still have free roam and risk corruption or discrepancies between tables. Let's say a user were to make a new thread just after I had finished backing up the xf_posts table. My stored backup would have a record of the new thread but no record of any of the posts it contains. That is, unless I misunderstand how the export facility is supposed to work.

Tell me, have I got it completely wrong? I've always closed my forum to run backups out of force of habit.
 
I'd rather not attempt to export the tables while the members still have free roam and risk corruption or discrepancies between tables. Let's say a user were to make a new thread just after I had finished backing up the xf_posts table. My stored backup would have a record of the new thread but no record of any of the posts it contains. That is, unless I misunderstand how the export facility is supposed to work.

Tell me, have I got it completely wrong? I've always closed my forum to run backups out of force of habit.

If you use the "--opt" option (which is enabled by default and includes --lock-tables), the tables are locked before they are dumped to disk.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_lock-tables
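To illustrate (database name and output path are placeholders), the two usual ways to get a consistent dump without closing the forum:

Code:
# --opt is on by default and includes --lock-tables: tables are locked while they are dumped
mysqldump --opt --lock-tables forum_db > /var/backups/forum_db.sql

# for an all-InnoDB schema, --single-transaction gives a consistent snapshot without blocking writes
mysqldump --single-transaction forum_db > /var/backups/forum_db.sql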
 
Automatically take the site offline every night, run a slightly modified version of automysqlbackup, then rsync the gzip'd sql files and the web directories to two different backup servers in two different geographical locations.

I've been meaning to convert from automysqlbackup to percona's version for a while now, but other priorities get in the way. It only takes about 3-4 minutes to run the backup, and it happens at 3:30AM, so the downtime is no big deal.

I build testing/dev installs from the backups, so I can be certain that everything works in reverse.
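A rough sketch of that nightly sequence (the maintenance mechanism, paths and hostnames are all placeholders; automysqlbackup is simply invoked as-is):

Code:
# close the site, dump, ship to two offsite locations, then reopen
touch /var/www/forum/maintenance.flag
automysqlbackup
rsync -avz /var/backups/db /var/www/forum backupuser@backup1.example.com:/backups/forum/
rsync -avz /var/backups/db /var/www/forum backupuser@backup2.example.com:/backups/forum/
rm /var/www/forum/maintenance.flag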
 
Any tip on how to do that? I have a DiskStation DS213 and would like to back up my server to it.

If the Synology box is at your house, by all means change the SSH port to something other than 22. You can set up firewall rules to deny IPs other than your server's, so I guess that's an option too, but I wanted to be able to SSH in from anywhere, so I went the non-standard-port route instead.

The other problem I had was that Synology's OS is set up so that root is the only user with SSH access. You can manually edit the /etc/passwd file to allow shell access for other users, but if you're logged in as "backupuser" (or whatever), then ~/ will always point to /root, even if the user home directory options are enabled. (Edit: oddly enough, putting the authorized_keys file in /volume1/homes/user/.ssh/authorized_keys works as it should... no idea why... but I was worried at first that the Synology wouldn't find it if it weren't in /root/.ssh/)

The way I set mine up, the production web server logs in to my Synology box at home with a shared key and uploads the backups. If I were to do it over again, I would probably do it in reverse: have a cron job on the Synology box log into the production server and run rsync. For one thing, I didn't want my web server having root access to my personal NAS (which resulted in the issues above with trying to create a separate shell account on the Synology NAS with restricted access). And, absolute worst case, if a malicious user were to get root access to your production machine, they could easily log into your backup locations with the shared keys and delete the backups there as well.

But bottom line, it's as simple as writing a little shell script and scheduling it from /etc/cron.d/ or the crontab.
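A file in /etc/cron.d/ is a complete crontab fragment with a user field, so the scheduling piece can be as small as this (the script name and time are placeholders; the script body would just wrap an rsync command like the one below):

Code:
# /etc/cron.d/nas-backup -- run the rsync script nightly at 04:00 as root
0 4 * * * root /usr/local/bin/backup-to-nas.sh >> /var/log/backup-to-nas.log 2>&1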

rsync -avz --delete -e "ssh -i /path/to/your/id_rsa -pPORTNO" /backup/directory --exclude /but/not/this/one user@you.synology.me:/volume1/wherever/you/store/backups

That should get you most of the way there, with your non-standard SSH port number in place of "PORTNO" (like -p22222, etc.), your id_rsa path plugged in (probably ~/.ssh/id_rsa), and all your usernames and paths filled out appropriately. The example above assumes you're pushing data from the production machine to your Synology. If you wanted to run it from the Synology instead, it would look more like this:

rsync -avz --delete -e "ssh -i ~/.ssh/id_rsa -p22222" user@yourproductionserver.com:/var/www /volume1/backuppath

But don't quote me on that second one. You'd need to test-run it somewhere inconspicuous to make sure you don't accidentally wipe your production server's webroot. I'm also not sure off the top of my head how the --exclude switches work in that direction; I think they just get plugged in after the /var/www, but I haven't tried it and haven't looked at the rsync man page.

Edit: And if this is all gibberish to you, start by reading a few how-tos on setting up SSH logins with shared keys (passwordless). It's really easy to do; it basically just requires running a few simple shell commands and then getting your public key into the authorized_keys file on the remote server.
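The key setup itself is only a couple of commands (a sketch; the port and hostname follow the example above, and on a Synology the key may need to live under /volume1/homes/user/.ssh/ as noted earlier):

Code:
# on the machine that initiates the rsync: generate a key pair (no passphrase, for unattended use)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# install the public key into the remote user's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22222 user@you.synology.me

# test: this should log you in without a password prompt
ssh -i ~/.ssh/id_rsa -p 22222 user@you.synology.me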
 