Jack looks at the available forum software (due diligence), favouring whatever includes a native importer and will incur the least resistance and downtime. Jack makes an informed decision and does not use XF, due to the lack of decent native importers.

Jack is a forum administrator running phpBB. Jack's site is a medium-sized hobby site, with 750k posts (not quite classed as a big board). Jack has heard about XenForo and would like to convert. Jack saves up and pays the $140 for the XenForo licence out of his own pocket. Jack has no more money to hire help, so he tries to go it alone. Jack doesn't know about XenForo's CLI importer, and even if he did, it wouldn't work for phpBB. The conversion takes over a day for him to do, during which time his members are constantly bugging him: "When is fightclubfans.net back, Jack?" By the end of the process he's wishing he'd never started. What a great way for Jack to be welcomed into the world of XenForo. I am Jack's advocate, is all.
The question is, though: could a non-technical person run the CLI importer?
My point is made, anyway. I'm not going to argue over it, there is no point, I've said all I wanna say.
*Jack is fictional.
Sounds like for MySQL restores and backups you want to use the multi-threaded MySQL backup/restore tool called mydumper (http://www.mydumper.org/). It's 3x to 10x faster than mysqldump/restore (http://vbtechsupport.com/1716/), but you'd have to have backed up the database via mydumper to take advantage of the multi-threaded restore speed-up.
Backing up data size: 22,435 MB, with an on-disk size of 23 GB.

MySQL backup speed
- mysqldump backup: 22,435MB backed up in 444.21 seconds = 50.50 MB/s or 177.53 GB/hr
- mydumper backup 12 threads: 22,435MB backed up in 165.61 seconds = 135.46 MB/s or 476.22 GB/hr
MySQL restore speed
- mysql restore: 18,397 MB sql file restored in 1261.77 seconds = 14.58 MB/s or 51.25 GB/hr
- myloader restore 12 threads: 18,528 MB sql files cumulative size restored in 353.62 seconds = 53.92 MB/s or 189.56 GB/hr
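Figures like these are straightforward to reproduce yourself: wrap each command in time and divide the dump size by the elapsed seconds to get MB/s. The command shapes below are illustrative only (the database name, thread count, and credentials are assumptions):

Code:
# Single-threaded baseline (mysqldump), then the multi-threaded tool.
time mysqldump -u root -p somedb > somedb.sql
time mydumper -B somedb -t 12 -u root -p somepassword -o export/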
Makes me wonder if MySQL has some sort of multi-threading functionality built in for importing SQL data. Technically it should definitely be possible, but it'd have to load the entire SQL file into RAM just to be able to segment the data, which I don't believe it does.
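You can approximate that segmentation by hand today: if each table is dumped to its own file (as mydumper does), independent mysql client connections can load the files in parallel. A rough sketch, with the file layout, database name, and credentials all assumed:

Code:
# Load per-table .sql files with 4 parallel mysql clients.
ls /backups/forum/*.sql | xargs -P4 -I{} \
    sh -c 'mysql -u backupuser -psecret forum < "{}"'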
This is where it's nice being on a cloud server setup that's billed hourly: increase the resources to the max allowed, process the import/rebuild, and then decrease it back to normal.

When the site matched the hosting, and the hosting didn't suck, the conversions went nice and smooth, regardless of size.
You can install mydumper through yum, from the Axivo repository:
Code:
yum --enablerepo=axivo install mydumper

Right now it is available in the Red Hat 5 repository; I will have it built for Red Hat 6 when I get a bit of free time on hand.

Dump Usage:
$ mydumper -?
Usage:
mydumper [OPTION...] multi-threaded MySQL dumping
Help Options:
-?, --help Show help options
Application Options:
-B, --database Database to dump
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-o, --outputdir Directory to output files to, default ./export-*/
-s, --statement-size Attempted size of INSERT statement in bytes, default 1000000
-r, --rows Try to split tables into chunks of this many rows
-c, --compress Compress output files
-e, --build-empty-files Build dump files even if no data available from table
-x, --regex Regular expression for 'db.table' matching
-i, --ignore-engines Comma delimited list of storage engines to ignore
-m, --no-schemas Do not dump table schemas with the data
-l, --long-query-guard Set long query timer in seconds, default 60
-k, --kill-long-queries Kill long running queries (instead of aborting)
-b, --binlogs Get the binary logs as well as dump data
-d, --binlog-outdir Directory to output the binary logs to, default ./export/binlogs/
-h, --host The host to connect to
-u, --user Username with privileges to run the dump
-p, --password User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
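Putting some of those options together: to dump just one database, compressed, with each table split into row chunks so big tables can be written (and later restored) in parallel, something like this should work. The database name, chunk size, credentials, and output path are placeholders, not values from the posts above:

Code:
# Hypothetical: dump the 'forum' database with 8 threads, compressing
# output files and splitting each table into ~100,000-row chunks.
mydumper -B forum -t 8 -c -r 100000 -u backupuser -p secret -o /backups/forum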
Restore Usage:
$ myloader -?
Usage:
myloader [OPTION...] multi-threaded MySQL loader
Help Options:
-?, --help Show help options
Application Options:
-d, --directory Directory of the dump to import
-q, --queries-per-transaction Number of queries per transaction, default 1000
-o, --overwrite-tables Drop tables if they already exist
-B, --database An alternative database to restore into
-e, --enable-binlog Enable binary logging of the restore data
-h, --host The host to connect to
-u, --user Username with privileges to run the dump
-p, --password User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
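For the restore side, a sketch using only the flags listed above (the dump directory, target database, and credentials are placeholders): reload with 12 threads, dropping any tables that already exist.

Code:
# Hypothetical: restore a mydumper export with 12 threads,
# overwriting tables that already exist.
myloader -d export-20120505-004931 -t 12 -o -B forum -u backupuser -p secret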
This is how I backup the database:

Code:
$ mydumper -c -t 8 -v 3 -B axivo -u root -p somepassword
** Message: Connected to a MySQL server
** Message: Started dump at: 2012-05-05 00:49:31
** Message: Thread 1 dumping data for `axivo`.`xf_addon`
** Message: Thread 3 dumping data for `axivo`.`xf_admin`
** Message: Thread 4 dumping data for `axivo`.`xf_admin_navigation`
** Message: Thread 5 dumping data for `axivo`.`xf_admin_permission`
** Message: Thread 6 dumping data for `axivo`.`xf_admin_permission_entry`
** Message: Thread 7 dumping data for `axivo`.`xf_admin_template`
** Message: Thread 8 dumping data for `axivo`.`xf_admin_template_include`
...
** Message: Thread 8 dumping data for `axivo`.`xf_warning`
** Message: Thread 8 dumping data for `axivo`.`xf_warning_action`
** Message: Thread 3 dumping data for `axivo`.`xf_warning_action_trigger`
** Message: Thread 5 dumping data for `axivo`.`xf_warning_definition`
** Message: Thread 2 dumping schema for `axivo`.`xf_addon`
** Message: Thread 4 dumping schema for `axivo`.`xf_admin`
** Message: Thread 2 dumping schema for `axivo`.`xf_admin_log`
** Message: Thread 2 dumping schema for `axivo`.`xf_admin_navigation`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_permission`
** Message: Thread 4 dumping schema for `axivo`.`xf_admin_permission_entry`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_search_type`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_template`
** Message: Thread 8 dumping schema for `axivo`.`xf_admin_template_compiled`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_template_include`
...
** Message: Thread 8 dumping schema for `axivo`.`xf_warning`
** Message: Thread 7 dumping schema for `axivo`.`xf_warning_action`
** Message: Thread 8 dumping schema for `axivo`.`xf_warning_action_trigger`
** Message: Thread 5 dumping schema for `axivo`.`xf_warning_definition`
** Message: Thread 3 shutting down
** Message: Thread 7 shutting down
** Message: Thread 2 shutting down
** Message: Thread 8 shutting down
** Message: Thread 6 shutting down
** Message: Thread 4 shutting down
** Message: Thread 5 shutting down
** Message: Thread 1 shutting down
** Message: Non-InnoDB dump complete, unlocking tables
** Message: Finished dump at: 2012-05-05 00:49:32
It will create a /export-20120505-004931 directory where your backup is present.

When RM is released, I will be able to build an auto-importer from SQLite to MySQL, so the repodata is automatically displayed in categories, allowing everyone to see what the repository contains. Sure, you can list the contents now with yum, but it will look fancier in RM.
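Tying it together, a nightly backup job might look like the sketch below. The paths, database name, credentials, and seven-day retention are all assumptions for illustration:

Code:
#!/bin/sh
# Hypothetical nightly mydumper backup with 7-day retention.
BACKUP_ROOT=/backups/mysql
STAMP=$(date +%Y%m%d-%H%M%S)
OUT="$BACKUP_ROOT/export-$STAMP"
mkdir -p "$OUT"
mydumper -c -t 8 -B forum -u backupuser -p secret -o "$OUT"
# Prune backup directories older than 7 days.
find "$BACKUP_ROOT" -maxdepth 1 -name 'export-*' -mtime +7 -exec rm -rf {} +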
I know a site (not mine) that would be interested in moving toward XenForo. It currently uses phpBB.
But with over 1 billion posts and 100 million members, that may be an issue.
At that number it "may be" an issue regardless of what software you're moving to. Are you sure you're not exaggerating those numbers?
It has 2,048,129,272 articles posted with 26,013,251 registered users.
Doubt they'll convert to XenForo; they don't even use phpBB as such anymore. That's just a fallacy people like to claim, saying "the biggest board on the web uses phpBB". I once read a long article about that community: they pretty much stripped out what was phpBB and built their own everything for it. They even trim away god knows how many posts each day from the database(s). They have numerous data centres running that site; otherwise they'd far exceed those posting figures.
Stripped it of a lot of code (A LOT is an understatement) and added a few optimized things along the way. You are correct.