7 days to upgrade Spacebattles

Jack is a forum administrator running phpBB. Jack's site is a medium-sized hobby site, with 750k posts (not quite classed as a big board). Jack has heard about XenForo and would like to convert. Jack saves up and pays the $140 for the XenForo licence out of his own pocket. Jack has no more money to hire help, so he tries to go it alone. Jack doesn't know about XenForo's CLI importer, and even if he did it wouldn't work for phpBB. The conversion takes him over a day, during which time his members are constantly bugging him with "When is fightclubfans.net back, Jack?", and by the end of the process he's wishing he'd never started. What a great way for Jack to be welcomed into the world of XenForo. I am Jack's advocate, is all.

The question is, though: could a non-technical person run the CLI importer?


My point is made, anyway. I'm not going to argue over it; there's no point. I've said all I wanna say. :)

*Jack is fictional.
Jack looks at the available forum software (due diligence), checking which packages include a native importer and which will incur the least resistance and downtime. Jack makes an informed decision and does not use XF, due to the lack of decent native importers.

If Jack made the decision to go with XF by migrating from phpBB without knowing there is no direct converter, then more fool him. Sorry, but research makes up 90% of every implementation. Period. Jack needs to retire :)
 
Makes me wonder if MySQL has some sort of multi-threading functionality built in for importing SQL data. Technically it should definitely be possible, but it would have to load the entire SQL file into RAM just to be able to segment the data, which I don't believe it does.
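In principle, though, the file could be segmented as a stream rather than loaded whole into RAM. A rough sketch of the idea (my own illustration, not anything MySQL does natively), assuming a standard mysqldump file where each table's section begins with a "-- Table structure for table" comment:

Code:
# split the dump into one piece per table (xx00, xx01, ...) at each
# table-structure comment; csplit streams the file, nothing is held in RAM
csplit -z dump.sql '/^-- Table structure for table/' '{*}'

# load the pieces with parallel mysql clients; this ignores the SET/charset
# header in the first piece, so it's an illustration rather than a safe
# general-purpose restore
for f in xx*; do
    mysql -u root -p'secret' mydb < "$f" &
done
wait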
Sounds like for MySQL restores and backups you want to use the multi-threaded MySQL backup/restore tool called mydumper (http://www.mydumper.org/); it's 3x to 10x faster than mysqldump/restore (http://vbtechsupport.com/1716/), but you'd have to have backed up the database via mydumper to take advantage of the multi-threaded restore speed-up.

Backing up data size: 22,435 MB, with an on-disk size of 23 GB.
MySQL backup speed
  • mysqldump backup: 22,435MB backed up in 444.21 seconds = 50.50 MB/s or 177.53 GB/hr
  • mydumper backup 12 threads: 22,435MB backed up in 165.61 seconds = 135.46 MB/s or 476.22 GB/hr
MySQL restore speed
  • mysql restore: 18,397 MB sql file restored in 1261.77 seconds = 14.58 MB/s or 51.25 GB/hr
  • myloader restore 12 threads: 18,528 MB sql files cumulative size restored in 353.62 seconds = 53.92 MB/s or 189.56 GB/hr
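As a quick sanity check of the arithmetic above (my own one-liner; the figures are from the post):

Code:
# MB backed up / seconds = MB/s; MB/s x 3600 / 1024 = GB/hr
awk 'BEGIN { mb=22435; s=444.21; r=mb/s; printf "%.2f MB/s = %.2f GB/hr\n", r, r*3600/1024 }'
# prints: 50.51 MB/s = 177.56 GB/hr (within rounding of the quoted 50.50 / 177.53)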
 

Woah that's awesome, can't believe I've never come across this. Thanks for sharing!
 
When the site matched the hosting, and the hosting didn't suck, the conversions went nice and smooth, regardless of size.
This is where it's nice being on a cloud server setup that's billed hourly. Increase the resources to the max allowed, process the import/rebuild, and then decrease it back to normal. :p
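Purely to illustrate that flow (the provider CLI, server name and plan names here are all hypothetical):

Code:
# hypothetical provider CLI: scale up, do the heavy lifting, scale back down
cloudctl resize my-server --plan 16gb    # max out resources first
# ... run the import and cache/search rebuilds here ...
cloudctl resize my-server --plan 4gb     # then drop back to the normal plan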
 
You can install mydumper through yum, from the Axivo repository:
yum --enablerepo=axivo install mydumper
Right now it is available in the Redhat 5 repository; I will have it built for Redhat 6 when I get a bit of free time on hand.
When RM is released, I will be able to build an auto importer from Sqlite to MySQL, so the repodata is automatically displayed in categories, allowing everyone to see what the repository contains. Sure, you can list the contents now with yum, but it will look fancier in RM. :)
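As a very rough sketch of what such an importer has to do, assuming the repodata's primary.sqlite file, and glossing over the fact that SQLite's dump SQL is not fully MySQL-compatible (quoting and column types would need translating in real code):

Code:
# rough idea only: dump the SQLite repodata as SQL and feed it to MySQL
sqlite3 primary.sqlite .dump | mysql -u root -p repodata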

Dump Usage:
Code:
$ mydumper -?
Usage:
  mydumper [OPTION...] multi-threaded MySQL dumping
 
Help Options:
  -?, --help                  Show help options
 
Application Options:
  -B, --database              Database to dump
  -T, --tables-list           Comma delimited table list to dump (does not exclude regex option)
  -o, --outputdir             Directory to output files to, default ./export-*/
  -s, --statement-size        Attempted size of INSERT statement in bytes, default 1000000
  -r, --rows                  Try to split tables into chunks of this many rows
  -c, --compress              Compress output files
  -e, --build-empty-files     Build dump files even if no data available from table
  -x, --regex                 Regular expression for 'db.table' matching
  -i, --ignore-engines        Comma delimited list of storage engines to ignore
  -m, --no-schemas            Do not dump table schemas with the data
  -l, --long-query-guard      Set long query timer in seconds, default 60
  -k, --kill-long-queries     Kill long running queries (instead of aborting)
  -b, --binlogs               Get the binary logs as well as dump data
  -d, --binlog-outdir         Directory to output the binary logs to, default ./export/binlogs/
  -h, --host                  The host to connect to
  -u, --user                  Username with privileges to run the dump
  -p, --password              User password
  -P, --port                  TCP/IP port to connect to
  -S, --socket                UNIX domain socket file to use for connection
  -t, --threads               Number of threads to use, default 4
  -C, --compress-protocol     Use compression on the MySQL connection
  -V, --version               Show the program version and exit
  -v, --verbose               Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2

Restore Usage:
Code:
$ myloader -?
Usage:
  myloader [OPTION...] multi-threaded MySQL loader
 
Help Options:
  -?, --help                        Show help options
 
Application Options:
  -d, --directory                   Directory of the dump to import
  -q, --queries-per-transaction     Number of queries per transaction, default 1000
  -o, --overwrite-tables            Drop tables if they already exist
  -B, --database                    An alternative database to restore into
  -e, --enable-binlog               Enable binary logging of the restore data
  -h, --host                        The host to connect to
  -u, --user                        Username with privileges to run the dump
  -p, --password                    User password
  -P, --port                        TCP/IP port to connect to
  -S, --socket                      UNIX domain socket file to use for connection
  -t, --threads                     Number of threads to use, default 4
  -C, --compress-protocol           Use compression on the MySQL connection
  -V, --version                     Show the program version and exit
  -v, --verbose                     Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2

This is how I back up the database:
$ mydumper -c -t 8 -v 3 -B axivo -u root -p somepassword
** Message: Connected to a MySQL server
** Message: Started dump at: 2012-05-05 00:49:31

** Message: Thread 1 dumping data for `axivo`.`xf_addon`
** Message: Thread 3 dumping data for `axivo`.`xf_admin`
** Message: Thread 4 dumping data for `axivo`.`xf_admin_navigation`
** Message: Thread 5 dumping data for `axivo`.`xf_admin_permission`
** Message: Thread 6 dumping data for `axivo`.`xf_admin_permission_entry`
** Message: Thread 7 dumping data for `axivo`.`xf_admin_template`
** Message: Thread 8 dumping data for `axivo`.`xf_admin_template_include`

...

** Message: Thread 8 dumping data for `axivo`.`xf_warning`
** Message: Thread 8 dumping data for `axivo`.`xf_warning_action`
** Message: Thread 3 dumping data for `axivo`.`xf_warning_action_trigger`
** Message: Thread 5 dumping data for `axivo`.`xf_warning_definition`
** Message: Thread 2 dumping schema for `axivo`.`xf_addon`
** Message: Thread 4 dumping schema for `axivo`.`xf_admin`
** Message: Thread 2 dumping schema for `axivo`.`xf_admin_log`
** Message: Thread 2 dumping schema for `axivo`.`xf_admin_navigation`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_permission`
** Message: Thread 4 dumping schema for `axivo`.`xf_admin_permission_entry`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_search_type`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_template`
** Message: Thread 8 dumping schema for `axivo`.`xf_admin_template_compiled`
** Message: Thread 3 dumping schema for `axivo`.`xf_admin_template_include`

...

** Message: Thread 8 dumping schema for `axivo`.`xf_warning`
** Message: Thread 7 dumping schema for `axivo`.`xf_warning_action`
** Message: Thread 8 dumping schema for `axivo`.`xf_warning_action_trigger`
** Message: Thread 5 dumping schema for `axivo`.`xf_warning_definition`
** Message: Thread 3 shutting down
** Message: Thread 7 shutting down
** Message: Thread 2 shutting down
** Message: Thread 8 shutting down
** Message: Thread 6 shutting down
** Message: Thread 4 shutting down
** Message: Thread 5 shutting down
** Message: Thread 1 shutting down
** Message: Non-InnoDB dump complete, unlocking tables
** Message: Finished dump at: 2012-05-05 00:49:32
It will create an export-20120505-004931 directory where your backup is stored.
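For the restore side, which the post above doesn't show, something along these lines should work, built from the myloader options listed earlier (the database name, password and directory are just the ones from this example):

Code:
$ myloader -d export-20120505-004931 -o -t 8 -v 3 -B axivo -u root -p somepassword

The -o flag drops any existing tables before loading, per the help output above.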
 

Jack wears kilts:

 
Jack was really too young and inexperienced with forums to attempt converting a database himself.

 

Thanks, I'm more of a Debian man myself :p
 
I used to prefer CentOS to Debian, and then I gave Debian an honest shot, and I've never gone back. It's so much more secure and stable, and it just *feels* more robust. Everything about its nature and structure makes so much sense that I do everything administrative via command-line, whereas I used to use a lot of stuff like DirectAdmin back on CentOS.

And you have to love that it's community-driven. Debian updates to the next version when the community feels it's a new product that is whole and ready, whereas CentOS moves forward when the shareholders/board members say so. Debian represents what I think all Linux flavors *should* be, the whole open-sourced philosophy behind its kernel's conception. But that's the beauty, right? That anyone can choose what their Linux is and how it works. Also, Debian names its releases after Toy Story characters, which I feel adds a level of humor and humanism to it that I can't help but respect.

Also, aptitude is *so* much better than yum and rpms. I can't tell you how many times I've had a working system, tried to install something, and had it all go haywire. It's honestly easier to compile more obscure things on CentOS from source than it is sometimes to try to use either of the native package managers. Any time I've had problems with aptitude/apt-get, its built-in commands for fixing itself worked wonders.
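For anyone curious, the self-repair commands being alluded to are presumably the standard apt/dpkg recovery steps, something like:

Code:
# attempt to resolve broken/unmet dependencies
apt-get -f install
# finish configuring any packages left half-installed
dpkg --configure -a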

That all said, both CentOS and Debian are great operating systems for servers, resource-wise. I just greatly prefer Debian.

On-topic: I don't think this is going anywhere, but I don't think you all should be arguing. Rather than disagree or try to tell people that mods might close their thread, just be polite and constructive, or say nothing. If a mod is going to close the thread, let them, and let it be because of the thread itself, rather than everyone stooping to childish levels arguing. Leave staff decisions to staff; no one likes would-be stand-ins with no authority trying to tell others what to do. :-/
 
I know a site (not mine) that would be interested in moving toward XenForo and is currently using phpBB.

But with over 1 billion posts and 27 million members, that may be an issue.
 

At that number it "may be" an issue regardless of what software you're moving to :p Are you sure you're not exaggerating those numbers?
 
The site has 2,048,129,272 articles posted, with 26,013,251 registered users.


IF such a change took place, it would not be until late 2012 or 2013.
 
I doubt they'll convert to XenForo; they don't even use phpBB as such anymore. It's just a fallacy people like to claim, saying the biggest board on the web uses phpBB. I once read a long article about that community, written by its owners: they pretty much stripped out what was phpBB and built their own (everything) for it. They even trim away god knows how many posts each day from the database(s). They have numerous data centres running that site; otherwise they'd far exceed those posting figures.
 

Yeah, at that size you kind of HAVE to customize the software to a great extent. Most bulletin boards do support scaling across servers, but rarely are they fully optimized for that sort of setup.
 
They stripped it of a lot of code (A LOT is an understatement) and added a few optimized things along the way. You are correct.

But at its core it is still built on phpBB, and idle it alone uses more resources than a stripped-down XenForo.

There is "an idea" (thought) of switching. It is only an idea and maybe will not happen at all.

But that concept will not progress past "an idea" until the current legal issues (Internet Brands vs XenForo) are a thing of the past. Even then, the admin(s) would need to "play" with things and see what could be done.

It is only a thought at this time. Nothing more.
 