What is your development/staging/production workflow?

DeltaHF

What development/staging/production workflow do you use to maintain your site?

I am looking to update and modernize my workflow as I move my community to XF 2.2 and I'm curious to see what others are doing. Is there a recommended best practice? My site is far too large (400+ GB, including databases and attachments) for a "cowboy coding" workflow any more, and I am struggling to understand how I can maintain consistency between different environments.

This feels like a bit of a blind spot to me: it's something we all do or need to do, but I don't see people talking about it much. Local development environments get most of the attention, but managing a production workflow for a real site is more complex (hence the reason I am posting this thread here and not in the development forum).

I'm currently planning the following workflow for my site with WordPress and XF 2.2:
  • Set up a testing server which I can run my own VMs on (I will be using a Synology DS920+).
  • Get a working copy of my site on a VM (I use @eva2000's Centminmod).
  • Delete a significant amount of data from my forums on this VM. This allows me to keep a subset of real users and content while making the size of the data more manageable (rough sketch after this list).
  • Clone / snapshot this reduced-size "staging" VM as needed to test updates, plugins, custom development, etc.
  • Edit and manage the files remotely using VS Code Remote Extension via SSH.
  • Push changes from the staging VM to my live production server via Git.
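For that pruning step, roughly what I have in mind is below. This is only a sketch: it assumes XF 2's xf_thread / xf_post tables and their post_date columns, and raw SQL deletes bypass XenForo's counters and search index, so I'd rebuild caches from the admin CP afterwards. It must only ever run against the dev copy.

```php
<?php
// prune_dev_data.php - rough sketch only, run against the dev VM's copy.
// Assumes XF 2's xf_thread / xf_post tables and their post_date columns;
// raw deletes skip XenForo's counters and search index, so rebuild
// caches from the admin CP afterwards.
$pdo = new PDO('mysql:host=localhost;dbname=xf_staging', 'dev_user', 'secret');

$cutoff = strtotime('-2 years'); // keep roughly two years of content

// Remove posts belonging to old threads first, then the threads themselves.
$pdo->prepare(
    'DELETE p FROM xf_post AS p
     JOIN xf_thread AS t ON t.thread_id = p.thread_id
     WHERE t.post_date < ?'
)->execute([$cutoff]);

$pdo->prepare('DELETE FROM xf_thread WHERE post_date < ?')->execute([$cutoff]);

// Attachment rows and their files would need the same treatment separately.
```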
Any thoughts, advice, or feedback on my plans would be appreciated.
 
Yes, but doesn’t cloning your production environment just open up an entirely different can of worms?
  • How do you manage access control to the cloned site?
  • How do you ensure you’ve disabled email sending from the cloned VM?
  • How do you merge code changes back into your production environment? (Sure, you can clone and test upgrades easily, but what if you want to work on a new feature or test other changes to your site?)
  • How would you manage that with a large site or a dedicated server? For my site, Linode or DO would cost twice as much per month compared to my dedicated server, and paying for the cloned staging environments could get very expensive.
 
For cloning to a different web host, you'd have to script the process. I am privately working on a Centmin Mod to Centmin Mod LEMP stack full server data (nginx site data + MySQL data) transfer routine/script too ;) When that is eventually ready, transfer speed should be almost as fast as your network line speed/disk speed :) I've tested it so far at between 100-350MB/s. FYI, Linode VPSes also have 1-12Gbps outbound and 40Gbps inbound network speeds, so transfers are faster than with most VPS or dedicated providers on 1Gbps networks.
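To be clear, those scripts are private, but the general shape is simple - something like this bare-bones sketch, which is not my actual scripts: 'staging' is a placeholder SSH host alias, and a real run needs locking, credential handling and proper error checking.

```php
<?php
// transfer.php - bare-bones sketch of a LEMP-to-LEMP copy, not the real
// scripts. 'staging' is a placeholder SSH host alias.
$steps = [
    // push the nginx site data across (Centmin Mod keeps vhosts here)
    'rsync -a --delete /home/nginx/domains/ staging:/home/nginx/domains/',
    // stream a full MySQL dump straight into the remote server
    'mysqldump --all-databases | ssh staging mysql',
];

foreach ($steps as $step) {
    echo "==> $step\n";
    passthru($step, $exit);
    if ($exit !== 0) {
        fwrite(STDERR, "Step failed, aborting.\n");
        exit($exit);
    }
}
```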

Access control for whom? I generally do things myself, so the only person with access would be me.

The XenForo config file has an option to disable email sending:
  • $config['enableMail'] = false;
See https://xenforo.com/xf2-docs/manual/config/ (the same option exists in XF 1.5: https://xenforo.com/xf1-docs/manual/config/).
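On a cloned staging VM, that line would sit alongside the usual overrides in config.php - a minimal sketch (enableMail is the documented option above; the db block and 'debug' are standard XF 2 config keys, but verify against your version):

```php
<?php
// config.php on the cloned staging VM - minimal sketch.
$config['db']['host'] = 'localhost';
$config['db']['username'] = 'staging_user'; // placeholder credentials
$config['db']['password'] = '********';
$config['db']['dbname'] = 'xf_staging';

$config['enableMail'] = false; // never let a clone email real members
$config['debug'] = true;       // surface errors while testing
```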

How do you merge code changes back into your production environment?
Just repeat the documented changes/upgrade steps on production as you did on staging, with the help of version control (git etc.) if applicable. The operative word is documented changes. If you do this right, you can use any staging environment easily :)

How would you manage that with a large site or a dedicated server? For my site, Linode or DO would cost twice as much per month compared to my dedicated server, and paying for the cloned staging environments could get very expensive.

Yeah, that really depends on what's important to you, and costs are relative, so dedicated or VPS cloud hosting might better suit your needs. For major software upgrades, and sometimes server software too, I usually just clone the server for a test run first - the staging server isn't meant to be running forever. That's why I love hourly billed VPS and dedicated servers: I can spin up a clone for a few hours or days and still pay relatively less than committing to renting a full VPS/dedicated server for a month or more.

I have 150+ GB of data on a Linode VPS and it takes ~35-50 minutes to clone and be ready to use in an exactly identical state. So I can even do a separate clone for each test, which may last 1-24hrs, and only pay for 2-24hrs as opposed to renting for a full month, then just destroy the staging server after I am done testing. A Linode 32GB RAM, 8 core, 640GB VPS is only US$0.24/hr, so 2-24hrs = US$0.48 to US$5.76. Even extending that to 2 weeks of testing is only 14 x US$5.76 = US$80.64 all up. Much cheaper than spec'ing a server for a full month's rental just to accommodate 640GB of fast SSD disk storage.

That's how I will be doing my XF 1.5 to 2.1 upgrade, and then 2.1 to 2.2, via the cloning process. Though I'll also have a local VirtualBox guest server setup too, which is much easier now that I have recently upgraded my ISP connection from 100/5 cable to 100/40 fibre for the upload speed boost :D

For even larger jobs, I can spin things up on my local test server setup, which is quite old now - dual Xeon E5-2650v1: https://community.centminmod.com/threads/dual-intel-xeon-e5-2650v1-64gb-ram-asus-z9pe-d8-ws.10725/

My site is far too large (400+ GB, including databases and attachments) for a "cowboy coding" workflow any more, and I am struggling to understand how I can maintain consistency between different environments.
Delete a significant amount of data from my forums on this VM. This allows me to keep a subset of real users and content while making the size of the data more manageable.

Just these two are at odds with each other: you wouldn't be able to maintain a real staging environment without replicating the real data set. For example, timing how long an upgrade would run, the resources used by the upgrade process, or the performance of custom addons and their MySQL queries wouldn't be replicable on a reduced data set as opposed to the real-sized data set.
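Even a single heavy query illustrates it - a trivial sketch (credentials and the query itself are placeholders), where the numbers from the reduced copy won't predict the full-size one:

```php
<?php
// Run the same query against the reduced copy and the full copy: the
// timing from the reduced data set tells you little about production.
$pdo = new PDO('mysql:host=localhost;dbname=xf_staging', 'dev_user', 'secret');

$start = microtime(true);
$count = $pdo->query(
    'SELECT COUNT(*) FROM xf_post WHERE post_date > ' . strtotime('-30 days')
)->fetchColumn();
printf("%d posts, query took %.3fs\n", $count, microtime(true) - $start);
```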
 
Access control for whom? I generally do things myself, so the only person with access would be me.

For the public, as you're running a staging/development server on the open internet. So you go into each cloned VM and edit firewall rules and the forum config files every time you spin up another server? That seems like a lot of manual work and time.

I thought there would be a better way, but considering the lack of responses here and your own workflow, I guess everyone is just slogging around huge machine images of their production sites and maintaining scripts and checklists to control the different environments. That's what I thought might be happening, but I wasn't sure. 😅

Just these two are at odds with each other: you wouldn't be able to maintain a real staging environment without replicating the real data set. For example, timing how long an upgrade would run, the resources used by the upgrade process, or the performance of custom addons and their MySQL queries wouldn't be replicable on a reduced data set as opposed to the real-sized data set.

I should clarify the minimized sites/VMs would be for development, not staging. I like developing with real-world content and data, but cloning 400+ GB machines more than a few times isn't really practical.

This has been my biggest headache doing development on my local machine: a single development VM with a copy of my production data takes up a good chunk of my 2TB of internal storage.
 
For the public, as you're running a staging/development server on the open internet. So you go into each cloned VM and edit firewall rules and the forum config files every time you spin up another server? That seems like a lot of manual work and time.

No need if you're behind Cloudflare. You can use Cloudflare Access (and now Cloudflare for Teams) to secure staging sites/servers and restrict access. I use Cloudflare Access to lock down my XF staging servers and my WordPress and XenForo admin logins as well. Makes light work of securing sites https://www.cloudflare.com/teams/access/ :)

FYI, CF Access documentation https://developers.cloudflare.com/access/

I should clarify the minimized sites/VMs would be for development, not staging. I like developing with real-world content and data, but cloning 400+ GB machines more than a few times isn't really practical.

If it's just development and not staging, then the data set may not need to be identical. In that case, you could have a fresh XenForo dummy install populated with test data. One of my test XenForo 2.1 installs is just populated with RSS feed imports and is now at around 100K posts and growing.

But that's why I say cost is relative to the user. For a few of my larger clients, having a second or third staging server running 24/7 is non-negotiable on their end and factored into their costs of running the forum/site. It's not an optional component, but a part of total running costs.
This has been my biggest headache doing development on my local machine: a single development VM with a copy of my production data takes up a good chunk of my 2TB of internal storage.
Yeah, my local VirtualBox guest servers take up around 550GB of my disk space and counting.

I thought there would be a better way, but considering the lack of responses here and your own workflow, I guess everyone is just slogging around huge machine images of their production sites and maintaining scripts and checklists to control the different environments. That's what I thought might be happening, but I wasn't sure. 😅
Pretty sure the majority of folks are just doing development on their live production sites, winging it and hoping for the best.
 
Could have used this thread a few weeks ago! We actually just came up with what I think is a pretty decent setup for production/staging systems that integrates GitHub Actions. There are a few remaining issues that are hard to handle via code repositories:
  1. /data/, attachments, avatars, etc. These all live outside the repo, so they're hard to work into any flow. But it's not too hard to set up an rsync process between production and staging/development if needed.
  2. Database. Again, this is something very hard to set up a process for. It can be scripted to copy over from production, but depending on size this is a big issue. Before we migrated to XF, I had created an export process to gather a subset of forums and threads (and related users) so that the copy was a manageable size, but I haven't done that yet for XF (rough sketch of the idea below).
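If/when I build it for XF, it'll probably have roughly this shape. An untested sketch: mysqldump's --where flag filters rows per table, credentials are assumed to come from ~/.my.cnf, and dumpTable is a hypothetical helper, not part of any library.

```php
<?php
// export_subset.php - untested sketch of a subset export for XF.
$since = strtotime('-6 months');
$db    = 'xf_production';

// Hypothetical helper: append one row-filtered table dump to subset.sql.
function dumpTable(string $db, string $table, string $where): void
{
    passthru(sprintf(
        'mysqldump %s %s --where=%s >> subset.sql',
        escapeshellarg($db),
        escapeshellarg($table),
        escapeshellarg($where)
    ));
}

dumpTable($db, 'xf_thread', "post_date >= $since");
dumpTable($db, 'xf_post', "post_date >= $since");
// "Related users" can't be expressed with a plain --where; that needs a
// join (e.g. a SELECT ... INTO OUTFILE against xf_user) or a temp table.
```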
 
If you have the luxury / foresight of planning ahead:
  • all customizations contained within addons (even if it's one big custom one):
    • git managed
    • the addon contains migrations and whatnot
  • work locally, and a production deploy is just upgrading the addon(s) via script (sketched after this list)
  • optionally, do same upgrade process on a staging site (stock XF + addons)
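The deploy script itself can stay tiny - a minimal sketch, assuming releases are git tags and all schema changes live in the add-on's Setup class. The tag and MyVendor/MyAddon are placeholders, and check xf-addon:upgrade against your XF version.

```php
<?php
// deploy.php - minimal sketch of "deploy = upgrade the add-on via script".
// Assumes releases are git tags and schema changes live in the add-on's
// Setup class; the tag and MyVendor/MyAddon are placeholders.
passthru('git fetch --tags && git checkout v1.2.0', $rc);
if ($rc === 0) {
    passthru('php cmd.php xf-addon:upgrade MyVendor/MyAddon', $rc);
}
exit($rc);
```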
I'm only working on one mediumish XF site and this works very well. The deploy process just works, and I can do as much testing and iteration as I need before releasing and deploying it.

I've worked on some massive sites, and it's very rare that testing on a prod-like deployment is useful (read: worth the overhead). You just need to build and test for a site of that scale (optimize and review changes to the data model or queries ahead of time, etc.). In some cases, I've written scripts to mass-generate garbage/test data to simulate data size, but that does not simulate load.
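Not my actual scripts, but the shape is simple. In this toy version the column list is illustrative rather than xf_post's full schema, and raw inserts bypass XF's counters and search index, so you'd rebuild afterwards.

```php
<?php
// Toy generator for filler posts to approximate data *size* (not load).
// Column list is illustrative, not xf_post's full schema; raw inserts
// bypass XF's counters and search index - rebuild caches afterwards.
$pdo  = new PDO('mysql:host=localhost;dbname=xf_dev', 'dev_user', 'secret');
$stmt = $pdo->prepare(
    'INSERT INTO xf_post (thread_id, user_id, username, post_date, message)
     VALUES (?, ?, ?, ?, ?)'
);

for ($i = 0; $i < 100000; $i++) {
    $stmt->execute([1, 1, 'testuser', time(), str_repeat('lorem ipsum ', 50)]);
}
```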

That staging environment might be useful for integration-testing the various add-ons with your specific setup (forum structure, permissions, etc.). That's hard to simulate locally. But trying to download prod and obfuscate the data rarely ends well: data leaks, accidental communications, etc. I'd rather have a clean environment, or a generated one, instead.
 