My migration of hosting and server to halve costs and easily expand storage

Mouth

Well-known member
Thought I'd share my experience of migrating my hosting/server environment, in case anyone is considering the same or needs some inspiration.

Background: I've been with the same (great) hosting provider for ~8 years, upgrading to higher-spec server models twice in that time. My audience is almost wholly Australia-based, but the server hosting is/was West Coast USA due to cheaper pricing and a greater bandwidth allowance (certainly the case 8+ years ago).

Goal: Save money, whilst still maintaining/improving site speed/experience, and better utilise technology/infrastructure for growth and easy/quick storage (attachments) expansion. Running out of storage space was my imperative for reviewing options and making a change: https://xenforo.com/community/threads/what-to-do-with-growing-attachments-total-size.198695/

XF Site: 5.0GB database, 110GB storage, 2.8 million posts, 60k members, ~40k visitors/mth, ~1.8TB transfer/mth, Community forum active for 18 years.

Current Hosting: A single dedicated server; Intel e3 1270v3, 16GB memory, 2 x 480GB SSD storage, 20TB transfer. $115 /mth $USD
Services: NGINX webserver & php7.4-fpm assigned/tuned for max. 4GB memory, MySQL assigned 8GB memory, redis cache, elasticsearch assigned 1GB memory.
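As a quick sanity check, the allocations above can be tallied against the 16GB box. A minimal sketch; the service figures come from the post, while treating the remainder as headroom for redis, the OS, and page cache is my assumption:

```python
# Sanity-check the memory split described above on the 16GB dedicated server.
# Service figures are from the post; the headroom interpretation is an assumption.
total_gb = 16
allocations = {
    "nginx + php-fpm": 4,   # php7.4-fpm tuned for max 4GB
    "mysql": 8,
    "elasticsearch": 1,
}
used = sum(allocations.values())
headroom = total_gb - used   # left for redis, the OS, page cache, etc.
print(f"allocated {used}GB, {headroom}GB headroom")  # allocated 13GB, 3GB headroom
```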
( The server was also used to host some other static and wordpress websites, unrelated to primary XF site/community )

My current hosting company offers dedicated servers only; no VPSs, no attachable block storage, no object storage, no firewall/DNS etc. I use external DNS, CDN, and backup targets/services.

After exploring options and pricing, and discussing experiences with current customers, I chose https://www.linode.com/ for their Australia-based data centre ( 11 total, worldwide ), product offering, support, and pricing. Using coupon code marketplace100 during sign-up gave me $100 credit for 60 days to utilise and performance-test their services and ensure they suited. And with the remaining credit balance, I'll effectively get the first month free 😁

New Hosting: 3 x Shared/VPS servers ( 2 x 2GB memory 50GB storage 2TB transfer/mth, 1 x 4GB memory 80GB storage 4TB transfer/mth ), 140GB in attached block storage, edge firewall service, private VLAN for communication (free transfer) between the servers, and external transfer TB is pooled giving me 8TB/mth total. $60 /mth $USD ( almost half what I was paying 😄 )
Usage: 1 x 2GB server for ElasticSearch, 1 x 2GB server for web/php, 1 x 4GB server for MySQL and redis cache

( I also have a couple of further Linode servers/nodes for the static and wordpress websites unrelated to primary XF site/community. $10/mth total )

Pros:
  • Australia data centre, brings user latency to ~25ms instead of ~160ms. ( users can feel the site being slightly 'snappier' )
  • Attachable block storage, charged per GB, for quick and easy growth in attachments
  • Pooling of external TB transfer, so no limit from individual server
  • Edge firewall service, so you don't need a local server firewall consuming server resources/load
  • Private VLAN for free traffic between the servers
  • Cost saving of almost HALF of what I was previously paying
  • A terrific and very usable/friendly 'dashboard' for setup/configuration/management of your services
Cons:
  • Attachable block storage can only be connected to one server at a time
  • Because of the above, utilising the load balancer service with multiple web/php servers is complex/risky
I'm further likely to use their object storage for backup purposes, migrating from one of my external backup targets/services.

Despite paying ~half and moving from a single dedicated server to multiple shared/VPS servers, my ( unscientific ) performance and stress testing showed no degradation in site response and user experience. Continuing to utilise the external CDN service certainly helps with that. If I find that I need to increase the response/performance of the site, then in just a matter of minutes I can resize/grow the relevant server for just $5/$10 /mth additional.

If Linode improves attachable block storage so that it can be connected to multiple servers, I'll likely ( or could instead ) add the load balancer service and 2 or 3 1GB web/php servers.

How can XF help in future?
  • XF's object storage functionality feels risky and unsupported, so I didn't want to utilise and rely upon it. Had I felt differently, I'd have considered object storage for attachments instead of attachable block storage, and used the object storage for CDN purposes too.
  • Consider code_cache and image_cache etc. for multiple-web-server and load-balancer scenarios. This would make zero-downtime and easier upgrades (server or XF) possible. Load balancers and multiple web/XF servers are commonly available hosting options nowadays.
All up, I'm happy that running out of storage space on my current server prompted/motivated me to consider alternative options, realise lower server costs, and improve my use of technology/functionality. The migration of XF between servers was painless and relatively easy ( I experienced an issue, unrelated to XF, meaning I had to have a 2nd/later attempt at the production migration ). As for Linode, their support has been terrific and their services/platform and user experience are great; I highly recommend them.
 

Xon

Well-known member
Be aware that Linode's shared CPU plans can suffer massive "CPU steal" performance issues, depending on how overloaded the host is. With the dedicated CPU plans, support really only accepts 30%+ CPU steal as a reason to try to move your VM to a new host.

10%+ CPU steal is really noticeable, so the 30%+ threshold is actually quite painful.
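For anyone wanting to measure this themselves: on Linux guests, steal time is the 8th numeric field of the aggregate `cpu` line in /proc/stat. A minimal sketch (my own, not a Linode tool) that samples it twice and reports the steal percentage over the interval:

```python
# Estimate "CPU steal" by sampling the aggregate "cpu" line in /proc/stat
# twice. Field order: user nice system idle iowait irq softirq steal ...;
# 'steal' is time the hypervisor spent running other guests instead of this VM.
import time

def parse_cpu_line(line):
    """Return (steal_ticks, total_ticks) from a /proc/stat 'cpu' line."""
    fields = [int(x) for x in line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0
    return steal, sum(fields)

def steal_percent(sample_a, sample_b):
    """Percentage of CPU time stolen between two 'cpu' line samples."""
    steal_a, total_a = parse_cpu_line(sample_a)
    steal_b, total_b = parse_cpu_line(sample_b)
    delta_total = total_b - total_a
    if delta_total <= 0:
        return 0.0
    return 100.0 * (steal_b - steal_a) / delta_total

if __name__ == "__main__":
    with open("/proc/stat") as f:
        a = f.readline()
    time.sleep(1)
    with open("/proc/stat") as f:
        b = f.readline()
    print(f"steal: {steal_percent(a, b):.1f}%")
```

Sustained readings near the 10% mark are where, per the post above, you'd start to feel it.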
 

Chris D

XenForo developer
Staff member
XF's object storage functionality feels risky and unsupported, so I didn't want to utilise and rely upon it. Had I felt different, I'd consider object storage for attachments instead of attachable block storage, and use the object storage for CDN purposes too.
It is neither. The entire XF file system is abstracted, meaning that the same code that powers local file system interactions also powers remote object storage.

The resource in the resource manager is merely a way to redistribute the file system adapter and the massive Amazon AWS SDK (which is bigger than XF itself, last time I checked, so we don't want to include it in our primary download). This doesn't extend or change anything in XF; it just provides those files.

It is supported.

Glad to hear that on the whole your experience was positive and worth the effort. Well done.

EDIT: I should note there are improvements we are going to make in ... the future :) ... which may at least make it feel more integrated, but for now it should neither be a risk nor be considered unsupported.
 

Mouth

Well-known member
Be aware that linode's shared CPU plans can suffer massive "CPU steal" performance issues, depending on how overloaded the host is.
Thanks, I'd heard of and was thus aware of that. I couldn't find any recent examples/experiences of this from existing customers, though; is yours first-hand recent experience? I'm prepared to resize up my servers if needed.
 

Mouth

Well-known member
It is supported.
Good to know. As a guide written by yourself, that wasn't clear to me; hence my describing it as feeling risky/unsupported. I'd suggest stating within the guide that official support tickets are accepted. I'd also suggest offering it as a download in the Customer Area like ES or the RM, which would help make it appear more officially supported. I read of two people/sites having drastic/poor outcomes when using the object storage download/guide, but perhaps that wasn't caused by XF.
 

Alpha1

Well-known member
I had serious problems with CPU steal between DO instances. It drove me crazy. I solved it by using one instance with volumes for the DB and files.
 

Xon

Well-known member
Thanks, had heard and thus aware of that. I couldn't find any recent examples/experience of this though from existing customers, is yours first-hand recent experience? I'm prepared to resize up my servers if needed.
https://spacebattles.com used to be hosted on Linode, in one of their US data centres, with 3 webservers and two database servers. SpaceBattles is a stupidly busy site, but the real issue was the number of VMs involved.

The site database was about 70GB-90GB when it was hosted on Linode, which meant the minimum VM size was much larger and the dedicated CPU plan was a fairly large increase overall.

One of the major advantages of Linode or Digital Ocean is they scale down much lower than dedicated hardware traditionally does without the insane bandwidth pricing of AWS/Azure/Google Cloud.

Currently SpaceBattles is using https://webnx.com/ who offer very competitive hardware pricing, with fairly significant discounts for yearly upfront payments. But there is functionally a minimum price floor, which is likely a fair bit higher than your budget.
 

Kevin

Well-known member
It is supported.
Chris, good to see you chime in on this one. :) I've recently embraced Amazon's S3 using that guide* but, based on it being listed under your personal account instead of the XF account and some of the posts in that thread, I wasn't sure either whether it was considered to be an officially XF-supported add-on going forward. Reading that the integration will be supported and get some tweaks at some point is reassuring.


(* = that guide plus TickTackk's 'cache control' add-on to make using AWS CloudFront CDN actually worthwhile)
 

Chris D

XenForo developer
Staff member
I’m not sure whether I explained it clearly or whether it’s not quite understood, but it’s not really an integration. It’s just a guide on how to set it up, plus the Amazon SDK to interface with any S3-compatible API. There’s no code in it. Indeed, if someone preferred, they could just include the Amazon SDK themselves and not really need the download, although it does include the Flysystem adapter that makes the necessary API calls.

My point is that the entire abstracted file system is natively part of XF so indeed someone could create or use alternative adapters or APIs or services to store their files be that local, or S3 compatible or something entirely different. Heck you could use FTP if you wanted to… but that would be dumb for obvious reasons.

Anyway just wanted to clear that up in case it wasn’t totally clear.

Regardless, the resource is now owned by XF to ensure it is seen as official.
 

Sim

Well-known member
I've been on Linode for years and haven't noticed much in the way of CPU steal - but then, I'm also not running really high volume servers.

My ZooChat server runs at much higher CPU usage than PropertyChat, even though traffic is around 50% lower. I put the CPU usage down to the additional load introduced by having over 184 forum nodes, 2,704 thread prefixes, 3,128 gallery categories and a bunch of customisations that are not really optimised (and still running on XF1.5). I'm very keen to see how the site performs once I've migrated to XF2.x

Here are some charts from the past week - these are both 8GB Linodes running nginx/php-fpm/mysql:

Singapore:

[ CPU usage chart ]

Newark:

[ CPU usage chart ]
 

nicodak

Well-known member
I understood absolutely nothing of everything that was said in this discussion.

But it suddenly makes me understand the appeal of XF Cloud. I don't have a forum that currently requires a huge server (I guess it could run on an Amstrad CPC 464), but if one day I am successful I will definitely go with XF's cloud solution.
 

eva2000

Well-known member
Australia data centre, brings user latency to ~25ms instead of ~160ms. ( users can feel the site being slightly 'snappier' )
oh didn't know you had an Aussie based site!

Thanks for sharing - always great to read how other folks are setting up their operations :D
 

motowebmaster

Well-known member
Went on a similar journey a year ago.

Regarding firewalls, I'm taking advantage of my cloud provider's firewall feature but still run CSF on my server. I think it is wonderful that a basic port-filtering firewall feature is becoming the norm among cloud hosting providers, but CSF still picks up a fair amount. My rules are set to temporarily block IPs based on what they were doing, and permanently block them if they get out of hand; on average, 40 baddies are temporarily blocked per day.
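The temp-then-permanent policy described above can be sketched roughly as follows. This is an illustrative model of the behaviour, not CSF's actual implementation; the thresholds and names (`BlockList`, `record_offence`) are hypothetical:

```python
# Illustrative sketch of a temp-then-permanent IP blocking policy, as
# described above. Thresholds and names are hypothetical, not CSF settings.
import time

class BlockList:
    def __init__(self, temp_threshold=5, perm_threshold=3, temp_seconds=3600):
        self.temp_threshold = temp_threshold   # offences before a temp block
        self.perm_threshold = perm_threshold   # temp blocks before a perm block
        self.temp_seconds = temp_seconds
        self.offences = {}      # ip -> offence count since last temp block
        self.temp_blocks = {}   # ip -> (expiry time, times temp-blocked)
        self.perm_blocks = set()

    def record_offence(self, ip, now=None):
        """Record one offence; return 'ok', 'temp', or 'perm'."""
        now = now if now is not None else time.time()
        if ip in self.perm_blocks:
            return "perm"
        self.offences[ip] = self.offences.get(ip, 0) + 1
        if self.offences[ip] >= self.temp_threshold:
            self.offences[ip] = 0
            _, count = self.temp_blocks.get(ip, (0, 0))
            count += 1
            if count >= self.perm_threshold:
                self.perm_blocks.add(ip)       # escalate: out of hand
                return "perm"
            self.temp_blocks[ip] = (now + self.temp_seconds, count)
            return "temp"
        return "ok"
```

In a real deployment the decision would feed iptables/nftables rules rather than an in-memory set, but the escalation logic is the same idea.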

It's interesting to see what can be done in today's modern cloud options.
 

m1ne

Well-known member
Good story, I enjoy reading things like this.

Actually, I did something similar myself recently as well for my employer.
Prior to me joining the company, my boss was paying over $100/month for "business hosting" which performed really badly courtesy of cPanel and Apache.
Switched him over to a $24/mo Vultr High Frequency server with DirectAdmin + OpenLiteSpeed. Result: blazing fast hosting for 58 WordPress websites whilst saving $80+ monthly. Maybe that's why I got a raise last month 🤔
 