What to do with growing attachments total size?

Mouth

Well-known member
My site size has grown to 109Gb, with data/attachments representing 26Gb and internal_data/attachments representing 81Gb.
Available server storage space is thus becoming an issue, particularly for generating/holding backups (all stored offsite).

How have others dealt with this? Is DO Spaces or Amazon S3, via the Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.x guide, the best way to move the files elsewhere and lower the server's space usage? I don't think adding an extra HDD to the server is the best idea. Other options?
 
If image quality is not that important, use one of the add-ons that optimize attachments.

Object storage is a great alternative, especially if you use a CDN. Cloudflare just announced their own S3-compatible service today. Should make things interesting.

Hopefully XenForo will add WebP support in the future and someone will release an add-on to convert all image attachments to WebP. That would save a lot of storage space for a lot of forum owners.
 
Here's what I did:

optimized locally with the TH plugin
moved everything to S3
enabled S3
put CloudFront on top
put a CNAME to the CloudFront URL
pointed Cloudflare to source from the CloudFront URL

reduced server cost from $150 to $50/month (HDD space was the issue, not RAM/CPU)
backups go to an S3 bucket in another region/AZ as well
added $7 to S3 costs

Net: faster backups and 93 bucks in savings per month
 
I suggest going with a VPS which provides more storage.

[screenshot: KnownHost VPS plan comparison]

This is offered by KnownHost. For example, if you choose their Premium VPS Server you get 300 GB storage, which should be enough for your forum for many more years.
 
My site size has grown to 109Gb, with data/attachments representing 26Gb and internal_data/attachments representing 81Gb.
Available server storage space is thus becoming an issue, particularly for generating/holding backups (all stored offsite).

How have others dealt with this? Is DO Spaces or Amazon S3, via the Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.x guide, the best way to move the files elsewhere and lower the server's space usage? I don't think adding an extra HDD to the server is the best idea. Other options?
In my particular setup (I own multiple physical servers that I colocate), attachments are stored via Gluster. It's a shared filesystem across multiple physical servers where you have multiple copies of each file for redundancy, but it's also sharded, meaning you don't have to keep every file on every physical server (and nothing goes down for users if a server goes down for maintenance [or something unplanned]). Although that's not going to help if you are bound to a single physical server.

For backups, what I've found works really well for stuff that doesn't change often (images, attachments, avatars, etc.) is to do a diff backup, where you only back up changes rather than everything. It can still get pricey if you are using a third-party offsite backup service, so to cut costs (and not have to rely on a third party), I set up one of these in my home for offsite backups: Synology NAS DiskStation (it's small, about a 6" cube that's tucked away in my house). It has a ton of different backup services built into it (I use rsync because it works really well for sorting out what's changed [deletions, changes and additions]), so after you do the initial backup/sync, subsequent ones are much faster/smaller.

There's also a new service from Cloudflare (literally announced 2 hours ago) that could be a good replacement for AWS S3: https://blog.cloudflare.com/introducing-r2-object-storage/
 
I set up one of these in my home for offsite backups: Synology NAS DiskStation (it's small, about a 6" cube that's tucked away in my house). It has a ton of different backup services built into it (I use rsync because it works really well for sorting out what's changed [deletions, changes and additions]), so after you do the initial backup/sync, subsequent ones are much faster/smaller.
Synology NAS all the way. I use it as a backup service too.
 
I host my sites on Linode and so was able to take advantage of their Block Storage product - which allows you to attach a volume of up to 10TB to your VPS and costs $0.10/GB per month.

I have just my media gallery images stored on that volume; it would be easy to store attachments there too.

I currently have just over 180GB of media gallery images - I recently increased my block storage allocation to 240GB to give myself a bit of growth room - costing me $24 per month which is massively cheaper than moving to a larger server with resources I don't really need.
 
Here's what I did:

optimized locally with the TH plugin
moved everything to S3
enabled S3
put CloudFront on top
put a CNAME to the CloudFront URL
pointed Cloudflare to source from the CloudFront URL

reduced server cost from $150 to $50/month (HDD space was the issue, not RAM/CPU)
backups go to an S3 bucket in another region/AZ as well
added $7 to S3 costs

Net: faster backups and 93 bucks in savings per month
Is there a benefit of using Cloudflare in front of CloudFront? 🤔
 
My image storage issue is similar, just not as large as yours:
  1. Use object storage with my hosting provider, mounted on the server, so that I'm not having to subscribe to a virtual server with a lot of storage space.
  2. The object storage is also used for one set of backups, which is backed up using another feature from the same hosting provider.
  3. The 2nd backup cron utilizes s3cmd to copy the backups to an AWS S3 bucket daily.
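That daily s3cmd copy might look something like this crontab entry (the schedule, local path, and bucket name are all placeholders):

```shell
# m  h  dom mon dow  command
30   3  *   *   *    s3cmd sync --delete-removed /var/backups/forum/ s3://my-forum-backups/
```

`--delete-removed` keeps the bucket mirroring the local backup set; omit it if you'd rather the bucket retain rotated-out backups.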
I've thought about using the XF Feature that utilizes S3 or DO, but haven't pulled the trigger yet. Had a negative experience with AWS last week and am still troubled by it. Intrigued by the Cloudflare R2 option.
 
Is there a benefit of using Cloudflare in front of CloudFront? 🤔
I'm not sure there's a benefit, but it doesn't hurt. My CloudFront costs are pennies.

But I had CloudFront first, so the natural progression was to put Cloudflare on top of what I had.

Cloudflare will source from the closest CloudFront location if it's not cached, so maybe there's a small benefit there. 🤷 Not worth changing for 7 cents a month.
 
I suggest going with a VPS which provides more storage.
You don't solve storage issues by upgrading your entire VPS. It's a completely cost-inefficient way to go about it.

Amazon S3, DO Spaces, or a block storage addition is the easiest and most cost-effective method. I use DO block storage myself. I don't know how it's handled with Spaces, but I can easily make snapshots of my block storage.
 
so if we are running short of diskspace we add further SSDs.
I'm not comfortable paying to add expensive local HDDs that I'm not fully utilising (I already have 2 x SSDs with over 600Gb), and I don't feel it's the best way forward given the technology options available.

S3 + Cloudflare works really well for me.
Utilising XF's native remote filesystem functionality linked in the OP?

If image quality is not that important, use one of the add-ons that optimize attachments.
Hopefully XenForo will add WebP support in the future and someone will release an add-on to convert all image attachments to WebP. That would save a lot of storage space for a lot of forum owners.
Given the age and maturity of XF, and the sites that have been around for many years (using forum systems prior to XF and accumulating attachments all along), XF should natively support image optimisation functionality. This should be a core feature, not something left to 3rd parties to accommodate. As sites grow and images get larger, more and more owners will confront this issue.

moved everything to s3
[..]
reduced server cost from $150 to $50/month (HDD space was the issue, not RAM/CPU)
Utilising XF's native remote filesystem functionality linked in the OP?
What power/size system did you have before the move, and now? What guide did you use to know what size system you could downgrade to without affecting user experience and performance?

It can still get pricey if you are using a third-party offsite backup service, so to cut costs (and not have to rely on a third party), I setup one of these in my home for offsite backups: Synology NAS DiskStation (it's small, about a 6" cube that's tucked away in my house). It has a ton of different backup services built into it (I use rsync because it works really well for sorting out what's changed [deletions, changes and additions]), so after you do the initial backup/sync, following ones are much faster/smaller.
Got a DS920+ myself at home, with lots of available space. Hadn't thought of using it as a backup target (currently using 2 separate off-site backup targets). Home internet is only max 100Mb down though, so a big sync would probably flood it (guess I could schedule it for overnight, un-utilised hours). Thanks for the idea.

I host my sites on Linode and so was able to take advantage of their Block Storage product
What compute device(s) are you using? I'm wondering what size system(s) would be required to support my services without compromising user experience and performance. Linode compute and block storage look more expensive than my current system, to match performance/size.

Use object storage with my hosting provider
Who are you using? Dynamic/growable local object/block storage from a hosting provider looks the way to go, but again seems quite a bit more expensive.



Site size + DB is ~120G. Add another 120G for dev site = 240G. Add ~250G for best practice rotational backups (file + sql) means at a minimum I'm needing ~550G local storage (backups are also copied and stored elsewhere, for disaster/redundancy). Even if moving all backups offsite, I'm still needing local temp storage to generate the backups, so still ~380G minimum local.
I could also move the DEV site off to a small/cheap VPS, but one with enough storage (minimum ~175G) isn't going to be cheap, and it adds friction when testing new upgrades (XF and add-ons) on the DEV site first, because it will be on a separate server/compute.
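The arithmetic behind those minimums, roughly (the site, dev, and backup figures are taken from the post; the backup scratch figure is my assumption to make the ~380G number work out, and growth headroom is how ~490G becomes ~550G):

```shell
# Rough local-storage budget in GB, using the figures above.
site=120      # live site + DB
dev=120       # dev copy
backups=250   # best-practice rotational backups held locally
scratch=140   # assumed temp space to generate backups before shipping offsite
echo "everything local:  $((site + dev + backups))G"
echo "backups offsite:   $((site + dev + scratch))G"
```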
 
If you are worried about adding SSDs and not utilising them, then you need to be looking at S3 or other easily expandable storage solution.

Really, there is no need to store files locally on the Web server if your forum has enough content on it that requires you to think about upgrading because you are running out of said storage.

There are no guides on how to spec your VPS or container. Each forum is different and ideally you grow the VPS/container as the forum traffic demands it.
It is 2021 and upgrading a VPS just to have a larger storage space is idiotic.

Look at your current utilisation and work to that. XenForo is well made and does not require many resources, so you can get away with a small server and still serve your users well.


Personally I use B2 by Backblaze as it is cheap and we already use them. Helps that our Web server is located next door so our latency is 1-2ms. Does not matter much but it helps.
You can sync the bucket to an offsite backup using any of the S3-compatible tools on the market.

As for the dev server: again, you will waste money buying a VPS sized for storage space. You can run the forum locally if you are worried about cost. Hyper-V on Windows is free, and there are many other VM solutions for Linux and macOS.

TLDR; S3 is pennies compared to VPS upgrades when you are talking about a few hundred gigs of data...
 
Utilising XF's native remote filesystem functionality linked in the OP?
What power/size system did you have before the move, and now? What guide did you use to know what size system you could downgrade to without affecting user experience and performance?
Yes, using that add-on to leverage the S3 adapter via the config file.

I had a dual Xeon 12-core (dual hexacore) machine with 12GB RAM on a dedicated box. It was a few years old, so not cutting-edge hardware.
I downgraded to a 2-core/4GB VPS on a high-compute core set.

I picked the smallest option for my hard-drive needs. Being a VPS, I can always upgrade if I need more RAM/CPU by throwing money at it. That hasn't been necessary yet. Performance is as good as or better than before; loads are low.
 
XF should be natively supporting image optimisation functionality.
I do want this, at least as an opt-in feature. Conversion to WebP is very server-intensive; it might not work on forums running on low-end servers without a very carefully programmed cron schedule. But it is something that would benefit literally everyone, save real money for owners, and improve page load time for visitors. I am at least hoping for existing or new add-ons to bring support for this.
 
What compute device(s) are you using? I'm wondering what size system(s) would be required to support my services, without losing out user experience and performance. Linode compute and block storage looks more expensive than my current system, to match performance/size.

I use a Shared CPU plan - which is their most cost effective. I've not found CPU stealing to be much of a problem on the shared CPU, especially given I've got 4 of them available.

I'm on an 8GB Linode, so I have enough space to allocate more RAM to innodb_buffer_pool_size than I have InnoDB data (4G) - the Linode costs US$40 pm + $10 pm for backups.

I have 160GB of storage available on that plan - of which I allocate only half so that I can restore a disk from backup if required without removing an existing disk.

So of the 80GB storage allocated, I'm using around 50GB for the site and DB minus attachments.

I then have a 240GB block storage volume assigned, which is where my attachments folder is mounted - currently utilising 193GB. This costs $24 pm.

So total space used on disk is 243GB.

Cost is:
  • 8GB shared CPU Linode: $40pm
  • backups for 8GB Linode: $10pm
  • 240GB block storage: $24pm
  • Total: US$74pm
I don't run dev on the same server - I do my dev work on a locally hosted Hyper-V server built from the same build script I use to build my production servers. If I need a live test server that other people can access - I would fire up a cheap low powered VPS for testing purposes so it doesn't consume resources on my prod server.

For me, the most important performance enhancement you can make (other than running on SSD instead of HDD drives) is to ensure you can cache all of your innodb data in your buffer pool - so try and make your innodb_buffer_pool_size larger than your innodb data size. Make sure you allocate sufficient RAM for other caching and resources the server may require though!
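That sizing rule can be sanity-checked with a couple of lines. The 4G data figure comes from the post above, the buffer pool value is a hypothetical setting, and in practice the data size would come from `information_schema` rather than being hard-coded:

```shell
# Does the configured buffer pool cover the InnoDB data set?
# A real check would read the data size from MySQL, e.g.:
#   SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024) AS gb
#   FROM information_schema.tables WHERE engine = 'InnoDB';
innodb_data_gb=4     # from the post above
buffer_pool_gb=5     # hypothetical innodb_buffer_pool_size, in GB
if [ "$buffer_pool_gb" -ge "$innodb_data_gb" ]; then
  echo "buffer pool covers the InnoDB data set"
else
  echo "consider raising innodb_buffer_pool_size"
fi
```

Remember to leave RAM for the OS, PHP, and other caches; the buffer pool shouldn't consume everything.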

RAM is the most critical component for production servers IMO.

This server has some pretty hinky customisations to my XenForo which use a lot of CPU resources - I'm still running a very heavily modified version of XF 1.5. It will be interesting to see what kind of optimisations I can get when I rebuild those customisations for XF 2.x.

So I'm using a lot more CPU than I should be on this server:

[screenshot: CPU usage graph for this server, showing high utilisation]


... when you compare it to my other, larger site running an identical setup as listed above (minus the additional block storage - it doesn't have a large photo gallery), you'll see how high the CPU usage on this site is compared to the other one, which barely idles.

[screenshot: CPU usage graph for the other site, barely idling]

If it weren't for the memory requirements for the DB, I would be more than happy to run this other site on a 2 shared CPU Linode or even a 1 shared CPU Linode - it simply doesn't use much CPU resource.

You certainly don't need a dedicated CPU VPS in my opinion - shared is good enough unless you have a dodgy VPS provider where you get a lot of CPU steal.

When I need to upgrade I'm more inclined to go for a high memory Linode - 24GB RAM, 2 (dedicated) CPUs, 20GB of storage + a bunch of additional block storage added, $60 pm for the VPS and probably another $30 pm for block storage and I get 3x as much RAM to play with.

Or I set up a dedicated DB server on the high memory Linode and then run a bunch of smaller compute-only Linodes for my various sites which all connect to that server. XenForo doesn't require much CPU power (my dodgy XF1.5 server above notwithstanding!), so upgrading to more powerful VPS machine just to get more RAM or storage doesn't really make much sense from a value perspective.
 