Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

Tested this out with DigitalOcean and it works great! The only point of feedback would be to add support for DigitalOcean's Spaces CDN endpoint. CDN comes for free when you use DigitalOcean Spaces, but when you try to use it with this system it gives a 403 error.

I had this issue as well but I think it just takes a while for the CDN to propagate all the file permissions or something. I used the normal endpoint and then after a few days I tried switching it to the CDN endpoint (a custom domain) again and it worked.
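In case it helps anyone, here's roughly what I ended up with in src/config.php. The key, secret, region and Space name are placeholders, so adjust them for your own setup and double-check against the config example in the guide itself. The trick was leaving 'endpoint' pointed at the normal Spaces origin and only using the CDN hostname in externalDataUrl:

Code:
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'YOUR_SPACES_KEY',
         'secret' => 'YOUR_SPACES_SECRET'
      ],
      'region' => 'nyc3',
      'version' => 'latest',
      // keep the origin endpoint here, not the CDN hostname
      'endpoint' => 'https://nyc3.digitaloceanspaces.com'
   ]);
};

$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'your-space-name', 'data');
};

// only the public URL points at the CDN (or a custom CDN domain once it has propagated)
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://your-space-name.nyc3.cdn.digitaloceanspaces.com/data/' . $externalPath;
};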

Make sure you used the public ACL option when you used S3cmd to copy your files.
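For example, something along these lines with a placeholder bucket name — either upload with a public-read ACL in the first place, or fix up objects that were already copied without it:

Code:
# upload/sync with a public-read ACL from the start
s3cmd sync --acl-public data/ s3://your-bucket/data/

# or make already-copied objects public after the fact
s3cmd setacl --acl-public --recursive s3://your-bucket/data/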
 
AWS n00b...

Everything seems to be working as far as I can see from the XF side, but I'm getting a 403 on the write to S3. I'm struggling with applying the policy to the bucket. I've made the bucket and the IAM user, but I can't figure out the correct way to attach it vs. the public/private bucket settings.

Is there a quick reference guide that could help? I ran through the instructions 4 times and it's still broken, so I'm clearly not getting it... I realize this is a bit outside the supported realm here...
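For reference, the policy I've been trying to give the IAM user is along these lines (bucket name changed), I'm just not sure where it actually needs to be attached:

Code:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}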
 
Hello @Chris D,

Can you help with the Backblaze configuration?

 
So can anyone confirm if there is any alternative to keeping the bucket public? This query seems to remain unresolved at the moment.
 
Thanks. Another question I had about this integration: is it possible to restrict S3 to certain subfolders like internal_data/attachments and data/attachments? With the data folder placed on S3, user profile images are fetched directly through the S3 URL, which basically leaks the location of the bucket. Attachments, on the other hand, are fetched using the XenForo URL. This means that user profile images totally bypass any caching integration like Cloudflare, and they would break if there were any outage at Amazon.
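The only workaround I can think of is offloading just internal_data and leaving data on the local server, since attachments are always served through the XenForo URL anyway. Something like this, reusing the $s3 closure from the guide (bucket name is a placeholder):

Code:
// offload only attachments (internal_data); avatars and the rest of data/ stay local
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'your-bucket-name', 'internal_data');
};
// no fsAdapters['data'] or externalDataUrl override, so data/ keeps using the local filesystem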
 
I'm using Amazon S3 for file storage, but is there any way to move all those files back from S3 to my server? Thank you!
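My best guess (untested) is to sync the bucket back down with s3cmd and then remove the fsAdapters / externalDataUrl lines from src/config.php, something like:

Code:
# pull everything back down into the XenForo root (bucket name and paths are examples)
s3cmd sync s3://your-bucket/data/ data/
s3cmd sync s3://your-bucket/internal_data/ internal_data/
# then drop the fsAdapters / externalDataUrl entries from src/config.php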
 
AWS n00b...

Everything seems to be working as far as I can see from the XF side, but I'm getting a 403 on the write to S3. I'm struggling with applying the policy to the bucket. I've made the bucket and the IAM user, but I can't figure out the correct way to attach it vs. the public/private bucket settings.

Is there a quick reference guide that could help? I ran through the instructions 4 times and it's still broken, so I'm clearly not getting it... I realize this is a bit outside the supported realm here...
This permissions setting worked for me; it changes the access status to "Objects can be public" (the bucket is not public, but anyone with appropriate permissions can grant public access to objects).

[Attachment: screenshot of the S3 bucket permissions settings]
Thank you! This was my issue as well.

I got everything offloaded now. 16 GB! Excited to have 5-minute backups again instead of 10 hours.

s3cmd was fun to set up on CentOS. I had to get Python, pip, and a few packages like dateutil, and finally it worked.
 
I deleted it... it looked old, no issues. Probably a relic from a previous install.


Now, on to serving the images... they are coming through uncompressed and causing some headaches in the speed test reports.

Has anyone put CloudFront in front of their bucket to enable on-demand compression?
 
Unfortunately I don't know. I'm not ever so familiar with S3 generally, and I only just learnt that S3 Glacier existed when you posted about it 🙂
 
I used the normal endpoint and then after a few days I tried switching it to the CDN endpoint (a custom domain) again and it worked.

Whoa. How many days are we talking about here?! I have only been able to use the CDN endpoint for the data folder. Nothing else works here with 'endpoint' => 'https://xxxx.digitaloceanspaces.com'!
 
Hey @Chris D ,

It's my first time messing with S3 in any fashion and I was wondering if the configuration for S3 would be identical to S3 Glacier.
Even if it's the same config, using Glacier is probably a mistake. Any hit to your page requesting an image will take a LONG time (hours maybe?) to deliver the image, and it costs more to pull data back out.
 
Right. Last I checked, Glacier was for archival storage: data that you do not need on demand. Any data stored on Glacier needs to be requested before it can be fetched, which could take hours.
 
I read it like this initially too guys but I don’t think that’s what @ManagerJosh means.

I think he is saying that he already uses S3 Glacier for the proper archival storage purposes and may be somewhat familiar with it and its configuration and was inquiring as to whether standard S3 for the purpose of serving attachments was similar.
 
I read it like this initially too guys but I don’t think that’s what @ManagerJosh means.

I think he is saying that he already uses S3 Glacier for the proper archival storage purposes and may be somewhat familiar with it and its configuration and was inquiring as to whether standard S3 for the purpose of serving attachments was similar.

What @Chris D wrote. I know first-hand that S3 Glacier is more for archival stuff, but the website I'm running, SimsWorkshop.net, has around 4k+ pieces of custom content for The Sims 4. I'm starting the next phase/step and planning where maybe the older stuff can be "archived" rather than constantly kept available, but again, it's all about the planning and exploratory aspects of the upper limits of XF. I don't foresee needing it in the next 6 months, but that may all change in 2 years' time. Who knows.

Either way, I'm taking the time to think about these things before pressing "the button"
 
Thinking out loud here...

Use this add-on product.
CloudFront sources from S3.
S3 is configured with AWS lifecycle rules to move objects to Glacier after x days (a rough sketch of such a rule is at the bottom of this post): https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Create a cron to move the threads that have said attachments (maybe use tags?) to a non-public forum, as the attachments will break.

S3 is $0.023 per GB and Glacier is $0.004 per GB per month, or roughly 5-6x cheaper.
Honestly, the cost savings on even 4 TB are hardly worth the extra effort. You'd have to have massive data to make it worth moving threads and breaking them for any random bump, in my opinion.
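The lifecycle rule itself would look roughly like this (prefix and day count are just examples), applied with aws s3api put-bucket-lifecycle-configuration:

Code:
{
    "Rules": [
        {
            "ID": "archive-old-attachments",
            "Filter": { "Prefix": "internal_data/attachments/" },
            "Status": "Enabled",
            "Transitions": [
                { "Days": 365, "StorageClass": "GLACIER" }
            ]
        }
    ]
}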
 