Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

No permission to download
Its permissions must have changed, though, so it seems unrelated to any of the changes required by this setup.

You still need to ensure that data and internal_data and all of their contents are fully writable.
 
@Chris D in case I want to use an S3-compatible cloud storage (so not AWS S3), is the only thing I have to do change the endpoint option?

example from:
'endpoint' => 'https://s3.eu-west-2.amazonaws.com'
to
'endpoint' => 'https://compatibleservice-endpoint-url.com'

or is https://s3.amazonaws.com hardcoded somewhere?
 
Well, this guide was written because there is already an S3-compatible service, DigitalOcean Spaces, and the guide demonstrates a custom endpoint being passed in.

So yes, if there’s another truly S3 compatible service then you should be able to use this.
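For illustration, this is roughly what that looks like in config.php using the same closure-plus-adapter pattern the guide sets up. The credentials, region, bucket name and endpoint below are placeholders for whatever your S3-compatible service provides, not values from the guide, and this fragment only shows the endpoint change (the internal_data mount and external data URL are configured separately).

Code:
// Sketch only: point the AWS SDK client at a non-AWS, S3-compatible endpoint.
// Credentials, region, bucket and endpoint values are placeholders.
$s3 = function()
{
	return new \Aws\S3\S3Client([
		'credentials' => [
			'key' => 'YOUR_KEY',
			'secret' => 'YOUR_SECRET'
		],
		'region' => 'us-east-1', // many S3-compatible services accept any region value
		'version' => 'latest',
		'endpoint' => 'https://compatibleservice-endpoint-url.com'
	]);
};

$config['fsAdapters']['data'] = function() use($s3)
{
	return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'your-bucket', 'data');
};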
 
Nice guide, though I do have 2 questions:
  1. Isn't it a security issue to use the same, publicly accessible bucket for both data and internal_data?
    As far as I understand S3, anyone could just guess & probe URLs on S3 directly to access attachments, even without appropriate permission.
    This could be a significant problem, especially if some content is normally only accessible via payment (user upgrades, paid resources, etc.)
  2. Doesn't this significantly increase bandwidth usage (and latency) as all attachments viewed by visitors must be downloaded from S3 by the server just in time?
 
It should be fairly trivial to put data and internal_data under different buckets. For XF attachments, the file hash is actually built into the filename/path, so you can't easily guess it.

For payment-style resources you will want to proxy them using nginx's X-Accel-Redirect or Apache's X-Sendfile, so PHP retains full access control but the webserver does the content proxying. Bandwidth and latency to S3 can be mitigated by adding an HTTP proxy/caching layer between the redirect and the proxied request.

My Attachment Improvements add-on has most of the parts except for the proxy-caching setup (and it only supports the nginx redirect approach). I may write up a few tutorials sometime next month.
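To make the X-Accel-Redirect idea concrete, the pattern looks something like the sketch below (this is not code from any add-on): PHP does the permission check, then hands the transfer to the webserver via a header pointing at a hypothetical internal nginx location that proxies to the bucket.

Code:
// Sketch of the X-Accel-Redirect pattern, not code from any add-on.
// "/protected-attachments/" is a hypothetical internal nginx location
// (marked "internal;") that proxies requests on to the S3/Spaces bucket.

// ... run the usual XenForo permission checks here first ...

$key = 'attachments/0/123-0123456789abcdef.data'; // example object key only

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="example.pdf"');
header('X-Accel-Redirect: /protected-attachments/' . $key);
exit;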
 
@Chris D - unable to upgrade to the latest XF with this setup:

Code:
Fatal error: Uncaught Error: Class 'Aws\S3\S3Client' not found in /home/nginx/domains/mattwservices.co.uk/public/src/config.php:29
Stack trace:
#0 [internal function]: XF\App->{closure}()
#1 /home/nginx/domains/mattwservices.co.uk/public/src/XF/FsMounts.php(17): call_user_func(Object(Closure))
#2 /home/nginx/domains/mattwservices.co.uk/public/src/XF/App.php(858): XF\FsMounts::loadDefaultMounts(Array)
#3 /home/nginx/domains/mattwservices.co.uk/public/src/XF/Container.php(28): XF\App->XF\{closure}(Object(XF\Container))
#4 /home/nginx/domains/mattwservices.co.uk/public/src/XF/App.php(2154): XF\Container->offsetGet('fs')
#5 /home/nginx/domains/mattwservices.co.uk/public/src/XF.php(543): XF\App->fs()
#6 /home/nginx/domains/mattwservices.co.uk/public/src/XF/Util/File.php(488): XF::fs()
#7 /home/nginx/domains/mattwservices.co.uk/public/src/XF/Error.php(196): XF\Util\File::installLockExists()
#8 /home/nginx/domains/mattwservices.co.uk/public/src/XF/App.php(1947): XF\Error->displayFatalExceptionMessage(Object(Error))
#9 /home/ngin in /home/nginx/domains/mattwservices.co.uk/public/src/config.php on line 29
 
It's not totally clear when this is happening. Can you clarify? Also by latest XF do you mean XF 2.1 Beta 2 or XF 2.0.11?
 
That's a decent workaround for now. It's because we're programmatically autoloading the files in an add-on so it's kind of expected. But there might be a way around that. I'll report back.
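For anyone hitting the same "Class 'Aws\S3\S3Client' not found" error during an upgrade, the usual stop-gap is to require the SDK autoloader bundled with the add-on near the top of src/config.php, before the client is constructed. This is an assumed workaround based on the _vendor path visible in the stack traces in this thread, not an official instruction; adjust the path to your installation.

Code:
// Assumed workaround, not official instructions: make the bundled AWS SDK
// autoloadable before config.php constructs Aws\S3\S3Client.
// config.php lives in src/, so __DIR__ points there; the _vendor path is
// taken from the stack traces above and may differ on your install.
$awsAutoloader = __DIR__ . '/addons/XFAws/_vendor/autoload.php';
if (file_exists($awsAutoloader))
{
	require_once $awsAutoloader;
}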
 
Chris D updated Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.x with a new update entry:

Setup instructions changed, plus XF 2.1 compatibility.

This update now includes two different add-ons. One is compatible with XF 2.0 and the other is compatible with XF 2.1.

The instructions have also been changed to avoid errors when accessing the install system in XF 2.0.

This change also introduces a slightly changed approach to avoid the repeated code when setting up the AWS SDK client.

The XF 2.1 version of the add-on also does not require the autoloader stuff to be applied as the add-on does that automatically. However, note that...

Read the rest of this update entry...
 
@Chris D I'm getting an error when visiting a WordPress page utilizing the ThemeHouse XPress add-on. It only happens when browsing a WordPress page; everything XenForo works great.

I submitted a support ticket at ThemeHouse and @Lukas W. replied with the following.

Hey there,

that’s an error in the underlying library. As the stack trace indicates, XPress hasn’t been called anywhere at that point. My assumption would be that it’s caused by the fact that the code is run outside the native XenForo directory, or similar. Not really anything we could do here I guess, as it’s not our code.

And here is the error I get in my nginx logs...

Code:
PHP message: PHP Fatal error: Uncaught TypeError: Argument 1 passed to Aws\Handler\GuzzleV6\GuzzleHandler::Aws\Handler\GuzzleV6{closure}() must be an instance of Exception, instance of TypeError given, called in /var/www/html/community/src/addons/XFAws/_vendor/guzzlehttp/promises/src/Promise.php on line 203 and defined in /var/www/html/community/src/addons/XFAws/_vendor/aws/aws-sdk-php/src/Handler/GuzzleV6/GuzzleHandler.php:45
Stack trace:
#0 /var/www/html/community/src/addons/XFAws/_vendor/guzzlehttp/promises/src/Promise.php(203): Aws\Handler\GuzzleV6\GuzzleHandler::Aws\Handler\GuzzleV6{closure}(Object(TypeError))
#1 /var/www/html/community/src/addons/XFAws/_vendor/guzzlehttp/promises/src/Promise.php(156): GuzzleHttp\Promise\Promise::callHandler(2, Object(TypeError), Array)
#2 /var/www/html/community/src/addons/XFAws/_vendor/guzzlehttp/promises/src/TaskQueue.php(47): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise{closure}()
#3 /var/

Do you have any idea what I need to do to fix this?
 
Isn't it a security issue to use the same, publicly accessible bucket for both data and internal_data?
Maybe I missed it, but where did you see in the instructions to make the bucket public? I didn't see anything mention that.
 
Maybe I missed it, but where did you see in the instructions to make the bucket public? I didn't see anything mention that.

Note: When copying your existing data files across, they will need to be made public. You can do this by setting the ACL to public while copying:

But you can make specific directories/files within the bucket public; it doesn't have to be applied to the entire bucket.
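As a rough illustration of keeping the public ACL scoped to data only, the same thing can be done per-object with the AWS SDK for PHP rather than bucket-wide. Here $s3 is an \Aws\S3\S3Client configured as in the guide, and the bucket and prefix names are examples, not values from the instructions.

Code:
// Sketch: mark only objects under the data/ prefix public-read,
// leaving everything under internal_data/ private.
$paginator = $s3->getPaginator('ListObjectsV2', [
	'Bucket' => 'your-bucket',
	'Prefix' => 'data/'
]);

foreach ($paginator as $page)
{
	foreach ($page['Contents'] ?? [] as $object)
	{
		$s3->putObjectAcl([
			'Bucket' => 'your-bucket',
			'Key'    => $object['Key'],
			'ACL'    => 'public-read'
		]);
	}
}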
 
I just came across this thread. Very interesting, since storage space was one of the reasons I have disabled attachments (for everyone but staff) on all of the forums I work with.

I use DigitalOcean for many of my projects and just investigated what Spaces offers, and have a question. Spaces offers us the opportunity (at no charge) to use the Spaces CDN. Am I right in guessing that, since the attachments are served back through XF on our site, we would not be able to take advantage of the CDN?

We were using the free Cloudflare level several years ago, but had too many problems with it. However, in the past couple of years I notice their free tier has gotten much better and I am thinking of trying it again on a few sites. I am guessing that CloudFlare would probably cache those for us instead.

For $5, I may just add one onto my account to take it for a spin...
 
How is an upgrade carried out once you move the internal_data and data folders to DigitalOcean?
Is it safe to completely delete those two folders after moving them to DigitalOcean? Or should you only delete the attachments folder within them / delete its contents?
 
It would be good if there were a setting so it didn't stream images through the server when the image is public. Couldn't it check this before the URL is output? My idea of offloading to S3 isn't for space purposes (although that's obviously a large part of it), it's more about reducing the number of requests to the server. Great stuff though, glad to see this in the core.
 
Hi Chris, does the maximum upload size set by the server still apply now that the files are offloaded?
 