Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

I wouldn't recommend it. XF expects them to be there, and there are scenarios where we'll complain if they don't exist. Internal data also contains some other stuff which isn't offloaded by default.

We also cannot guarantee that add-ons (those which haven't followed XF standards) haven't written directly into those directories, in which case their files won't be offloaded.

You should, however, at minimum be able to remove data/avatars, data/attachments, data/video (in XF 2.1), data/resource_icons and data/xfmg. And internal_data/attachments, internal_data/file_check, internal_data/image_cache, internal_data/sitemaps.
 
Using Spaces.

Attachments work fine, both existing ones and newly uploaded ones.

But only newly uploaded avatars work. Existing avatars and media thumbnails don't work, even though I moved them over successfully.

I'm getting a 403 access denied error.
 
I suspect the bit you missed is this bit:

The files that go into the data folder in Spaces need to be publicly viewable, so when you move the files over, you need to set --acl-public.

I think you just need to copy the original files over again with the ACL set to public and that should sort it.
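If re-copying everything is awkward, the other option is to flip the ACLs on the objects that are already in the Space. Here's a rough sketch using the same SDK the add-on bundles (the key, secret and NAME placeholders are yours to fill in, and the autoloader path is a guess you may need to adjust):

<?php
// One-off CLI script: make every existing object under the data/ prefix publicly readable.
// Adjust the require path to wherever the bundled SDK's autoloader actually lives.
require 'src/addons/XFAws/_vendor/autoload.php';

$s3 = new \Aws\S3\S3Client([
    'credentials' => [
        'key' => 'YOUR_KEY',
        'secret' => 'YOUR_SECRET'
    ],
    'region' => 'nyc3',
    'version' => 'latest',
    'endpoint' => 'https://nyc3.digitaloceanspaces.com'
]);

// Page through everything under data/ and set a public-read ACL on each object.
$pages = $s3->getPaginator('ListObjects', [
    'Bucket' => 'NAME',
    'Prefix' => 'data/'
]);

foreach ($pages as $page)
{
    foreach ($page['Contents'] ?? [] as $object)
    {
        $s3->putObjectAcl([
            'Bucket' => 'NAME',
            'Key' => $object['Key'],
            'ACL' => 'public-read'
        ]);
    }
}

s3cmd setacl with --acl-public and --recursive should achieve the same thing if you'd rather do it from the command line.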
 
it will still be your URL. But we stream this from the remote location.
Does that mean the avatar's/attachment's canonical URL would be the same as the previous URL? And this whole AWS storage thing is completely transparent to the guests?

In that case, I assume each time a guest refreshes the page, the file is downloaded to the main server before it is sent to the guest. Am I correct?

Is it possible to get a URL like https://images.xenforo.com/community/data/avatars/m/130/130376.jpg?1541548951 by adding an A record for that subdomain pointing at the AWS/Spaces server? Then we could use Cloudflare on that subdomain and let it take care of image caching (which would also save bandwidth).
 
Does that mean the avatar's/attachment's canonical URL would be the same as the previous URL? And this whole AWS storage thing is completely transparent to the guests?
Yes for internal_data (i.e. the full size attachments).

In that case, I assume each time a guest refreshes the page, the file is downloaded to the main server before it is sent to the guest. Am I correct?
Yes, but also there would still be some level of browser caching, and maybe even CloudFlare caching.

Is it possible to get a URL like https://images.xenforo.com/community/data/avatars/m/130/130376.jpg?1541548951 by adding an A record for that subdomain pointing at the AWS/Spaces server? Then we could use Cloudflare on that subdomain and let it take care of image caching.
That's a public data URL, so it would already be coming directly from the S3/Spaces bucket and making use of any of their own caching mechanisms.

For internal_data stuff, attachments will still come from your own URL as before, e.g. https://xenforo.com/community/attachments/some-file.123. We do this because of permissions, i.e. if you share that URL, you can only see the attachment if you have permission to view the content.

We're considering some options in the future to change this, though the cost of that will be that attachments in private forums could theoretically be shared with users who cannot usually access them.
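If you do eventually want the public data behind a hostname you control (so Cloudflare can sit in front of it and cache it), the XenForo side of that is just an externalDataUrl override; how the subdomain is actually pointed at the bucket (Spaces CDN, Cloudflare proxying, etc.) is a separate exercise. A sketch, with files.example.com as a made-up subdomain:

$config['externalDataUrl'] = function($externalPath, $canonical)
{
    // files.example.com is hypothetical: it needs to resolve/proxy to the bucket so that
    // /data/... serves the same objects the data adapter writes.
    return 'https://files.example.com/data/' . $externalPath;
};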
 
You should, however, at minimum be able to remove data/avatars, data/attachments, data/video (in XF 2.1), data/resource_icons and data/xfmg. And internal_data/attachments, internal_data/file_check, internal_data/image_cache, internal_data/sitemaps.

So we should keep internal_data/xfmg?
 
I am getting this error when trying to save a file after installing the add-on and putting the code into config.php:

InvalidArgumentException: Endpoints must be full URIs and include a scheme and host in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/ClientResolver.php at line 595
  1. Aws\ClientResolver::_apply_endpoint() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/ClientResolver.php at line 288
  2. Aws\ClientResolver->resolve() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/AwsClient.php at line 161
  3. Aws\AwsClient->__construct() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/S3/S3Client.php at line 263
  4. Aws\S3\S3Client->__construct() in src/config.php at line 16
  5. XF\App->{closure}()
  6. call_user_func() in src/XF/FsMounts.php at line 17
  7. XF\FsMounts::loadDefaultMounts() in src/XF/App.php at line 858
  8. XF\App->XF\{closure}() in src/XF/Container.php at line 28
  9. XF\Container->offsetGet() in src/XF/App.php at line 2154
  10. XF\App->fs() in src/XF/Util/File.php at line 101
  11. XF\Util\File::copyFileToAbstractedPath() in src/XF/Service/User/Avatar.php at line 269
  12. XF\Service\User\Avatar->updateAvatar() in src/XF/Pub/Controller/Account.php at line 456
  13. XF\Pub\Controller\Account->actionAvatar() in src/XF/Mvc/Dispatcher.php at line 249
  14. XF\Mvc\Dispatcher->dispatchClass() in src/XF/Mvc/Dispatcher.php at line 88
  15. XF\Mvc\Dispatcher->dispatchLoop() in src/XF/Mvc/Dispatcher.php at line 41
  16. XF\Mvc\Dispatcher->run() in src/XF/App.php at line 1931
  17. XF\App->run() in src/XF.php at line 329
  18. XF::runApp() in index.php at line 13
 
Based on the config file you sent to me before, the two endpoint values should be:

https://nyc3.digitaloceanspaces.com/

Rather than:

https://xxx.nyc3.digitaloceanspaces.com/

You only need the xxx. bit on the externalDataUrl
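
So, roughly, for the data side of things (NAME being your Space name, KEY/SECRET your credentials):

$config['fsAdapters']['data'] = function()
{
    $s3 = new \Aws\S3\S3Client([
        'credentials' => ['key' => 'KEY', 'secret' => 'SECRET'],
        'region' => 'nyc3',
        'version' => 'latest',
        // Regional endpoint only: no Space name in front of it.
        'endpoint' => 'https://nyc3.digitaloceanspaces.com'
    ]);
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'NAME', 'data');
};

// Only the public URL carries the NAME. prefix.
$config['externalDataUrl'] = function($externalPath, $canonical)
{
    return 'https://NAME.nyc3.digitaloceanspaces.com/data/' . $externalPath;
};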

Chris, I'm still having some issues.

Do you see anything wrong in my current config below?

<?php

$config['db']['host'] = 'localhost';
$config['db']['port'] = '3306';
$config['db']['username'] = '###';
$config['db']['password'] = '###';
$config['db']['dbname'] = '###';

$config['fullUnicode'] = true;
$config['enableTfa'] = false;

$config['fsAdapters']['data'] = function()
{
    $s3 = new \Aws\S3\S3Client([
        'credentials' => [
            'key' => ' ########## ',
            'secret' => ' ########### '
        ],
        'region' => 'nyc3',
        'version' => 'latest',
        'endpoint' => ' https://nyc3.digitaloceanspaces.com '
    ]);
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'NAME', 'data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
    return ' https://NAME.nyc3.digitaloceanspaces.com/data/' . $externalPath;
};

$config['fsAdapters']['internal-data'] = function()
{
    $s3 = new \Aws\S3\S3Client([
        'credentials' => [
            'key' => '###########',
            'secret' => '#########'
        ],
        'region' => 'nyc3',
        'version' => 'latest',
        'endpoint' => 'https://nyc3.digitaloceanspaces.com'
    ]);
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'NAME', 'internal_data');
};

Still getting the Oops error:

Oops! We ran into some problems.
InvalidArgumentException: Endpoints must be full URIs and include a scheme and host in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/ClientResolver.php at line 595

  1. Aws\ClientResolver::_apply_endpoint() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/ClientResolver.php at line 288
  2. Aws\ClientResolver->resolve() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/AwsClient.php at line 161
  3. Aws\AwsClient->__construct() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/S3/S3Client.php at line 263
  4. Aws\S3\S3Client->__construct() in src/config.php at line 15
  5. XF\App->{closure}()
  6. call_user_func() in src/XF/FsMounts.php at line 17
  7. XF\FsMounts::loadDefaultMounts() in src/XF/App.php at line 858
  8. XF\App->XF\{closure}() in src/XF/Container.php at line 28
  9. XF\Container->offsetGet() in src/XF/App.php at line 2154
  10. XF\App->fs() in src/XF/Util/File.php at line 101
  11. XF\Util\File::copyFileToAbstractedPath() in src/XF/Service/User/Avatar.php at line 269
  12. XF\Service\User\Avatar->updateAvatar() in src/XF/Pub/Controller/Account.php at line 456
  13. XF\Pub\Controller\Account->actionAvatar() in src/XF/Mvc/Dispatcher.php at line 249
  14. XF\Mvc\Dispatcher->dispatchClass() in src/XF/Mvc/Dispatcher.php at line 88
  15. XF\Mvc\Dispatcher->dispatchLoop() in src/XF/Mvc/Dispatcher.php at line 41
  16. XF\Mvc\Dispatcher->run() in src/XF/App.php at line 1931
  17. XF\App->run() in src/XF.php at line 329
  18. XF::runApp() in index.php at line 13
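 
For what it's worth, one thing that stands out in the config posted above is the whitespace inside the quotes: the data adapter has 'endpoint' => ' https://nyc3.digitaloceanspaces.com ' (and the externalDataUrl return value starts with a space too). The SDK's ClientResolver rejects any endpoint it can't parse a scheme and host out of, and a leading space is enough to trip that check, which is exactly the "Endpoints must be full URIs and include a scheme and host" error above. Purely a guess, but it's worth trimming those strings so the two lines read:

    'endpoint' => 'https://nyc3.digitaloceanspaces.com'

    return 'https://NAME.nyc3.digitaloceanspaces.com/data/' . $externalPath;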
 
DigitalOcean is a neat service; I used to power a Phantasy Star Online server through them, but the way they charge per usage rather than a set monthly fee made it very difficult to keep my beloved server up and running with them.
 