Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1 & XF 2.2


Chris D

XenForo developer
Staff member
Chris D submitted a new resource:

Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.x - The same concepts can be applied to other adapters too.

Why this guide?

Since XenForo 2.0.0 we have supported remote file storage using an abstracted file system named Flysystem. It's called an abstracted file system as it adds a layer of abstraction between the code and a file system. It means that it provides a consistent API for performing file system operations so that whether the file system is a local disk-based file system or a distributed and remotely accessible...
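To make "consistent API" concrete, here is a rough sketch using Flysystem 1.x directly (the paths, directory, and commented-out S3 swap are illustrative, not taken from the guide):

```php
<?php
// Illustrative sketch of Flysystem's abstraction (Flysystem 1.x API, as
// used by XF 2.1/2.2). The calling code is identical whether the backing
// store is a local directory or a remote service such as S3 — only the
// adapter changes.
use League\Flysystem\Filesystem;
use League\Flysystem\Adapter\Local;

// Local adapter today…
$fs = new Filesystem(new Local(__DIR__ . '/data'));

// …or S3 tomorrow — the put()/read() calls below stay exactly the same:
// $fs = new Filesystem(
//     new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3Client, 'bucket', 'data')
// );

$fs->put('example/hello.txt', 'Hello from Flysystem');
echo $fs->read('example/hello.txt');
```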

Read more about this resource...
 
{Kevin looks at the invoice he just pre-paid for a year of hosting on a shiny new beefy VPS, compares that to the price of his old VPS plus how much Digital Ocean costs... sighs and then bangs his head on his desk. Repeatedly.}🤣

Chris, thanks for the detailed write-up. 😎 With the files being physically offloaded to a different server are there any performance concerns at the time of upload for really large files? Any recommendations for tweaks to do if somebody is using this type of setup?
 
Hi Chris

We have ~20GB of files in our XFRM, so can I use this to move the RM storage to S3, as it is much cheaper for storage?
Will the download speed be affected if I put the files in a different location?
 
Hi Chris

We have ~20GB of files in our XFRM, so can I use this to move the RM storage to S3, as it is much cheaper for storage?
Will the download speed be affected if I put the files in a different location?
Yep you can move all of that over. Obviously with such a large amount of files it would be important to test the migration fully first. Make local backups of the files and be prepared to restore any backups if something goes wrong (but it shouldn’t do if you test properly 😉)

Hi Chris D,
It's not working on my website; it uploaded the avatar and thumbnail but not the full attachment. Attachments are still uploaded to my hosting. Can you help please? Demo: I tried https://thich.com/f/nha-dat.94/
Feel free to share the code you added in case there are any mistakes there (remove any of your own keys from the code though).

{Kevin looks at the invoice he just pre-paid for a year of hosting on a shiny new beefy VPS, compares that to the price of his old VPS plus how much Digital Ocean costs... sighs and then bangs his head on his desk. Repeatedly.}🤣

Chris, thanks for the detailed write-up. 😎 With the files being physically offloaded to a different server are there any performance concerns at the time of upload for really large files? Any recommendations for tweaks to do if somebody is using this type of setup?
Speed could definitely be a factor, but in most cases it shouldn't be problematic. No specific recommendations — just the usual concerns in terms of PHP upload limits and large files.
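For reference, the "usual PHP upload limits" live in php.ini (or the FPM pool config); the values below are illustrative starting points for large attachments, not recommendations from the guide:

```ini
; php.ini — illustrative values for allowing large attachment uploads
upload_max_filesize = 100M
post_max_size = 110M        ; must be at least upload_max_filesize
max_execution_time = 300    ; give slow transfers to the remote store time to finish
memory_limit = 256M
```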
 
Hi Chris D,
It's not working on my website; it uploaded the avatar and thumbnail but not the full attachment. Attachments are still uploaded to my hosting. Can you help please? Demo: I tried https://thich.com/f/nha-dat.94/
Thanks for sending me your config file.

That all looks set up correctly, so I suspect that it is actually working.

Note that the URL for your attachments will remain the same - it will still be your URL. But we stream this from the remote location.

If we didn’t do this then we wouldn’t be able to still handle permissions.
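To illustrate what "streaming from the remote location" means, a rough sketch — this is not XF's actual controller code, and the attachment path is a made-up example of XF's internal_data layout:

```php
<?php
// Illustrative sketch only — XF's real attachment controller does more.
// After checking the viewer's permissions, XF reads the file through the
// abstract file system (which may be local disk or S3 — same call either
// way) and streams it out under the site's own URL.
$stream = \XF::fs()->readStream('internal-data://attachments/0/123-abcdef.data');
header('Content-Type: application/octet-stream');
fpassthru($stream);
fclose($stream);
```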
 
Yep you can move all of that over. Obviously with such a large amount of files it would be important to test the migration fully first. Make local backups of the files and be prepared to restore any backups if something goes wrong (but it shouldn’t do if you test properly 😉)

Thanks Chris
Regarding s3cmd: will it copy the files from my server to S3, or will it move them so that all the files on my server are deleted?
 
I'm not totally familiar with it personally. I would say that support for such things is really beyond the scope of this guide, but it was obviously going to be something that got brought up.

I suspect it will either copy by default or there will be options to control its behaviour.
 
Regarding s3cmd: will it copy the files from my server to S3, or will it move them so that all the files on my server are deleted?
By default, it will copy them leaving the existing version on your server.

You can add an extra flag, --delete-after, to the sync command to delete the local files after they have been synced to the remote location.
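For reference, a sketch of the sync commands under discussion (the bucket name and directories are placeholders; s3cmd's delete flags control which side gets deleted, so check s3cmd --help sync and verify the behaviour on test data before running anything against live files):

```shell
# Copy (not move) XF's file directories up to the bucket — by default
# the local files stay in place.
s3cmd sync data/ s3://your-bucket/data/
s3cmd sync internal_data/ s3://your-bucket/internal_data/

# The flag mentioned above would be added like so:
# s3cmd sync --delete-after internal_data/ s3://your-bucket/internal_data/
```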
 
Haven't tested it, but this is actually really great. I'll have to read through it, but if we can easily offload our attachments (and get them back easily), this is IMO much better than all the new 2.1 HYS stuff, as it can save each of us tons of money. Thank you very much.
 
Not sure if I've got something wrong with the settings?

[screenshot: error when pasting in an image]

If you paste in an image, you get the above error, as the hash of the file can't be found.

When uploading an image into the post using the uploader:

[screenshot: error when uploading an image]

However, the file is on S3 (thumbnail does work, as I can edit the post and it's there for insertion)

[screenshot: the file present in the S3 bucket]

internal_data/attachment is there but it's not loading the "new" image (the previously manually uploaded ones sent over with S3CMD are working fine)

[screenshot: internal_data/attachment directory listing]
 
OK, so if I download those files back to the internal_data directory on my server, they work, so XF isn't fetching them from the new S3 storage.

PHP:
$config['fsAdapters']['data'] = function()
{
   $s3 = new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '1234',
         'secret' => '5678'
      ],
      'region' => 'ams3',
      'version' => 'latest',
      'endpoint' => 'https://ams3.digitaloceanspaces.com'
   ]);
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'NAME', 'data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://NAME.ams3.digitaloceanspaces.com/data/' . $externalPath;
};

$config['fsAdapters']['internal-data'] = function()
{
   $s3 = new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '1234',
         'secret' => '56784'
      ],
      'region' => 'ams3',
      'version' => 'latest',
      'endpoint' => 'https://ams3.digitaloceanspaces.com'
   ]);
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'NAME', 'internal_data');
};
 
I can't see anything wrong there and in my testing here it is working fine - almost identical configuration too, of course.

The attachments make their way to DO without issue:
[screenshot: attachments present in the DO Space]

I just had a potter around on your site, you still have the above config set up, right? I just uploaded an image to an empty conversation and it seems to have worked fine.
 
I can't see anything wrong there and in my testing here it is working fine - almost identical configuration too, of course.

The attachments make their way to DO without issue:
[screenshot: attachments present in the DO Space]

I just had a potter around on your site, you still have the above config set up, right? I just uploaded an image to an empty conversation and it seems to have worked fine.
No, I disabled the internal_data part of the code in the config.
 
I believe it may be the X-Accel-Redirect support (as it needs nginx to do a redirect to a URL). I'll run through the tutorial and see if I can figure out why it is breaking.
 
Thanks for sending me your config file.

That all looks set up correctly, so I suspect that it is actually working.

Note that the URL for your attachments will remain the same - it will still be your URL. But we stream this from the remote location.

If we didn’t do this then we wouldn’t be able to still handle permissions.
Thanks for the reply.
Is there any way I can make all uploaded files public by default? I'd like to use a subdomain, and if I turn on the image proxy option, the images can't be shown. Thank you
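For what it's worth, the AwsS3Adapter constructor accepts an options array as its fourth argument, which can set a default ACL on uploads — a sketch, assuming DigitalOcean Spaces honours the S3 public-read ACL (NAME and the variables match the earlier config; test before relying on it):

```php
// Sketch: pass a default ACL so files written via this adapter are public.
// Whether this is appropriate depends on the adapter — it may be reasonable
// for 'data' (avatars, thumbnails), but internal_data should stay private
// so XF can keep enforcing attachment permissions.
return new \League\Flysystem\AwsS3v3\AwsS3Adapter(
    $s3,
    'NAME',
    'data',
    ['ACL' => 'public-read'] // applied to every object this adapter uploads
);
```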
 
Once I enable support for this configuration, can I safely delete the data/ and internal_data/ directories?
 