Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

But what is the purpose of this add-on if we cannot delete our attachment files?

The only reason to use services like Amazon S3 is that they are much cheaper and our hosted servers don't have the same capacity. But if I cannot free up disk space on my server, why use this at all?

I am just asking genuinely. I have been interested in this setup for a long time, but it can cause a lot of problems for forums that depend on attachments, and because of that I am not confident enough to try it.
You can delete internal_data/attachments, just don't delete internal_data itself because, as Chris said, it is used for other purposes where an internal, local directory is required.
 
Ahh, I see. The whole directory should not be deleted, but the children directories can be deleted. Now I understand, thanks.
 
And as always, proceed with caution: make sure your backups are in order, move or rename the directory first and see if everything looks good on your forum, and then, once you're certain it won't break your site, delete it.
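A minimal sketch of that cautious approach, assuming the default XenForo directory layout and that new uploads are already being served from S3:

Code:
# Rename first so you can roll back instantly if attachments break
mv internal_data/attachments internal_data/attachments.bak

# Browse the forum and confirm attachments still load from S3,
# then remove the renamed copy once you are certain
rm -rf internal_data/attachments.bak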
 
I have it working now on one of my smaller sites; I just did it last night. At first I was getting permission errors, but once I made the bucket public, it worked.
The only issue I have is that the old files are not showing up on my forum ... but everything else works perfectly.
 
You might have to change the permissions of all the files to public, IIRC. From what I remember, I ran an s3cmd command to do that, but I think that was only necessary because I did not make the files public while copying them from my hosting server to S3.
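If you go the s3cmd route, a sketch of that kind of command (the bucket name is a placeholder):

Code:
# Recursively mark every existing object in the bucket as publicly readable
s3cmd setacl s3://yourbucketname --acl-public --recursive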
 
I have a full backup, so I will proceed carefully and remove the attachments. That may be the biggest chunk. Thanks.

PS: I used the AWS command-line tool; it is also very easy to handle.
 
Yup, I had to make the bucket public as well to make it work.
@Chris D, you may want to update your guide; the instructions say to "choose all defaults until you get it created", or words to that effect. In reality, folks should deselect the "Block all public access" option and bypass the AWS warning that such a setting is not recommended.

EDIT: maybe this is not how you should do it; perhaps only specific folders are meant to be public.
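For reference, the same "Block all public access" setting can also be lifted from the AWS CLI after the bucket exists (a sketch; the bucket name is a placeholder, and per the edit above, check whether your setup really needs the whole bucket public):

Code:
# Remove the public access block from this bucket
aws s3api delete-public-access-block --bucket yourbucketname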
 
Struggling with configuring this with AWS S3 and Cloudflare. When I configure everything and try to update my avatar, nothing happens (no errors) and the S3 bucket remains empty. Notes:
1) Followed the guide, but deselected "block all" when creating the bucket.
2) Matched the bucket name to my desired same-domain destination (i.e., the name of the bucket is xxx.domain.com).
3) Added the CNAME to Cloudflare, with the NAME being the xxx per above and the TARGET being xxx.domain.com.s3.<region>.amazonaws.com (see the DNS check below).
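To sanity-check the DNS piece, something like the following (a sketch; note that if the record is proxied through Cloudflare, you will see Cloudflare addresses rather than the CNAME target):

Code:
# Confirm the hostname resolves as expected
dig +short xxx.domain.com CNAME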

config:

Code:
$s3 = function()
{
   // Shared S3 client; the key, secret, and region are placeholders
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'ABC',
         'secret' => 'xyz'
      ],
      'region' => '<region>',
      'version' => 'latest',
      'endpoint' => 'https://s3.<region>.amazonaws.com'
   ]);
};

// Route XenForo's external data directory to the bucket, under the "data" prefix
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xxx.domain.com', 'data');
};

// Public URL XenForo emits for externally stored data
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://xxx.domain.com/data/' . $externalPath;
};
Any thoughts?
 
I'm stuck there, actually. I can get the new content to upload and serve from S3, but I'm struggling with the best method to transport the old data.
 
Copy it with the AWS command-line tool.

Install the AWS CLI:
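One common way on a Linux host, assuming Python and pip are available (AWS also provides standalone installers):

Code:
# Install the AWS CLI from PyPI and verify it
pip install awscli
aws --version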

Configure it:
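The configure step is interactive and prompts for your access key, secret, default region, and output format:

Code:
# Writes credentials to ~/.aws/credentials and settings to ~/.aws/config
aws configure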

Copy the data, e.g.:

Code:
aws s3 cp internal-data s3://{yourbucketname}/internal-data/ --recursive
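If your setup relies on public object ACLs (as discussed earlier in the thread), the copy command also accepts an ACL flag; whether you need it depends on your bucket configuration:

Code:
# Same copy, but marks each uploaded object as publicly readable
aws s3 cp internal-data s3://{yourbucketname}/internal-data/ --recursive --acl public-read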

Was very easy.
 
Awesome, thanks; I will work on it. Question: can you use Strict SSL, or do you have to use Full? If Strict, what page rules can assist with the config?

Also, do you need to add any ACL permissions to the copy CLI command?
 
Where do you want to use strict SSL? Between EC2 and S3? I did not configure anything extra there, except the routes inside AWS.
 
I get a 526 error (Cloudflare could not validate the origin's SSL certificate) if I try to hit my CNAME entry from my browser when Cloudflare is set to Strict mode, but not when set to Full. Perhaps I shouldn't concern myself with that; it was just a test.
 
Your best bet is DigitalOcean ... I just finished moving my forum there, and it was so much easier.
 
Other options: Backblaze B2 if your storage requirement is less than 10 GB, Wasabi if you need close to 1 TB, and StackPath if you want to pay per use for storage with no extra charges.
 