Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

I wouldn't recommend it. XF expects them to be there, and there are scenarios where we'll complain if they don't exist. Internal data also contains some other stuff which isn't offloaded by default.

We also cannot guarantee that add-ons (particularly those which haven't followed XF standards) haven't written directly into those directories, in which case their files won't be offloaded.

You should, however, at a minimum be able to remove data/avatars, data/attachments, data/video (in XF 2.1), data/resource_icons and data/xfmg, as well as internal_data/attachments, internal_data/file_check, internal_data/image_cache and internal_data/sitemaps.
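The offloading itself is a matter of pointing XF's abstracted file system at the bucket. As a hedged sketch (the bucket name "xftest", region "sfo2" and credentials are placeholders mirroring the DigitalOcean examples later in this thread), a config.php fragment covering both data and internal-data could look like:

```php
<?php

// Sketch only: bucket, region, endpoint and credentials are placeholders.
// A single client factory shared by both adapters.
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'YOUR_KEY',
         'secret' => 'YOUR_SECRET'
      ],
      'region' => 'sfo2',
      'version' => 'latest',
      'endpoint' => 'https://sfo2.digitaloceanspaces.com'
   ]);
};

// Public data (avatars, attachment thumbnails, etc.)
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'data');
};

// Internal data uses a separate prefix so public and internal files
// stay distinct within the same bucket.
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'internal_data');
};
```

With both adapters in place, newly written files go to the bucket; the local directories listed above still need to exist, but no longer need to hold content.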

What will happen if we decide to upgrade the forum to, let's say, 2.0.x or 2.1? Will the files contained within the data and internal_data folders be automatically copied to the remote storage?
 
Hey @Chris D, should the config code be added before or after installing the add-on? I added the config code first and then the add-on. Images work fine, but ffmpeg/videos get stuck in xf_mg_transcode_queue?
 
Hello @Chris D, I've found that this add-on conflicts with XenPorta 2 on "promote thread to featured": I can't upload the custom thumbnail (which is required; we can't choose an existing photo as the thumbnail of a featured thread in XenPorta 2).

Normally, XenPorta creates a "features" folder inside "data" to store those thumbnail images, but the add-on can't get the images uploaded to AWS.

Please have a look! Thank you.
 

Attachments: Screen Shot 2018-12-24 at 5.19.22 PM.webp
Hello,

how can I integrate a Google bucket?

The JSON key file looks like this:

"type": "service_account",
"project_id": ",
"private_key_id": "",
"private_key": "-----BEGIN PRIVATE KEY---------END PRIVATE KEY-----\n",
"client_email": "",
"client_id": "",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/.gserviceaccount.com"
}
 
No idea. It's beyond the scope of this guide. It would require a Flysystem adapter to interface the abstracted file system to Google's storage buckets.
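For anyone who wants to experiment, a rough sketch of what such wiring could look like, assuming the third-party superbalist/flysystem-google-storage package and google/cloud-storage are installed via Composer and autoloaded (this is not part of the add-on discussed in this thread, and the project ID, key file path and bucket name are placeholders):

```php
<?php

// Hypothetical sketch only: this add-on does not ship a Google adapter.
// Assumes superbalist/flysystem-google-storage and its dependencies
// are installed and autoloaded separately.
$config['fsAdapters']['data'] = function()
{
   $storageClient = new \Google\Cloud\Storage\StorageClient([
      'projectId' => 'your-project-id',                 // placeholder
      'keyFilePath' => '/path/to/service-account.json'  // the JSON key file above
   ]);

   $bucket = $storageClient->bucket('your-bucket');     // placeholder bucket name

   // The third argument is a path prefix, analogous to 'data' in the S3 examples
   return new \Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter($storageClient, $bucket, 'data');
};
```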
 
Is this correct?

Step 1: install the add-on
Amazon S3 for XenForo 2.0.1

Step 2: edit config.php:

PHP:
<?php

$config['db']['host'] = '127.0.0.1';
$config['db']['port'] = '3306';
$config['db']['username'] = 'root';
$config['db']['password'] = 'root';
$config['db']['dbname'] = 'mt';

$config['superAdmins'] = '1';
$config['enableTfa'] = false;





$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '1234',
         'secret' => '5678'
      ],
      'region' => 'sfo2',
      'version' => 'latest',
      'endpoint' => 'https://sfo2.digitaloceanspaces.com'
   ]);
};
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'xftest', 'data');
};
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://xftest.sfo2.digitaloceanspaces.com/data/' . $externalPath;
};


but it doesn't work.

Did I miss something? Composer?
 
I'm trying this on a fresh XF 2.0.12 install with 'Amazon S3 for XenForo 2.0.1'.
With the config.php below, I uploaded an avatar and it still doesn't work:

PHP:
<?php

$config['db']['host'] = 'localhost';
$config['db']['port'] = '3306';
$config['db']['username'] = 'root';
$config['db']['password'] = 'root';
$config['db']['dbname'] = 'mt';

$config['fullUnicode'] = true;


$config['fsAdapters']['data'] = function()
{
   $s3 = new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '1234',
         'secret' => '4567'
      ],
      'region' => 'sfo2',
      'version' => 'latest',
      'endpoint' => 'https://sfo2.digitaloceanspaces.com'
   ]);
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3, 'xftest', 'data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://xftest.sfo2.digitaloceanspaces.com/data/' . $externalPath;
};



I'm not sure what's wrong; maybe I missed autoloading the AWS SDK?
I copied the lines below into config.php and the whole site went down.
PHP:
\XFAws\Composer::autoloadNamespaces(\XF::app());
\XFAws\Composer::autoloadPsr4(\XF::app());
\XFAws\Composer::autoloadClassmap(\XF::app());
\XFAws\Composer::autoloadFiles(\XF::app());
 
You don't have to do anything other than what the guide asks you to do (it definitely does not tell you to add those autoload lines).

All I can tell you is that the examples you have given appear to be correct, so it should just work as it does for everyone else.
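When the config looks right but uploads still fail, one way to narrow things down is to bypass XF entirely and exercise the credentials with the SDK directly. A hedged sketch (the bucket, keys and endpoint are the placeholders from the earlier posts, and the autoloader path is assumed from the add-on's _vendor directory mentioned later in this thread):

```php
<?php

// Standalone connectivity check, run from the XF root. The autoloader
// path is an assumption based on the add-on's bundled _vendor directory.
require __DIR__ . '/src/addons/XFAws/_vendor/autoload.php';

$s3 = new \Aws\S3\S3Client([
   'credentials' => [
      'key' => '1234',     // placeholder
      'secret' => '5678'   // placeholder
   ],
   'region' => 'sfo2',
   'version' => 'latest',
   'endpoint' => 'https://sfo2.digitaloceanspaces.com'
]);

try
{
   // Attempt a simple write; success means the credentials, region and
   // endpoint are fine and the problem lies in the XF config instead.
   $s3->putObject([
      'Bucket' => 'xftest',
      'Key' => 'data/connectivity-test.txt',
      'Body' => 'hello'
   ]);
   echo "Upload OK\n";
}
catch (\Aws\Exception\AwsException $e)
{
   echo 'Upload failed: ' . $e->getMessage() . "\n";
}
```

If this script succeeds but XF still writes nothing to the bucket, the issue is in the adapter wiring rather than the credentials.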
 
Thanks, I'm still looking into what's wrong and testing.
 
After testing twice, I'm sure that XF 2.1.x works well and XF 2.0.x does not.

Is anyone else facing the same issue?
 
Just a quick question: I'm using Amazon S3 now. Is it possible to transfer all the data to another server in the future and still have it work with my existing forum? Thank you!
 
Today I tested again and can confirm that XF 2.1 works well, but the same code in config.php does not work in XF 2.0.
@Chris D, please take a look, thanks.
 
Hi, new user here. The instructions were great and this worked fine, but I found one little bug in the 2.0.1 version: the ContentType logic is backwards, so everything is uploaded with type 'application/octet-stream', which results in image links being downloaded rather than displayed in the browser. This fixed it for me:
Diff:
diff -r 0a4e1078630e xenforo/src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php
--- a/xenforo/src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php    Sun Jan 06 06:03:48 2019 +0000
+++ b/xenforo/src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php    Tue Jan 08 04:46:52 2019 +0000
@@ -553,7 +553,7 @@
         $options = $this->getOptionsFromConfig($config);
         $acl = isset($options['ACL']) ? $options['ACL'] : 'private';
 
-        if ( ! isset($options['ContentType']) && is_string($body)) {
+        if ( ! isset($options['ContentType']) && !is_string($body)) {
             $options['ContentType'] = Util::guessMimeType($path, $body);
         }

Thanks!
 
I'd suggest using rclone instead of s3cmd if you want to upload a lot of data. Rclone supports parallel transfers, which makes a monumental difference in speed; it's also more actively developed and has quite a few more features than s3cmd.
 
Testing this out on XF 2.1 RC1 and it throws a 500 error when adding the config. The values in config.php are exactly the same as in XF 2.0 (and I'm using the 2.1 version of the add-on).
 
@Chris D - your XF 2.1.0 download is missing Listener.php and Composer.php. I've copied them from the XF2.0.1 version, and it works now.
 