Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

Those files were updated in going from 3.209.16 => 3.209.18. You can see the change log at https://github.com/aws/aws-sdk-php/blob/master/CHANGELOG.md. Normally I would say the changes wouldn't affect XF, as XF wouldn't use more than 1% of the SDK, but these releases have "Improved error handling for failed writes and appends on unclosed streams" on S3, so if anything, it will make things ever so slightly more robust.
As for your initial errors: maybe a caching issue on your end? Do you use opcache? It's possible that a common library file was cached that defined the boolean_value() function; as the old SDK was more than three years old, that function may not have existed previously (the FIPS ones didn't). Did you restart PHP (e.g. php-fpm) after the first update?
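If restarting php-fpm isn't convenient, the bytecode cache can be flushed from a throwaway script instead. A minimal sketch, assuming opcache is the culprit (it has to run through the same web SAPI, since the CLI keeps a separate cache):
PHP:
<?php
// opcache-reset.php: drop in the web root, request it once through the
// browser, then delete it; this forces the updated SDK files to be re-read
if (function_exists('opcache_reset'))
{
    var_dump(opcache_reset()); // bool(true) on success
}
else
{
    echo 'opcache is not loaded in this SAPI';
}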

I do use memcache so that might be an issue; I hadn't thought of that. I hadn't updated the AWS SDK on the system for a while and I'm not too familiar with it. Thanks for the details and the suggestion! I'll try another re-install during a slow-traffic time and see if reloading memcache is needed for the upgrade.
 
It's an unofficial official add-on...

If it's downloaded from the resource manager, you won't get update notifications. Watch the resource for updates like you would for any other.
 
I wish there were an "Installed" button in Resources, and that watched threads/resources had an [Installed] prefix or a checkmark. I watch a bunch of stuff there.
 

I tried re-installing and restarting memcached but still get the same issue. Perhaps it's related to updating aws-cli on the system recently? I hadn't done that for maybe a year or two.

# aws --version
aws-cli/2.4.16 Python/3.8.8 Linux/4.18.0-348.12.2.el8_5.x86_64 exe/x86_64.rocky.8 prompt/off
 
I was referring to opcache, not memcached, so it would be php-fpm you restart, or httpd if you are using Apache plus the PHP module. aws-cli wouldn't have any impact here; it is written in Python and is completely independent. I'm not actually sure which issue you are referring to. If it was "XenForo health check currently complains that 17 files have 'unexpected contents'", that is because you used Composer to update the aws-sdk-php library to one version higher than the version in the add-on. Either revert the update or just ignore that error; it really won't have any impact.

If you are referring to the other error, the one with the stack trace: if that is still happening, it's possible that you have an older version of that library installed somewhere else on the server and it is getting picked up first via the include path.
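A quick way to confirm which copy of the SDK is actually being picked up is to ask PHP where the symbols were loaded from. A sketch, assuming a one-off script in the forum root using the standard XF bootstrap:
PHP:
<?php
// where-is-sdk.php: print the file each AWS SDK symbol was loaded from,
// which will expose a stale duplicate copy shadowing the add-on's one
$dir = __DIR__;
require $dir . '/src/XF.php';
\XF::start($dir);
\XF::setupApp('XF\App');

$class = new \ReflectionClass(\Aws\S3\S3Client::class);
echo 'S3Client:        ' . $class->getFileName() . "\n";

// boolean_value() lives in the SDK's functions.php and is one of the
// helpers that only exists in newer releases
$func = new \ReflectionFunction('Aws\boolean_value');
echo 'boolean_value(): ' . $func->getFileName() . "\n";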
 

Gotcha, I had just updated the CLI so I thought that may have had something to do with it. Checking opcache: I'm running FastCGI due to some compatibility issues with older scripts, so it looks like it doesn't need to be restarted (edit: I disabled opcache and still got the same results). I did try restarting httpd and also reinstalling a couple of times, but I'm getting the same result.

I was getting the previous errors on every page request, which got a bit overwhelming, so I prefer the clean error log and just seeing the '17 files have unexpected contents' notice on the admin page. Thanks for the help!
 
What is the relationship between this add-on and the new XF Cloud service? If a forum is already set up for S3 storage, does the content get migrated to XF Cloud, and if so, what is that process like?
 
The typical Cloud plans don't utilise remote object storage. Files are stored locally (on high-performance SSDs).
 
More to the point, if people are signing up to a SaaS service like XF Cloud, then IaaS components like this are only for the provider to worry about.
FWIW, the instructions on the add-on seem a bit outdated now and the permissions aren't really quite right. Tempted to put together a CloudFormation template to make it super simple.
 
Do we need to set internal_data to public as well? That doesn't seem right...

For the data folder I have done
Bash:
s3cmd put * s3://mybucket/data/ --recursive --acl-public
And I can see the avatars are working now.
But for internal data I did
Bash:
s3cmd put * s3://mybucket/internal_data/ --recursive
And the internal_data is not working. Attachments are not loading.

I'm using an Amazon S3 bucket and set the object ownership to ACLs enabled and Bucket owner preferred.
 
Internal data shouldn't be public. I have a suspicion that you've got a permissions issue. As you uploaded all the data manually, you'd expect avatars to work, as they only require public read access.
If using an IAM user, check the creds. From config.php you'll have a line a bit like
Code:
'credentials' => ['key' => 'AKIAIEMDJCEXXXXXXXX', 'secret' => 'Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd']

Try adding a file, then reading it back using the credentials from above, e.g.
Code:
echo hi > test.file
s3cmd --access_key AKIAIEMDJCEXXXXXXXX --secret_key "Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd" put test.file s3://mybucket/internal_data/
s3cmd --access_key AKIAIEMDJCEXXXXXXXX --secret_key "Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd" get s3://mybucket/internal_data/test.file .
If you get an error, your creds are wrong.
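The same round trip can also be run through the PHP SDK that the add-on uses, which exercises the exact code path XF takes. A sketch as a one-off script in the forum root, with the placeholder creds/bucket from above:
PHP:
<?php
// s3-roundtrip.php: put an object, then read it back with the same creds
$dir = __DIR__;
require $dir . '/src/XF.php';
\XF::start($dir);
\XF::setupApp('XF\App');

$s3 = new \Aws\S3\S3Client([
    'credentials' => [
        'key' => 'AKIAIEMDJCEXXXXXXXX',
        'secret' => 'Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd'
    ],
    'region' => 'REGION',
    'version' => 'latest'
]);

$s3->putObject([
    'Bucket' => 'mybucket',
    'Key' => 'internal_data/test.file',
    'Body' => 'hi'
]);
$result = $s3->getObject(['Bucket' => 'mybucket', 'Key' => 'internal_data/test.file']);
echo (string) $result['Body']; // prints "hi" if both operations are permitted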
 
Thanks Jim Boy, that's actually very helpful.
I seem to have a problem with get requests.

put works perfectly and the file is in my bucket, but with get I get an error
Code:
s3cmd --access_key "AKIAIEMDJCEXXXXXXXX" --secret_key "Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd" get s3://mybucket/internal_data/test.file
ERROR: Parameter problem: File ./test.file already exists. Use either of --force / --continue / --skip-existing or give it a new name

I used the policy that's in this article, just copy-pasted it; "s3:GetObject" and "s3:GetObjectAcl" are in there, and it's linked to the correct user.

edit

Of course! That's because the file already exists in my local folder.
Get works too.
But my attachments still aren't loading.
 
Assuming the add-on is installed correctly, it must be your config, which should be something like
Code:
$s3 = function () {
    return new \Aws\S3\S3Client(['credentials' => ['key' => 'AKIAIEMDJCEXXXXXXXX', 'secret' => 'Sq6ZX9c0a323jsQDgo+tXiidcnshdg5hfasutysd'], 'region' => 'REGION']);
};
$config['fsAdapters']['data'] = function () use ($s3) {
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'data/');
};
$config['fsAdapters']['internal-data'] = function () use ($s3) {
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'internal_data/');
};
Change REGION to match the region where your bucket is. If you are not sure what that is, run
Code:
s3cmd info s3://mybucket
and use the value in the 'location' field.
Note that with the folder location values, I have included the trailing forward slash, which I think is required in your case.
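If s3cmd isn't handy, the SDK can answer the same question. A sketch, reusing the $s3 closure from the config above in a one-off script:
PHP:
// GetBucketLocation reports the region a bucket was created in;
// an empty LocationConstraint means the original us-east-1 region
$location = $s3()->getBucketLocation(['Bucket' => 'mybucket']);
echo $location['LocationConstraint'] ?: 'us-east-1';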
 
My config seems to be correct; I think the problem must be on the S3 side.
This is what I have. I replaced the keys with XXX and my bucket name with mybucket to post this here.

PHP:
# Amazon S3 function
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'XXXXXXXXXXXXXXXXXXXX',
         'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
      ],
      'region' => 'eu-central-1',
      'version' => 'latest',
      'endpoint' => 'https://s3.eu-central-1.amazonaws.com'
   ]);
};
# Amazon S3 filesystem adaptor
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'data/');
};
# Amazon S3 ensure attachment and thumbnail urls are correct
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://mybucket.s3.eu-central-1.amazonaws.com/data/' . $externalPath;
};
# add support for the internal_data directory, attachments and any other stuff that should be "private".
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'internal_data/');
};
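A one-off script in the forum root can test these adapters end to end through XF's abstracted filesystem rather than s3cmd. A sketch, with an arbitrary test file name:
PHP:
<?php
// fs-test.php: write and read back a file via the internal-data mount,
// which goes through the S3 adapter configured above
$dir = __DIR__;
require $dir . '/src/XF.php';
\XF::start($dir);
\XF::setupApp('XF\App');

\XF\Util\File::writeToAbstractedPath('internal-data://s3-test.txt', 'hello');
echo \XF::fs()->read('internal-data://s3-test.txt'); // "hello" if the adapter works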
 
I just disabled the add-on and enabled it again, and then I got this. As you can see in the post above, that line is in the config file.

Code:
Error: Class "League\Flysystem\AwsS3v3\AwsS3Adapter" not found in src/config.php at line 25
XF\App->{closure}() in src/XF/FsMounts.php at line 19
XF\FsMounts::loadDefaultMounts() in src/XF/App.php at line 1106
XF\App->XF\{closure}() in src/XF/Container.php at line 31
XF\Container->offsetGet() in src/XF/App.php at line 2595
XF\App->fs() in src/XF/Util/File.php at line 195
XF\Util\File::writeToAbstractedPath() in src/XF/Service/Template/Compile.php at line 146
XF\Service\Template\Compile->writeCompiled() in src/XF/Service/Template/Compile.php at line 43
XF\Service\Template\Compile->recompile() in src/XF/Entity/Template.php at line 435
XF\Entity\Template->_postSave() in src/XF/Mvc/Entity/Entity.php at line 1270
XF\Mvc\Entity\Entity->save() in src/XF/Service/Advertising/Writer.php at line 89
XF\Service\Advertising\Writer->write() in src/XF/Repository/Advertising.php at line 72
XF\Repository\Advertising->writeAdsTemplate() in src/XF/AddOn/DataType/AdvertisingPosition.php at line 100
XF\AddOn\DataType\AdvertisingPosition->XF\AddOn\DataType\{closure}() in src/XF.php at line 370
XF::triggerRunOnce() in src/XF/Mvc/Dispatcher.php at line 158
XF\Mvc\Dispatcher->dispatchLoop() in src/XF/Mvc/Dispatcher.php at line 57
XF\Mvc\Dispatcher->run() in src/XF/App.php at line 2351
XF\App->run() in src/XF.php at line 517
XF::runApp() in admin.php at line 13
 
Well that will happen while the add-on is disabled if those lines are in the config file. Disabling the add-on stops that class from loading.
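If you want config.php to survive the add-on being toggled off, the closure can be made defensive. A sketch (the local fallback path is an assumption; adjust it to the real internal_data location):
PHP:
$config['fsAdapters']['internal-data'] = function () use ($s3)
{
    // when the add-on is disabled its autoloader is gone, so fall back
    // to the stock local adapter instead of a fatal "Class not found"
    if (!class_exists(\League\Flysystem\AwsS3v3\AwsS3Adapter::class))
    {
        return new \League\Flysystem\Adapter\Local(\XF::getRootDirectory() . '/internal_data');
    }
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'internal_data/');
};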
 
Post edit:
s3cmd is throwing this error on its configuration test:
ERROR: Test failed: 403 (AccessDenied): Access Denied
ERROR: Are you sure your keys have s3:ListAllMyBuckets permissions?

I thought that the s3:ListBucket permission from the JSON should cover it?
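For reference, the error text itself points at the likely cause: s3cmd's configuration test calls the account-level ListBuckets API, which needs s3:ListAllMyBuckets on resource "*", while the policy from the article is presumably scoped to one bucket; the add-on itself never makes that call. A sketch of the same probe through the PHP SDK, reusing the $s3 closure from config.php:
PHP:
// ListBuckets is account-wide, so it requires s3:ListAllMyBuckets on "*";
// a bucket-scoped policy gets AccessDenied here even though the bucket
// itself is fully usable by the add-on
try
{
    print_r($s3()->listBuckets()['Buckets']);
}
catch (\Aws\S3\Exception\S3Exception $e)
{
    echo $e->getAwsErrorCode(); // "AccessDenied" with a bucket-scoped policy
}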
 