Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1+

You do not need this to be installed any more starting with XF 2.3.

Just uninstall.

This isn't, and never has been, a "mod". It simply provides the AWS SDK, which is now included with XF 2.3 by default (just the S3 bits).

To clarify: uninstall the add-on after the XF 2.3 upgrade is complete, not before.
Awesome! Big step forward! :)

Thanks Chris!
 
If you're like me and had the add-on installed when you upgraded (even while reading Chris' comment at the same time as hitting the upgrade command 😭), you'll find yourself unable to log into the ACP. To fix it, just remove the add-on via the terminal:

Bash:
php cmd.php xf:addon-uninstall XFAws
 
Can someone help me? It's probably something stupid I am doing...

I'm moving to a new Ubuntu 22.04 server...

Getting the following error in Xenforo. "Fatal error: Uncaught Error: Class "League\Flysystem\AwsS3v3\AwsS3Adapter" not found"

I installed the AWS CLI by running apt install awscli.

It installed, but I am still getting the same error.

So I followed the instructions at https://www.xda-developers.com/how-install-aws-cli-ubuntu/ and installed it that way instead.

If I run
aws --version

I get the following...
aws-cli/2.15.34 Python/3.11.8 Linux/5.4.0-174-generic exe/x86_64.ubuntu.20 prompt/off

Yet the AwsS3Adapter error remains.

All my attachments are in the cloud so I don't want to turn this off.

Can anyone PLEASE help? Thanks!
 
  • Aws\S3\Exception\S3Exception: Error executing "PutObject" on "https://cdnnetwork.s3.us-west-004.b...s/0/195-6c209b1cab08f2734b0c10c5f65f8a98.data"; AWS HTTP error: Client error: PUT https://cdnnetwork.s3.us-west-004.backblazeb2.com/internal_data/attachments/0/195-6c209b1cab08f2734b0c10c5f65f8a98.data resulted in a 400 Bad Request response: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Error> <Code>InvalidArgument</Code> <Message>Unsupporte (truncated...) InvalidArgument (client): Unsupported value for canned acl 'public-read' - <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Error> <Code>InvalidArgument</Code> <Message>Unsupported value for canned acl 'public-read'</Message> </Error>
  • src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:195

Has anyone seen and resolved this, or have an idea how to resolve it?

I know it's not technically a supported feature, but it is very close to working with Backblaze B2 & Cloudflare for a really fast and affordable object store combo. This happens when I set the bucket to private: some uploads are still using the public-read canned ACL. It works perfectly fine when the bucket is public, but I would like that extra layer of security.


Thank you.
 
It's been a while since I have played with S3, but I think if you have a public S3 bucket and then set it to private, the permissions need to be updated on all the existing files.
You also need to upload with the 'private' ACL rather than 'public-read' when uploading to a private S3 bucket.

Again, it has been a while. Take what I say with a pinch of salt.
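If it comes to that, the re-permissioning of existing files could in principle be done with the same AWS SDK the add-on already bundles. This is a rough, untested sketch against a plain S3 bucket ('mybucket' is a placeholder, and $s3 is the client closure from the config examples in this thread); note that Backblaze B2's S3-compatible layer may not support per-object ACLs at all:

```php
// Untested sketch: re-apply a private ACL to existing objects after
// flipping a bucket from public to private. This only covers the first
// 1,000 objects; use the SDK's paginator for larger buckets.
$client = $s3();
$result = $client->listObjectsV2(['Bucket' => 'mybucket']);

foreach ($result['Contents'] ?? [] as $object)
{
   $client->putObjectAcl([
      'Bucket' => 'mybucket',
      'Key'    => $object['Key'],
      'ACL'    => 'private',
   ]);
}
```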
 
Thanks for the reply.

This is for a Backblaze bucket, which has only rudimentary ACLs: the bucket is either public or private. I'm just not sure how to switch the client on the XenForo side to use private instead of public-read as you mentioned. It seems like a small adjustment, but I'm not sure where to make it.
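(For anyone else chasing this: the bundled league/flysystem-aws-s3-v3 v1 adapter accepts an options array as its fourth constructor argument, and those options are merged into the S3 commands it sends. An untested sketch of forcing the ACL on internal data is below; bucket and prefix names are placeholders, and Flysystem may still overwrite the ACL key whenever XenForo passes an explicit visibility value on a write.)

```php
// Untested sketch: pass a default ACL via the adapter's options array.
// 'mybucket' and 'internal_data' are placeholders. Backblaze B2 only
// accepts the 'private' and 'public-read' canned ACLs.
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter(
      $s3(),
      'mybucket',
      'internal_data',
      ['ACL' => 'private']
   );
};
```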
 
Did you install the plugin in the XF ACP?
Yes, it's installed.

Again, I am transferring my site over to a brand-new server. Same OS (AlmaLinux 8), same PHP (PHP 8.2), same MariaDB software and version (I think it's 4.6), but a much more powerful server with a 16-core CPU instead of 8, and a larger NVMe drive. :D

Maybe I installed the Amazon CLI wrong...

The instructions I used were from


If I go and run aws --version on the new server I get...

aws-cli/2.15.34 Python/3.11.8 Linux/4.18.0-513.18.1.el8_9.x86_64 exe/x86_64.almalinux.8 prompt/off

And yes, I am still getting the same error from XenForo.

PHP Fatal error: Uncaught Error: Class "League\Flysystem\AwsS3v3\AwsS3Adapter" not found in /home/move/public_html/xen/src/config.php:90

I suspect I didn't install the Amazon stuff correctly.

Did I install the wrong thing? Does anyone have Linux instructions for how to install what I need? I am not even sure what I am supposed to be installing, to be honest.

Right now I am just moving this over to test the server and to have everything running for when I do the live migration later in the week. Once this is fixed, since both server configs and all settings are the same, everything should work. :D
 
You don't need to install AWS tools like that at all... the SDK is part of the XF add-on and is installed in the Flysystem dir within the add-on itself.

It looks like your config file has a bad path to the class or bucket.

What does your config file look like?

Here's mine:

Code:
//S3 Offloading extra config
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'longkeyhere',
         'secret' => 'longsecrethere'
      ],
      'region' => 'us-east-1',
      'version' => 'latest',
      'endpoint' => 'https://s3.us-east-1.amazonaws.com'
   ]);
};

//public data
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'data');
};

//private data
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'mybucket', 'pdata');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   // use the line below without CloudFront:
   //return 'https://mybucket.s3.us-east-1.amazonaws.com/data/' . $externalPath;
   // or use the CloudFront DNS name:
   return 'https://cdn.mydomain.com/data/' . $externalPath;
};
 
I contacted the person who originally set this up for me for help. Unfortunately he is on vacation until the 16th. (Probably a very well-deserved vacation!)

Here's the relevant section from my config.php:

Code:
// BEGIN DIGITAL OCEAN SUPPORT

$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'DO000000000000000WM',
         'secret' => 'VFtzE000000000000000000000000/e+sZB0c0HJ4'
      ],
      'region' => 'nyc33',
      'version' => 'latest',
      'endpoint' => 'https://nyc3.digitaloceanspaces.com'
   ]);
};


$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'satguys', 'data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://satguys.nyc3.cdn.digitaloceanspaces.com/data/' . $externalPath;
};

$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'satguys', 'internal_data');
};

// END DIGITAL OCEAN SUPPORT

This is still working fine on the old server at https://www.satelliteguys.us, but it's giving me a 500 error at https://move.satelliteguys.us/xen/admin.php (I have to go to admin.php because I didn't change the board URL before I transferred everything over).
 
And looking in /src/addons, the files from the add-on are there.

Code:
[root@speedy XFAws]# ls -l
total 96
-rw-r--r-- 1 move move   619 Nov 10  2022 addon.json
-rw-r--r-- 1 move move   411 Nov 10  2022 composer.json
-rw-r--r-- 1 move move 13231 Nov 10  2022 composer.lock
drwxr-xr-x 2 move move  4096 Nov 10  2022 _data
-rw-r--r-- 1 move move 62753 Dec  6  2022 hashes.json
-rw-r--r-- 1 move move  2077 Nov 10  2022 icon.png
drwxr-xr-x 8 move move   110 Nov 10  2022 _vendor
[root@speedy XFAws]#
 
The config looks fine. The error points at the files not being there... Are you sure you have /src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php?

It might be a path issue due to the new subdomain (move.you.com).
 
It is there. Not sure about the permissions but they should have transferred over.
 

Attachment: IMG_1415.webp (screenshot, 136.8 KB)
Brian, thanks for your help so far. I need to run as something blew up here at the office that requires my attention.

I do hope I can figure this out. :)
 
You might want to do an ls on that directory for us and post the contents of your XenForo public_html root folder.

Your files are owned by move:move. Is that correct? Does that user have the right permissions? Run the same ls command on your source server and compare. Here's mine:


Code:
ls -l src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php
-rw-r--r-- 1 www www 17541 Mar 31 19:19 src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php
 
Does anyone know what the addon uses s3_get/head_object for? I'm seeing these counts rise quite a bit.

Code:
S3 Get Object     s3_get_object     3,714
S3 Head Object    s3_head_object    5,458
 
I got it!

It wasn't an issue with AWS or this plugin or anything; it turns out that cPanel didn't import the database correctly, even though it said it did.

I did a manual dump and import, and things are working fine now. It looks like I will be moving the site for real tomorrow night. :D

https://move.satelliteguys.us is the temporary test site.

Thanks to everyone who tried to help.
 
Hello,

I asked this in the module forum, but perhaps I should ask in the customer support forum since this is a XenForo add-on. My question is mainly about how the add-on uses the following calls. I'd like some clarification so I can understand what the application is doing and when it does it.

Thank you!

Does anyone know what the addon uses s3_get/head_object for? I'm seeing these counts rise quite a bit.

Code:
S3 Get Object     s3_get_object     3,714
S3 Head Object    s3_head_object    5,458
 
From a REST protocol perspective, a HEAD request simply gets the metadata of an object (size, properties, etc.).
A GET returns everything HEAD does, plus the object itself.

How the add-on actually uses them I'd need to research, but it's likely that the HEAD requests are made during a rebuild-data call, or some other lookup ABOUT the image that doesn't actually bring the image back.

It's also possible that your bucket security needs to be looked at.
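To make the HEAD/GET distinction concrete with the SDK this add-on bundles, here's a hypothetical, untested sketch ($client is an \Aws\S3\S3Client instance; the bucket and key are placeholders):

```php
// HeadObject: metadata only (size, content type, ETag) -- no body is
// transferred, which is why it's cheap for existence/size checks.
$meta = $client->headObject([
   'Bucket' => 'mybucket',
   'Key'    => 'data/attachments/0/example.data',
]);
// $meta['ContentLength'] gives the size without downloading the file.

// GetObject: the same metadata plus the object body itself.
$obj = $client->getObject([
   'Bucket' => 'mybucket',
   'Key'    => 'data/attachments/0/example.data',
]);
// $obj['Body'] is a stream containing the file contents.
```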
 