Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1 & XF 2.2

@VersoBit and others, can you share what you ended up doing to get Cloudflare R2 up and running fully for you? Looking at the most up-to-date add-on, it doesn't appear any code changes are needed anymore. I think I'm getting tripped up with the endpoint, bucket, and prefix fields. I've tried a lot of different combos, but I'm getting the error you were getting when trying to stream an attachment on an otherwise public R2 bucket. I have the "domain connected" with cdn.mydomain.com, and thumbnails and avatars are viewable on the site and directly. I am also hosting the files in Linode's Object Storage without issue and have just been using rclone to keep R2 up to date while I try to figure this out (keeping production mostly on Linode except during some testing).

Code:
Error executing "ListObjectsV2" on "https://<account-id-number>.r2.cloudflarestorage.com/?list-type=2&prefix=<bucketname>%2Finternal_data%2Fattachments%2F52%2F52920-09fa37e7cc527ec2a4a5e0bb8b6be2ae.data%2F&max-keys=1"; AWS HTTP error: Server error: `GET https://<account-id-number>.r2.cloudflarestorage.com/?list-type=2&prefix=<bucketname>%2Finternal_data%2Fattachments%2F52%2F52920-09fa37e7cc527ec2a4a5e0bb8b6be2ae.data%2F&max-keys=1` resulted in a `501 Not Implemented` response: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NotImplemented</Code><Message>ListBuckets search parameter list-type (truncated...) NotImplemented (server): ListBuckets search parameter list-type not implemented - <?xml version="1.0" encoding="UTF-8"?><Error><Code>NotImplemented</Code><Message>ListBuckets search parameter list-type not implemented</Message></Error> in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php at line 195

I'm giving up for now, but this is the last combo I tried. The error I'm getting doesn't make sense, because the add-on uses ListObjectsV2 now and list-type should be supported...


Code:
$s3 = function()
{
   return new \Aws\S3\S3MultiRegionClient([
      'credentials' => [
         'key' => '***',
         'secret' => '***'
      ],
      'version' => 'latest',
      'endpoint' => 'https://r2.cloudflarestorage.com'
   ]);
};

$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), '<accountid>', '<bucket>/data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://cdn.***.com/data/' . $externalPath;
};

$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), '<accountid>', '<bucket>/internal_data');
};
 
I came to this thread to see if anyone was using R2. Lo and behold... :)

So I've been putting the finishing touches on an easy-to-use R2 system (you don't need to edit your config file, don't need to use adapters intended for S3, etc.).

It's all configured through the XenForo admin via backend API calls to Cloudflare (it can create buckets, add public domain, configure caching rules in Cloudflare, etc. with a single click).

[Attached screenshot: 1671690060765.webp]

[Attached screenshot: 1671689849813.webp]

It also does some trickery where you don't have to put all of internal_data on a single Flysystem adapter. For example, in that screenshot it puts the entire data directory in one R2 bucket and just internal_data/attachments (but not the rest of internal_data) in a different bucket. Different buckets are used because one is intended for public access and the other is not.

It also has a XenForo CLI command for migrating data into (and out of) R2, although it's not designed for a massive number of objects being migrated, because Flysystem's listContents() method doesn't have paging (it loads info about all your files into a single array, which could be problematic for memory if you have a huge number of files you are trying to move).

[Attached screenshot: 1671690554459.webp]
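For really big migrations, paging the listing with the AWS SDK itself (rather than listContents()) keeps memory flat no matter how many objects there are. A rough sketch, not the add-on's actual code; it reuses an $s3 client factory like the earlier config examples, and the bucket/prefix names are placeholders:

PHP:
// Page through the bucket with ListObjectsV2 instead of loading everything at once
$paginator = $s3()->getPaginator('ListObjectsV2', [
   'Bucket' => 'example-bucket',
   'Prefix' => 'internal_data/attachments/'
]);

foreach ($paginator as $page)
{
   foreach ($page['Contents'] ?? [] as $object)
   {
      // Handle one object at a time (copy, verify, delete, etc.)
      echo $object['Key'], "\n";
   }
}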

Did I mention you don't need to edit anything in your config file?

Anyway... I'm looking for people who are using R2 or want to use R2 who would be interested in testing (mainly the auto-configuration portion). See this post:

 
Hey @digitalpoint, what great timing!

I installed your CF add-on on my site a few days ago after running on Cloudflare for a bit and deciding I was going to go all in. My Linode Object Storage bucket was meant to be "temporary", but I had to get out of the filesystem because I was running out of space and didn't want to resize. Their bucket is cheap, pulls from the same egress budget I'm already using, and is in the same data center as my origin.

So, I'm interested in trying this out. Editing the config file isn't too scary ;) but who wants to do that if you don't have to? I think being able to split the adapters is really nice... I set up a WAF rule to block internal_data on the CDN subdomain, but making it private is even better. I'll take this discussion to the other thread. Thanks for jumping in.
 
I'm giving up for now, but this is the last combo I tried. The error I'm getting doesn't make sense, because the add-on uses ListObjectsV2 now and list-type should be supported...
You're including your bucket name in the prefix. The bucket and prefix need to be separate. It looks like you're supplying your account ID where you should be supplying your bucket. Unless Cloudflare only allows one bucket per account, your account ID should be part of the endpoint, not the bucket. For example:
PHP:
return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), '<bucket>', 'data');
I could be completely wrong, but I'm guessing your current approach is resulting in weird issues. You're trying to call ListObjectsV2 on the main endpoint, not a bucket. Any error messages you receive are going to be unhelpful since it's the wrong URL.

Then again, the error message you posted contains a URL that doesn't actually match the code you provided. Either that error was from a different iteration, or I'm misunderstanding how Cloudflare R2's API works.
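For reference, a fuller sketch of what that would look like in config.php, with the account ID in the endpoint and the bucket passed on its own rather than folded into the prefix. I haven't tested this against R2 myself, so treat the region value and placeholders as assumptions:

PHP:
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '***',
         'secret' => '***'
      ],
      // R2's S3 API generally takes 'auto' as the region (assumption on my part)
      'region' => 'auto',
      'version' => 'latest',
      'endpoint' => 'https://<accountid>.r2.cloudflarestorage.com'
   ]);
};

$config['fsAdapters']['data'] = function() use($s3)
{
   // Bucket on its own, prefix on its own (no account ID or bucket in the prefix)
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), '<bucket>', 'data');
};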
 
@PaulB I tried that, but the problem is that causes the endpoint to end up as <bucketname>.<accountid>.r2.cloudflarestorage.com (which I believe throws a 401), whereas Cloudflare shows the API endpoint as <accountid>.r2.cloudflarestorage.com/<bucketname> in the field that lists the "S3 API" address.

Anyway, I have been working with @digitalpoint and have the R2 connection working with the Alpha release of the DigitalPoint Cloudflare add-on. I'd still like to know what I am doing wrong here, since others seem to be able to get this to work.
 
<bucketname>.<accountid>.r2.cloudflarestorage.com is in fact the correct endpoint (it's also what my add-on is using for R2 requests). However, I've also seen lots of Cloudflare documentation referencing <accountid>.r2.cloudflarestorage.com/<bucketname>, so my guess is it can work either way.

A 401 error is going to be an authentication error of some sort: it could be an issue with keys, or it could be an issue with signing the S3 request (which is really finicky about being exact, otherwise the signature is wrong). It took me a while to really work out the S3 signing process for my add-on, so I know about that first hand (unfortunately).
 
In the context of AWS S3, both formats are valid, but the latter (path-style, <accountid>.r2.cloudflarestorage.com/<bucketname>) is preferred in new code under most circumstances.

If you put the bucket in the prefix, you're going to end up with bad requests and various bad responses. It does have to be separate. Despite looking like it can just be prepended to the URI, that isn't the case.
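Which of the two styles actually gets used on the wire is controlled by a client flag in the AWS PHP SDK rather than by where the bucket sits in your config. A minimal sketch (credentials, region, and endpoint are placeholders/assumptions):

PHP:
$s3client = new \Aws\S3\S3Client([
   'credentials' => ['key' => '***', 'secret' => '***'],
   'region' => 'auto',
   'version' => 'latest',
   'endpoint' => 'https://<accountid>.r2.cloudflarestorage.com',
   // false (the default) builds virtual-hosted URLs: https://<bucket>.<accountid>.r2.cloudflarestorage.com/<key>
   // true forces path-style URLs:                    https://<accountid>.r2.cloudflarestorage.com/<bucket>/<key>
   'use_path_style_endpoint' => true
]);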
 
Makes sense... although I haven't run into any issue where bad requests/bad responses were happening.

Either way, when I was building my R2 API, I was running into some craziness with signing the S3 request, and as part of doing a zillion things trying to figure out what I was missing with the signing process, I was flipping between the two endpoints. The signing process coincidentally started working when I was using the <bucketname>.<accountid>.r2.cloudflarestorage.com endpoint. But I've since sorted out what the signing issue was, and considering most of the Cloudflare documentation references the endpoint as <accountid>.r2.cloudflarestorage.com/<bucketname>, I switched endpoints for the API (and request signing/everything is still working), so it seems prudent to leave it at that. I've had zero issues using either endpoint myself (once I got the authentication/signing sorted out).
 
Updated from xf aws 2.1 to 2.3 and now I have an error in my webserver log:

Code:
2022/12/27 16:16:49 [error] 901252#901252: *861506 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Error: Class 'GuzzleHttp\Promise\Create' not found in ~/src/addons/XFAws/_vendor/aws/aws-sdk-php/src/DefaultsMode/ConfigurationProvider.php:126

Stack trace:

#0 ~/src/vendor/guzzlehttp/promises/src/Promise.php(203): Aws\DefaultsMode\ConfigurationProvider::Aws\DefaultsMode\{closure}()

#1 ~/src/vendor/guzzlehttp/promises/src/Promise.php(174): GuzzleHttp\Promise\Promise::callHandler()

#2 ~/src/vendor/guzzlehttp/promises/src/RejectedPromise.php(40): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}()

#3 ~/src/vendor/guzzlehttp/promises/src/TaskQueue.php(47): GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}()

#4 ~/src/vendor/guzzlehttp/promises/src/Promise.php(246): GuzzleHttp\Promise\TaskQueue->run()

#5 ~/src/vendor/guzzlehttp/promises/src/Promise.php(223): GuzzleHttp\Promise\Promise->invokeWaitFn()

#6 /var/www/html/...PHP message: PHP Fatal error:  Uncaught Error: Class 'GuzzleHttp\Promise\Create' not found in ~/src/addons/XFAws/_vendor/aws/aws-sdk-php/src/DefaultsMode/ConfigurationProvider.php:126

Stack trace:

#0 ~/src/vendor/guzzlehttp/promises/src/Promise.php(203): Aws\DefaultsMode\ConfigurationProvider::Aws\DefaultsMode\{closure}()

#1 ~/src/vendor/guzzlehttp/promises/src/Promise.php(174): GuzzleHttp\Promise\Promise::callHandler()

#2 ~/src/vendor/guzzlehttp/promises/src/RejectedPromise.php(40): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}()

#3 ~/src/vendor/guzzlehttp/promises/src/TaskQueue.php(47): GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}()

#4 ~/src/vendor/guzzlehttp/promises/src/Promise.php(246): GuzzleHttp\Promise\TaskQueue->run()

#5 ~/src/vendor/guzzlehttp/promises/src/Promise.php(223): Guzz
Probably a lack of PHP 8.
 
Updated to PHP 8.1 and still have the error.

[error] FastCGI sent in stderr: "
PHP message: PHP Deprecated: Return type of XF\App::offsetExists($key) should either be compatible with ArrayAccess::offsetExists(mixed $offset): bool, or the #[\ReturnTypeWillChange] attribute should be used to temporarily suppress the notice in ~/src/XF/App.php on line 2353
PHP message: PHP Deprecated: Return type of XF\PreEscaped::jsonSerialize() should either be compatible with JsonSerializable::jsonSerialize(): mixed, or the #[\ReturnTypeWillChange] attribute should be used to temporarily suppress the notice in ~/src/XF/PreEscaped.php on line 21
PHP message: PHP Fatal error: During inheritance of JsonSerializable: Uncaught Error: Class "GuzzleHttp\Promise\Create" not found in ~/src/addons/XFAws/_vendor/aws/aws-sdk-php/src/DefaultsMode/ConfigurationProvider.php:126
Stack trace:
#0 ~/src/vendor/guzzlehttp/promises/src/Promise.php(203): Aws\DefaultsMode\ConfigurationProvider::Aws\DefaultsMode\{closure}()
#1 ~/src/vendor/guzzlehttp/promises/src/Promise.php(174): GuzzleHttp\Promise\Promise::callHandler()
#2 ~/src/vendor/guzzlehttp/promises/src/RejectedPromise.php(40): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}()
#3 ~/src/vendor/guzzlehttp/promises/src/TaskQueue.php(47): GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}()
#4 ~/src/vendor/guzzlehttp/promises/src/Promise.php(246): GuzzleHttp\Promise\TaskQueue->run()
#5 ~/src/vendor/guzzlehttp/promises/src/Promise.php(223): GuzzleHttp\Promise\P...PHP message: PHP Fatal error: Uncaught Error: Class "GuzzleHttp\Promise\Create" not found in ~/src/addons/XFAws/_vendor/aws/aws-sdk-php/src/DefaultsMode/ConfigurationProvider.php:126
Stack trace:
#0 ~/src/vendor/guzzlehttp/promises/src/Promise.php(203): Aws\DefaultsMode\ConfigurationProvider::Aws\DefaultsMode\{closure}()
#1 ~/

Reinstalling the add-on didn't help.
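For what it's worth, GuzzleHttp\Promise\Create only exists in guzzlehttp/promises 1.4.0 and later, and the trace shows the add-on's bundled AWS SDK resolving promises from XenForo's own src/vendor copy, so this looks more like a library version mismatch than a PHP version problem. A throwaway check you could run from the forum root (the vendor path is assumed from the trace above):

PHP:
<?php
// check-promises.php: run as `php check-promises.php` from the forum root
require __DIR__ . '/src/vendor/autoload.php';

// The Create class was introduced in guzzlehttp/promises 1.4.0
var_dump(class_exists(\GuzzleHttp\Promise\Create::class));

// Shows which copy of the promises library is actually being loaded
var_dump((new \ReflectionClass(\GuzzleHttp\Promise\Promise::class))->getFileName());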
 
I realized something with the R2 adapter I built (it's also going to apply to S3 and any other adapter you use with XenForo). I ended up making an option where you can see the underlying logs of API calls to R2. As part of that, I discovered that XenForo is using Flysystem's default assert mode: every time \XF::fs()->read() (or really any method) is called, it first makes an API call to check that the file exists before the read actually happens.

Long story short: the backend API calls being made are effectively doubled (you can see a couple of entries from my log):

Date | Status | Status code | Type | Bucket | Object
Yesterday at 9:42 PM | success | 200 | GetObject | iolabs-attachments | attachments/2/2113-ca80c577ac5014d9b8e64150f4e00895.data
Yesterday at 9:42 PM | success | 200 | HeadObject | iolabs-attachments | attachments/2/2113-ca80c577ac5014d9b8e64150f4e00895.data
Yesterday at 6:57 PM | success | 200 | GetObject | iolabs-attachments | attachments/2/2113-ca80c577ac5014d9b8e64150f4e00895.data
Yesterday at 6:57 PM | success | 200 | HeadObject | iolabs-attachments | attachments/2/2113-ca80c577ac5014d9b8e64150f4e00895.data

I've ended up working around it with my R2 adapter, but it's just something to be wary of for everyone using S3 adapters (double the API calls means twice the cost if you are paying per API request, and it's also slower for users).

After the change in my adapter, the logs now look like this (no extra API call every time we read something):

Date | Status | Status code | Type | Bucket | Object
Today at 11:44 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20335-01d9059fd84d920c09f6f0cd812eea2f.data
Today at 11:44 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20333-9d90fcff877a962bebe7ef154ecc2607.data
Today at 11:37 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20333-9d90fcff877a962bebe7ef154ecc2607.data
Today at 11:37 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20342-49be5d3588123ece1984d197f82cb29e.data
Today at 11:32 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20344-9f54237be24a11786626e63f4d5130a7.data
Today at 11:31 AM | success | 200 | GetObject | iolabs-attachments | attachments/20/20344-9f54237be24a11786626e63f4d5130a7.data

Anyway, just FYI...
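XenForo constructs the Filesystem wrapper itself, so I don't believe you can flip this purely from config.php, but for anyone using Flysystem 1.x directly, the extra HeadObject is the library's "assert" step and can be switched off when the Filesystem is created. A minimal sketch with placeholder credentials and bucket:

PHP:
use Aws\S3\S3Client;
use League\Flysystem\Filesystem;
use League\Flysystem\AwsS3v3\AwsS3Adapter;

$client = new S3Client([
   'credentials' => ['key' => '***', 'secret' => '***'],
   'region' => 'us-east-1',
   'version' => 'latest'
]);

$adapter = new AwsS3Adapter($client, 'example-bucket', 'internal_data');

// 'disable_asserts' skips the has()/HeadObject check Flysystem 1.x normally makes
// before read(), readStream(), etc., leaving just the GetObject call.
$filesystem = new Filesystem($adapter, ['disable_asserts' => true]);

$contents = $filesystem->read('attachments/example.data');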
 
I don't personally use S3, but if an add-on developer wanted to make S3 configuration easier (through options rather than a config edit), use fewer API calls (by disabling assert mode for that adapter), and be able to store just certain folders in S3 (for example attachments, but not file_check or DKIM keys), I made a simple add-on that was mainly intended as a demo of how to do that. The add-on itself lets you store just internal-data://keys/ in XenForo's data registry via a Data Registry adapter I made.

Anyway, it might be useful if someone wanted to make S3 easier/more efficient to use with XenForo...

 
I'm trying to set up my S3 bucket, but I'm getting the following error:

Code:
Aws\S3\Exception\S3Exception: Error executing "PutObject" on "https://togxen.s3.eu-north-1.amazonaws.com/data/avatars/o/0/1.jpg"; AWS HTTP error: Client error: `PUT https://togxen.s3.eu-north-1.amazonaws.com/data/avatars/o/0/1.jpg` resulted in a `400 Bad Request` response: <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not all (truncated...) AccessControlListNotSupported (client): The bucket does not allow ACLs - <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>E9SMJF1HQFGPW2EJ</RequestId><HostId>d5c+6OktWZsSg2HdoIszRwHlXaDxpsOAxLX9naFgjodeaRv6+r1c+gFpstgajqXAlo8OU8kZxq6fsYuDvJATcw==</HostId></Error>
in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php at line 195
Aws\WrappedHttpHandler->parseError() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php at line 97
Aws\WrappedHttpHandler->Aws\{closure}() in src/vendor/guzzlehttp/promises/src/Promise.php at line 204
GuzzleHttp\Promise\Promise::callHandler() in src/vendor/guzzlehttp/promises/src/Promise.php at line 169
GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}() in src/vendor/guzzlehttp/promises/src/RejectedPromise.php at line 42
GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}() in src/vendor/guzzlehttp/promises/src/TaskQueue.php at line 48
GuzzleHttp\Promise\TaskQueue->run() in src/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php at line 118
GuzzleHttp\Handler\CurlMultiHandler->tick() in src/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php at line 145
GuzzleHttp\Handler\CurlMultiHandler->execute() in src/vendor/guzzlehttp/promises/src/Promise.php at line 248
GuzzleHttp\Promise\Promise->invokeWaitFn() in src/vendor/guzzlehttp/promises/src/Promise.php at line 224
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 269
GuzzleHttp\Promise\Promise->invokeWaitList() in src/vendor/guzzlehttp/promises/src/Promise.php at line 226
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 269
GuzzleHttp\Promise\Promise->invokeWaitList() in src/vendor/guzzlehttp/promises/src/Promise.php at line 226
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 62
GuzzleHttp\Promise\Promise->wait() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/S3/S3ClientTrait.php at line 35
Aws\S3\S3Client->upload() in src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php at line 607
League\Flysystem\AwsS3v3\AwsS3Adapter->upload() in src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php at line 392
League\Flysystem\AwsS3v3\AwsS3Adapter->writeStream() in src/vendor/league/flysystem/src/Filesystem.php at line 122
League\Flysystem\Filesystem->putStream()
call_user_func_array() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 431
League\Flysystem\EventableFilesystem\EventableFilesystem->callFilesystemMethod() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 395
League\Flysystem\EventableFilesystem\EventableFilesystem->delegateMethodCall() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 71
League\Flysystem\EventableFilesystem\EventableFilesystem->putStream() in src/vendor/league/flysystem/src/MountManager.php at line 615
League\Flysystem\MountManager->putStream() in src/XF/Util/File.php at line 187
XF\Util\File::copyFileToAbstractedPath() in src/XF/Service/User/Avatar.php at line 273
XF\Service\User\Avatar->updateAvatar() in src/XF/Pub/Controller/Account.php at line 555
XF\Pub\Controller\Account->actionAvatar() in src/XF/Mvc/Dispatcher.php at line 352
XF\Mvc\Dispatcher->dispatchClass() in src/XF/Mvc/Dispatcher.php at line 259
XF\Mvc\Dispatcher->dispatchFromMatch() in src/XF/Mvc/Dispatcher.php at line 115
XF\Mvc\Dispatcher->dispatchLoop() in src/XF/Mvc/Dispatcher.php at line 57
XF\Mvc\Dispatcher->run() in src/XF/App.php at line 2353
XF\App->run() in src/XF.php at line 524
XF::runApp() in index.php at line 20

I'm using the following options, where my bucket name is togxen and I'm in eu-north-1. I have also tried changing the endpoint to the bucket URL minus the region, as in the return line at the end of the code.

Code:
// Amazon S3 configuration
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '*****',
         'secret' => '*****'
      ],
      'region' => 'eu-north-1',
      'version' => 'latest',
      'endpoint' => 'https://s3.eu-north-1.amazonaws.com'
   ]);
};

$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'togxen', 'data');
};

$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://s3.console.aws.amazon.com/s3/buckets/togxen/data/' . $externalPath;
};
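Two hedged guesses from that error: AccessControlListNotSupported usually means the bucket was created with ACLs disabled (Object Ownership set to "bucket owner enforced"), so the public-read ACL attached to uploads gets rejected; either re-enable ACLs on the bucket or grant public reads with a bucket policy instead. Separately, externalDataUrl should return the public object URL rather than the S3 console URL. A sketch using the bucket and region from the post:

PHP:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   // Public virtual-hosted object URL; requires the data/ prefix to be publicly readable
   return 'https://togxen.s3.eu-north-1.amazonaws.com/data/' . $externalPath;
};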
 
Managed to get XenForo 2 to use AWS/S3, but has anyone managed to get it to work with CloudFront? We're currently using AttachmentStore with CloudFront + S3, so ideally looking to replicate that setup in XF2.
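In XF2 the CloudFront piece usually comes down to the externalDataUrl closure: keep the data adapter writing to the S3 origin bucket and point the URL at the distribution. A minimal sketch (the CDN domain is a placeholder, and internal_data attachments still stream through XenForo itself):

PHP:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   // CloudFront distribution whose origin is the same S3 bucket the 'data' adapter writes to
   return 'https://cdn.example.com/data/' . $externalPath;
};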
 
Hi,

This is a weird bug that happens very randomly. We cannot find a specific pattern. The website can run for weeks without issue, and then this can happen two times in a single day.

XenForo goes completely offline and this error appears:

Code:
[04-Feb-2023 18:58:04 UTC] PHP Fatal error:  Uncaught Error: Class 'League\Flysystem\AwsS3v3\AwsS3Adapter' not found in /www/wwwroot/****.com/forum/src/config.php:42
Stack trace:
#0 /www/wwwroot/****.com/forum/src/XF/FsMounts.php(19): XF\App->{closure}()
#1 /www/wwwroot/****.com/forum/src/XF/App.php(1100): XF\FsMounts::loadDefaultMounts(Array)
#2 /www/wwwroot/****.com/forum/src/XF/Container.php(30): XF\App->XF\{closure}(Object(XF\Container))
#3 /www/wwwroot/****.com/forum/src/XF/App.php(2585): XF\Container->offsetGet('fs')
#4 /www/wwwroot/****.com/forum/src/XF.php(932): XF\App->fs()
#5 /www/wwwroot/****.com/forum/src/XF/Util/File.php(740): XF::fs()
#6 /www/wwwroot/****.com/forum/src/XF/Error.php(102): XF\Util\File::installLockExists()
#7 /www/wwwroot/****.com/forum/src/XF/App.php(2356): XF\Error->logException(Object(ErrorException), true, '')
#8 /www/wwwroot/****.com/forum/src/XF.php(236): XF\App->logException(Object(ErrorException), true)
#9 [internal function]: XF::handleFatalError()
#10 {main}
  thrown in /www/wwwroot/****.com/forum/src/config.php on line 42

The only thing that fixes it is to actually restart MySQL!


Any ideas?
 
Very strange indeed. Can you provide some info on how you restart MySQL? It sounds very unrelated, so my guess is you're restarting more than just MySQL.
 
Hi, I'm simply restarting the MySQL service in aaPanel. I'm pretty sure that only MySQL gets restarted.

This error drives us nuts. It feels like a low-level error.
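If it helps narrow things down, here's a diagnostic sketch for one of the config.php closures (not a fix): it logs which autoloaders are registered whenever the adapter class can't be found, so you can at least see what's missing at the moment the fatal happens. The bucket name and prefix are placeholders.

PHP:
$config['fsAdapters']['data'] = function() use($s3)
{
   if (!class_exists(\League\Flysystem\AwsS3v3\AwsS3Adapter::class))
   {
      // Record the registered autoloaders so the next failure leaves a clue behind
      $loaders = array_map(function($fn)
      {
         if (is_array($fn))
         {
            return (is_object($fn[0]) ? get_class($fn[0]) : $fn[0]) . '::' . $fn[1];
         }
         return is_string($fn) ? $fn : 'Closure';
      }, spl_autoload_functions() ?: []);

      error_log('AwsS3Adapter not autoloadable; loaders: ' . implode(', ', $loaders));
   }

   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), '<bucket>', 'data');
};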
 
The S3 console has changed somewhat since the FAQ was written. There seems to be no 'Programmatic' option, and the way of setting up ACL access seems a bit different.

My attempts to create a private bucket and use the policy script shown did not work, and returned the "guzzle" issue. So I deleted the original bucket and set another one up with the script. It will only work when the bucket is public and access is granted through the policy. Not really an issue now, but Amazon just sent news today that, effective April 2023, there will be some substantial changes in the wind.

I just wish I knew what that really was... Any thoughts? Here is the text of the email I received:

Hello,

We are reaching out to inform you that starting in April 2023 Amazon S3 will change the default security configuration for all new S3 buckets. For new buckets created after this date, S3 Block Public Access will be enabled, and S3 access control lists (ACLs) will be disabled.

The majority of S3 use cases do not need public access or ACLs. For most customers, no action is required. If you have use cases for public bucket access or the use of ACLs, you can disable Block Public Access or enable ACLs after you create an S3 bucket. In these cases, you may need to update automation scripts, CloudFormation templates, or other infrastructure configuration tools to configure these settings. To learn more, read the AWS News blog [1] and What's New announcement [2] on this change or visit our user guide for S3 Block Public Access [3] and S3 Object Ownership to disable ACLs [4]. Also, see our user guide for AWS CloudFormation on these settings [5][6].

If you have any questions or concerns, please reach out to AWS Support [7].

[1] https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/
[2] https://aws.amazon.com/about-aws/wh...able-access-control-lists-buckets-april-2023/
[3] https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
[4] https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html
[5] https://docs.aws.amazon.com/AWSClou...s3-bucket-publicaccessblockconfiguration.html
[6] https://docs.aws.amazon.com/AWSClou...s-properties-s3-bucket-ownershipcontrols.html
[7] https://aws.amazon.com/support

Sincerely,
Amazon Web Services
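For setups like the ones in this thread, the practical upshot is that brand-new buckets will have ACLs disabled and Block Public Access enabled, so the public-read ACL the adapter sends on upload gets rejected (the AccessControlListNotSupported error a few posts up). The simplest route is to re-enable ACLs and disable Block Public Access for the bucket after creating it; the ACL-free alternative is a bucket policy granting public read on the data prefix, which also requires uploads to stop sending ACLs. A policy sketch applied through the SDK; the bucket name is a placeholder, and it reuses an $s3 client factory like the earlier examples:

PHP:
$policy = json_encode([
   'Version' => '2012-10-17',
   'Statement' => [[
      'Sid' => 'PublicReadForumData',
      'Effect' => 'Allow',
      'Principal' => '*',
      'Action' => 's3:GetObject',
      'Resource' => 'arn:aws:s3:::example-bucket/data/*'
   ]]
]);

// Applies the policy; Block Public Access must allow policies that grant public access
$s3()->putBucketPolicy([
   'Bucket' => 'example-bucket',
   'Policy' => $policy
]);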
 