Using DigitalOcean Spaces or Amazon S3 for file storage in XF 2.1 & XF 2.2

Good afternoon. I'm trying to upload a file that is 7 GB in size. The server and the forum are configured for file sizes that large, but when the upload completes it shows the error "uploaded_file_failed_cant_write". I use DigitalOcean Spaces as my remote storage. Please tell me what the problem could be?
 
Spaces has the following file size limits: PUT requests can be at most 5 GB; each part of a multipart upload can be at most 5 GB; and each part of a multipart upload must be at least 5 MiB, except for the final part.

You'll likely need to configure multipart uploads in your configuration block for large files like this.
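
For reference, this is a minimal standalone sketch of a multipart upload to a Spaces endpoint using the AWS SDK's MultipartUploader. The endpoint, bucket name, key, and part size here are placeholders, not values from this thread:
PHP:
// Illustrative sketch only: endpoint, bucket, key, and part size are placeholders.
// Each part must be between 5 MiB and 5 GB, so a 7 GB file has to be split into parts.
$s3 = new \Aws\S3\S3Client([
   'credentials' => ['key' => '***', 'secret' => '***'],
   'region' => 'us-east-1', // the SDK requires a region; the endpoint selects the actual Spaces region
   'version' => 'latest',
   'endpoint' => 'https://nyc3.digitaloceanspaces.com'
]);

$uploader = new \Aws\S3\MultipartUploader($s3, '/path/to/large-file.bin', [
   'bucket' => 'your-space-name',
   'key' => 'uploads/large-file.bin',
   'part_size' => 100 * 1024 * 1024 // 100 MiB per part
]);

try {
   $uploader->upload();
   echo "Upload complete\n";
} catch (\Aws\Exception\MultipartUploadException $e) {
   echo $e->getMessage(), "\n";
}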
 
Does this need to be configured on the forum itself, or in the DO Space settings (CORS configuration)?
 
There's an addon here for chunked uploads
 
This only chunks the upload from the client to the server, not from the server to S3.

Be aware that if you use Cloudflare you will frequently time out on very large files, as the chunking process will take longer than the 100 s total it allows.
 
Hey, everybody.
I configured S3 to work with Yandex.Cloud. Everything works, no problems.
Code:
// Shared S3 client factory for Yandex Object Storage
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '**************',
         'secret' => '**************'
      ],
      'region' => 'ru-central1',
      'version' => 'latest',
      'endpoint' => 'https://storage.yandexcloud.net'
   ]);
};

// Public data directory (avatars, attachment thumbnails), stored under bucket/data
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'bucket', 'data');
};

// Public URL used to serve files from the data adapter
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://bucket.storage.yandexcloud.net/data/' . $externalPath;
};

// Private internal_data directory (attachment files, etc.), stored under bucket/internal_data
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'bucket', 'internal_data');
};
There is one case I don't understand. One person uses the host "website.yandexcloud.net" and his image thumbnails appear after uploading:
Code:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://bucket.website.yandexcloud.net/data/' . $externalPath;
};
In my case, thumbnails do not appear after uploading when I use the "website.yandexcloud.net" host. However, when I use "storage.yandexcloud.net", everything is fine:
Code:
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://bucket.storage.yandexcloud.net/data/' . $externalPath;
};
What could explain this peculiarity? I suspect the region is a factor, but I'm not certain. I'm asking purely out of curiosity: why would this be? Is anything known about this kind of thing?
 
Anyone got this working with Contabo Object Storage?
All I have is Key, Secret, and this info on the screenshot:
(screenshot of the Contabo Object Storage settings)

While testing, I got this error:
Code:
Aws\S3\Exception\S3Exception: Error executing "PutObject" on "https://xfdata.usc1.contabostorage.com/data/avatars/o/0/1.jpg";
AWS HTTP error: cURL error 6: Could not resolve host: xfdata.usc1.contabostorage.com; Unknown error

I think the bucket name "xfdata" should not be used in the host/URL.
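
If Contabo expects the bucket name in the path rather than as a subdomain, the AWS SDK can be told not to build virtual-hosted URLs. A rough sketch, assuming the endpoint from the error above and using the SDK's standard use_path_style_endpoint option (the region slug is a guess):
PHP:
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => '**************',
         'secret' => '**************'
      ],
      'region' => 'usc1', // assumption: region slug shown in the Contabo panel
      'version' => 'latest',
      'endpoint' => 'https://usc1.contabostorage.com',
      'use_path_style_endpoint' => true // keep the bucket in the path, not the hostname
   ]);
};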
 
So Cloudflare R2 is out of beta and they have finally made public buckets available, which I imagine should remove the need for additional code to make it work with XenForo. Though I am not sure whether R2 has all the S3 API compatibility XenForo needs. Would love to see anyone here give it a try.

Public Buckets · Cloudflare R2 docs

R2 is now Generally Available

Update: gave it a go, couldn't get it to work. It has been a long time since I had S3 working through this add-on, so either I made a mistake or there are still API compatibility issues. I would wait for someone else to share their experiences.
 
I can get it working, until I try to upload an attachment:

Code:
Aws\S3\Exception\S3Exception: Error executing "PutObject" on "https://REDACTED.r2.cloudflarestorage.com/internal_data/attachments/23/23612-4ee7121e10de557ec13b4f9d410145f3.data"; AWS HTTP error: Server error: `PUT https://REDACTED.r2.cloudflarestorage.com/internal_data/attachments/23/23612-4ee7121e10de557ec13b4f9d410145f3.data` resulted in a `501 Not Implemented` response: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NotImplemented</Code><Message>Header &apos;x-amz-acl&apos; with value (truncated...) NotImplemented (server): Header 'x-amz-acl' with value 'public-read' not implemented - <?xml version="1.0" encoding="UTF-8"?><Error><Code>NotImplemented</Code><Message>Header &apos;x-amz-acl&apos; with value &apos;public-read&apos; not implemented</Message></Error> in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php at line 195
Aws\WrappedHttpHandler->parseError() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php at line 97
Aws\WrappedHttpHandler->Aws\{closure}() in src/vendor/guzzlehttp/promises/src/Promise.php at line 204
GuzzleHttp\Promise\Promise::callHandler() in src/vendor/guzzlehttp/promises/src/Promise.php at line 169
GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}() in src/vendor/guzzlehttp/promises/src/RejectedPromise.php at line 42
GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}() in src/vendor/guzzlehttp/promises/src/TaskQueue.php at line 48
GuzzleHttp\Promise\TaskQueue->run() in src/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php at line 118
GuzzleHttp\Handler\CurlMultiHandler->tick() in src/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php at line 145
GuzzleHttp\Handler\CurlMultiHandler->execute() in src/vendor/guzzlehttp/promises/src/Promise.php at line 248
GuzzleHttp\Promise\Promise->invokeWaitFn() in src/vendor/guzzlehttp/promises/src/Promise.php at line 224
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 269
GuzzleHttp\Promise\Promise->invokeWaitList() in src/vendor/guzzlehttp/promises/src/Promise.php at line 226
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 269
GuzzleHttp\Promise\Promise->invokeWaitList() in src/vendor/guzzlehttp/promises/src/Promise.php at line 226
GuzzleHttp\Promise\Promise->waitIfPending() in src/vendor/guzzlehttp/promises/src/Promise.php at line 62
GuzzleHttp\Promise\Promise->wait() in src/addons/XFAws/_vendor/aws/aws-sdk-php/src/S3/S3ClientTrait.php at line 35
Aws\S3\S3Client->upload() in src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php at line 607
League\Flysystem\AwsS3v3\AwsS3Adapter->upload() in src/addons/XFAws/_vendor/league/flysystem-aws-s3-v3/src/AwsS3Adapter.php at line 392
League\Flysystem\AwsS3v3\AwsS3Adapter->writeStream() in src/vendor/league/flysystem/src/Filesystem.php at line 122
League\Flysystem\Filesystem->putStream()
call_user_func_array() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 431
League\Flysystem\EventableFilesystem\EventableFilesystem->callFilesystemMethod() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 395
League\Flysystem\EventableFilesystem\EventableFilesystem->delegateMethodCall() in src/vendor/league/flysystem-eventable-filesystem/src/EventableFilesystem.php at line 71
League\Flysystem\EventableFilesystem\EventableFilesystem->putStream() in src/vendor/league/flysystem/src/MountManager.php at line 615
League\Flysystem\MountManager->putStream() in src/XF/Util/File.php at line 187
XF\Util\File::copyFileToAbstractedPath() in src/XF/Service/Attachment/Preparer.php at line 78
XF\Service\Attachment\Preparer->insertDataFromFile() in src/addons/SV/AttachmentImprovements/XF/Service/Attachment/Preparer.php at line 74
SV\AttachmentImprovements\XF\Service\Attachment\Preparer->insertDataFromFile() in src/XF/Service/Attachment/Preparer.php at line 38
XF\Service\Attachment\Preparer->insertAttachment() in src/XF/Attachment/Manipulator.php at line 199
XF\Attachment\Manipulator->insertAttachmentFromUpload() in src/XF/Pub/Controller/Attachment.php at line 91
XF\Pub\Controller\Attachment->actionUpload() in src/XF/Mvc/Dispatcher.php at line 352
XF\Mvc\Dispatcher->dispatchClass() in src/XF/Mvc/Dispatcher.php at line 259
XF\Mvc\Dispatcher->dispatchFromMatch() in src/XF/Mvc/Dispatcher.php at line 115
XF\Mvc\Dispatcher->dispatchLoop() in src/XF/Mvc/Dispatcher.php at line 57
XF\Mvc\Dispatcher->run() in src/XF/App.php at line 2353
XF\App->run() in src/XF.php at line 524
XF::runApp() in index.php at line 20

You need to use this fix here:

Once that is done, it works.
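
The fix itself isn't quoted here, but for illustration only, this is the kind of change that addresses the error above, assuming the problem is the public-read ACL the Flysystem adapter requests by default. The class name R2CompatibleAdapter is made up for this sketch:
PHP:
// Illustration only: force visibility to private so the adapter never asks for a
// public-read ACL, which R2's PutObject rejects as not implemented.
class R2CompatibleAdapter extends \League\Flysystem\AwsS3v3\AwsS3Adapter
{
   protected function upload($path, $body, \League\Flysystem\Config $config)
   {
      $config->set('visibility', \League\Flysystem\AdapterInterface::VISIBILITY_PRIVATE);
      return parent::upload($path, $body, $config);
   }
}

// Then reference it from the adapter closures in config.php, for example:
$config['fsAdapters']['data'] = function() use($s3)
{
   return new R2CompatibleAdapter($s3(), 'bucket', 'data');
};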
 
Oh, I would give it another try! That fix is from when R2 had private-only buckets. I guess there are still R2 S3-compatibility issues!

Not sure if it's a good idea to move to R2 with these unofficial fixes to add-on code. Thanks though!
 
It states in the documentation that the x-amz-acl header on the PutObject API operation isn't implemented:

 
Thanks! So there's hope that it might get supported eventually and could work in the future without any manual code changes.

Tried it, and yes, it does work! I was able to upload. I haven't tried migrating data to R2 to get everything else working, but it's still very nice. Thanks!
 
Does anyone have an idea on how much stuff is going on in the background with regards to the Class A transactions?


It's not a busy site, and the number of Class A transactions seems pretty high considering the traffic the site gets.

 
Could be related to the oEmbed cache, which also goes into the internal_data folder. Not sure if there is a config option that changes that location to the host server.
 
PHP:
// XenForo Settings - Path Configuration
    'codeCachePath'     => '/home/{..}/local_cache/code_cache',
    'tempDataPath'      => '/home/{..}/local_cache/temp_data',
Make sure to create the folders, make them writable, and make sure access to them is denied in your web server, similar to internal_data.
 
I do have these lines in my config from the days when I was testing S3/B2.

Code:
$config['codeCachePath'] = 'code_cache';
$config['tempDataPath'] = 'temp';

These are probably from some post earlier in this thread. I created the folders in the base folder; I assume that's the default location if a full path is not provided. I haven't noticed any errors for years now. If these are indeed being used, the oEmbed cache folder still appeared on R2 for me when I tested it earlier today.
 
oEmbed is not really the issue with the increased Class A operations; this is caused by the code cache and temp data being stored in S3 rather than locally on the server.
 
I don't see code_cache or temp being used in R2; they are still present in internal_data on the local server.
 