XenForo with Vultr Object Storage

frm

All add-ons disabled.

Upload attachment.

No preview for the upload.

Post thread.


Click on the attachment.


Is this an XF bug or an object storage problem where Vultr isn't supported?

Config seems to work:

PHP:
$s3 = function()
{
   // S3-compatible client pointed at Vultr's ams1 endpoint
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'XXX',
         'secret' => 'YYY'
      ],
      'region' => 'ams1',
      'version' => 'latest',
      'endpoint' => 'https://ams1.vultrobjects.com'
   ]);
};
// public data (avatars, attachment thumbnails, etc.)
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'dev-XXX', 'data');
};
// URL browsers use to fetch that public data
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://XXX.ams1.vultrobjects.com' . $externalPath;
};
// internal data (full attachments, served through XenForo with permission checks)
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'dev-XXX', 'internal_data');
};

The bucket filled, but I'm unsure what share-this.webp would be named in the bucket, so I can't confirm it's coming from the bucket and not the server.
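One way to confirm files are landing in the bucket is to list the keys directly. A minimal sketch, run as a standalone script with the AWS SDK autoloaded; it reuses the same $s3 closure and the redacted 'dev-XXX' bucket name from the config above, and assumes the data/ prefix used by the adapter:

PHP:
$client = $s3(); // the same closure as in config.php
// list up to 20 keys under the data/ prefix to see what XenForo wrote there
$result = $client->listObjectsV2([
   'Bucket' => 'dev-XXX',
   'Prefix' => 'data/',
   'MaxKeys' => 20
]);
foreach ($result['Contents'] ?? [] as $object)
{
   echo $object['Key'], "\n";
}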

However, it's not showing on upload or as an attachment.

If I edit the post and insert the full image into the post, it shows up.


It's also blank in the ACP under Content > Attachments.


Except when I click share-this.webp, the attachment shows up.



Edit: You also can't set avatars. After the filename appears on upload, it disappears and the field says "Choose file" again.
 
All seems to work well now.
I'd use separate buckets for data and internal-data and make sure that internal-data can only be accessed with credentials.

Reason:
Standard XenForo code sets visibility to public for all created objects, which allows them to be accessed without credentials / signed requests.


Thumbnails for attachments created before 2.3.0 RC 5 are stored at data/attachments/<chunk>/<dataid>-<filehash>.jpg, while the full attachment is stored at internal_data/<chunk>/<dataid>-<filehash>.data.
So in this case it is easy to guess the URL of the full attachment from the thumbnail URL and access the full attachment directly (i.e. without permission checks).
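A minimal sketch of that split, assuming two hypothetical bucket names ('dev-data-XXX' left publicly readable, 'dev-internal-XXX' locked down on the Vultr side, e.g. via a bucket policy, so only credentialed or signed requests succeed):

PHP:
// public bucket: avatars, thumbnails, anything served directly to browsers
$config['fsAdapters']['data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'dev-data-XXX', 'data');
};
$config['externalDataUrl'] = function($externalPath, $canonical)
{
   return 'https://dev-data-XXX.ams1.vultrobjects.com/data/' . $externalPath;
};
// private bucket: full attachments; denying anonymous reads on the provider
// side keeps them reachable only with credentials / signed requests
$config['fsAdapters']['internal-data'] = function() use($s3)
{
   return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'dev-internal-XXX', 'internal_data');
};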
 
@Kirby, are you aware if you can "mirror" object storage for redundancy across two locations by essentially doubling the $s3 = function() definition in the same config, so that it writes to both ams1 and ams2 and allows switching reads from ams1 to ams2 if ams1 goes down for maintenance? (A Vultr maintenance notice came up stating ams1 would be down for a bit; with that warning, I could still write/read from ams2 while ams1 was down, then copy ams2 back to ams1 once it came back online.)

PHP:
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'XXX',
         'secret' => 'YYY'
      ],
      'region' => 'ams1',
      'version' => 'latest',
      'endpoint' => 'https://ams1.vultrobjects.com'
   ]);
};
// ... the data / internal-data adapter and externalDataUrl definitions ...
// second $s3 definition, pointing at ams2
$s3 = function()
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         'key' => 'XXX',
         'secret' => 'YYY'
      ],
      'region' => 'ams2',
      'version' => 'latest',
      'endpoint' => 'https://ams2.vultrobjects.com'
   ]);
};
 
@Chris D, is it possible to mirror data to more than one object storage location with something like the above? And if so, in the case where one bucket doesn't resolve, would it request from the second bucket and mirror back once the first comes online again?
 
Only the last one defined would take effect. Why would you want to do this?
Redundancy.

One bucket could go offline (say the Netherlands one for maintenance; it's happened) while the US bucket stays online to keep serving content.

Also, perhaps, geo-aware bucket access: grab from the bucket closest to the user for faster access speeds.
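As noted above, a second $s3 assignment would simply overwrite the first. Short of add-on code, the closest approximation is probably a manual switch, a hedged sketch of which follows; 'ewr1' is assumed here as a second Vultr region, and Vultr credentials may differ per location:

PHP:
// one active region at a time; flip this by hand when a maintenance
// notice arrives for the current region
$s3Region = 'ams1'; // e.g. change to 'ewr1' during ams1 maintenance

$s3 = function() use($s3Region)
{
   return new \Aws\S3\S3Client([
      'credentials' => [
         // note: keys may be per-location on Vultr, so these
         // may need to switch along with the region
         'key' => 'XXX',
         'secret' => 'YYY'
      ],
      'region' => $s3Region,
      'version' => 'latest',
      'endpoint' => 'https://' . $s3Region . '.vultrobjects.com'
   ]);
};

This only redirects reads and writes; keeping the two regions in sync afterwards (e.g. copying ams2 back to ams1) would still be an external job with a tool such as rclone.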
 
Here's an example with a bucket in Amsterdam: from the US, the images load slowly. But if I get on an Amsterdam VPN, they're a bit quicker (there's a slight delay from the VPN connection itself, yet they still load a few milliseconds faster). I suspect they'd load quicker for someone in the UK than in the US.

Having two buckets (or more) could give a better user experience: with geo-location, each user is served from the closest bucket so images and other content load quicker for them.

More important, though, is a redundant backup that can be used if one bucket goes down for whatever reason.

I suppose what I'm describing would need a feature suggestion to expand support for mirroring and geo-located serving.
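For the write-mirroring half of this, the Flysystem 1.x line that XenForo 2 bundles has a separately installable ReplicateAdapter (the league/flysystem-replicate-adapter Composer package), which applies writes and deletes to two adapters but always reads from the first. A hedged sketch, assuming the package is installed and that $s3Ams1 / $s3Ams2 are region-specific client closures like the ones above; note this gives mirroring only, not read failover or geo-routing:

PHP:
$config['fsAdapters']['data'] = function() use($s3Ams1, $s3Ams2)
{
   // primary adapter: all reads come from here
   $source = new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3Ams1(), 'dev-XXX-ams1', 'data');
   // replica adapter: receives a copy of every write and delete
   $replica = new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3Ams2(), 'dev-XXX-ams2', 'data');

   return new \League\Flysystem\Replicate\ReplicateAdapter($source, $replica);
};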
 