[DigitalPoint] App for Cloudflare® 1.8.2

The xfmg folder is in the data directory, and the data is in that bucket.

So I think this might be the problem: it's expecting the data in another bucket, and it's not possible to change it to the data bucket because it's greyed out.
View attachment 297671
 
The bucket there is only for files that are in your local filesystem in the internal_data/xfmg/ folder. If you don’t have anything there, that bucket would not do anything. Don’t try to mix an internal_data bucket with a data bucket. That would literally be the same as you copying all the contents of your internal_data folder in your local filesystem into your data folder. Nothing good could come of it and you will break everything.

Again… internal_data is not the same folder as data and you trying to switch them will break XenForo (there is a reason the UI doesn’t allow it).

Find where XFMG is storing things in the local filesystem. If it’s anywhere in the data folder, then you are good. If it’s in the internal_data/attachment folder you are also good. If it’s in internal_data/xfmg, you are fine there too. All those locations are being handled by buckets from your screenshot.

You already said it’s in the data folder, which is handled by a bucket. So not sure why you are wanting to change it to an internal-data bucket?
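
For anyone curious how that separation works under the hood: XenForo exposes data and internal_data as two independent abstracted filesystems, and this add-on registers an adapter for each one automatically. Purely as an illustration of the same idea (not what the add-on actually ships), a hand-rolled config.php setup would look roughly like this, assuming you installed league/flysystem-aws-s3-v3 yourself; the bucket names, credentials and endpoint are placeholders:

// src/config.php -- illustrative sketch only
$s3 = function()
{
    return new \Aws\S3\S3Client([
        'credentials' => ['key' => 'YOUR_KEY', 'secret' => 'YOUR_SECRET'], // placeholders
        'region' => 'auto',
        'version' => 'latest',
        'endpoint' => 'https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com',
    ]);
};

// data:// -> public files (avatars, data/xfmg, etc.)
$config['fsAdapters']['data'] = function() use ($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'your-data-bucket');
};

// internal-data:// -> private files (internal_data/attachments, internal_data/xfmg, etc.)
$config['fsAdapters']['internal-data'] = function() use ($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'your-internal-data-bucket');
};

Two prefixes, two adapters, two buckets... which is why the UI won't let you point the internal_data bucket setting at the data bucket.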
 
internal_data/xfmg does not exist; data/xfmg does exist.
So that bucket does nothing, and that's OK.

XFMG is storing data in data/xfmg.
That bucket has a domain, data.reisforum.net; the avatars use it, but I don't see that domain being used for XFMG.


I'm not trying to change the data location, I'm just trying to figure out what's going on, because I think XFMG is loading images from the old location.

I think I will just rename the folder on my server and see if it still works. If it's pulling the images from the bucket it should still work.

Update:

It still works after renaming the folder on my server.
So XFMG isn't using the data.reisforum.net domain in its URLs.
That's why I was confused.
Glad it works now :cool:
 
Ya, I don’t really know much about XFMG. I don’t use it and don’t have a license for it. If it’s properly using XenForo’s abstracted filesystem (which I would bet money that it is), then it’s going to be using the data bucket on the backend if the files originally were in the data folder in the local filesystem.

Again, just to be clear… the data bucket covers everything in the local data folder. It doesn't matter if it's avatars, attachments, xfmg, abcxyz, etc.; if it's in the data folder, it's handled by the data bucket.

Try uploading something new to XFMG and see if it lands in the bucket (can look for things via Cloudflare’s dashboard) or the local filesystem. You can also see things moving to/from the bucket on the R2 page of this addon by clicking one of the links that shows logs for class A and class B operations.
 
It still works after renaming the xfmg folder on my server.
So XFMG isn't using the data.reisforum.net domain in its URLs.
That's why I was confused.
Glad it works now :cool:
Great add-on, by the way.
 
I vaguely remember something about XFMG using that internal_data folder for rare things (maybe it was for storing original images before they were watermarked or something). @Chris D would know better than I would if you are curious (I don’t have access to an XFMG license so I can’t be particularly helpful in answering what it does exactly or why it does it). 🤷🏻‍♂️
 
However, I’m not doing a backup of the R2 stuff myself. Every site is different of course, but for me, attachments and avatars aren’t worth the time/effort to protect against a user deleting something and then changing their mind later.
I was thinking about this, if looking for feature requests, an option to store in the filesystem as well as r2 would be great.

Can still serve from r2, but have a local copy for security (which then gets backed up by my normal schedule)
 
That's how I had it in the past with another add-on. And I really liked it.
I was also thinking about a backup, so that would remove the need for a separate backup.
 
I was thinking about this, if looking for feature requests, an option to store in the filesystem as well as r2 would be great.
R2 was designed for data durability and resilience and provides 99.999999999% (eleven 9s) of annual durability, which describes the likelihood of data loss.
For example, if you store 1,000,000 objects on R2, you can expect to lose an object once every 100,000 years, which is the same level of durability as other major providers.
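To connect those two numbers: eleven 9s of annual durability means the chance of losing any given object in a year is about 1 - 0.99999999999 = 10^-11. Multiply that by 1,000,000 stored objects and you get an expected 10^-5 object losses per year, which works out to roughly one lost object every 100,000 years.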
 
I was thinking about this, if looking for feature requests, an option to store in the filesystem as well as r2 would be great.

Can still serve from r2, but have a local copy for security (which then gets backed up by my normal schedule)
It certainly wouldn't be impossible... you really would just need to have a League filesystem adapter that was doing the functions of the R2 adapter as well as the local adapter.

That being said, I'm not sure doing it at the application/adapter level would be the best approach for backups. It would be similar to running all SQL queries on multiple database servers so one can serve as a backup. You would be adding time for end users for those things to complete (whether they are extra SQL queries or writing to multiple filesystems when they upload an avatar or attachment). You also run into the potential for the two filesystems to drift out of sync as time goes on (for example, maybe writing to one worked but the other failed for whatever reason).

I think a better way to do it is to run a cron task occasionally that syncs the source with the destination (a sync only transfers new/changed/deleted items, similar to how rsync works); rclone's sync command is well suited for that.

That decouples the process from users (the site isn't taking longer to do things for end users) and you don't run into an issue of the backup being out of sync for whatever reason. It also gives you added flexibility... you could not only back up to the local filesystem of your server, you could back up to a ton of other options (offsite NAS volume, another S3-compatible cloud provider, a different R2 bucket, etc.)
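
For example, with an rclone remote configured for Cloudflare R2 (the remote name, bucket and paths here are placeholders), a nightly cron entry along the lines of 0 3 * * * rclone sync r2:your-data-bucket /backups/r2/data would keep a local mirror of the data bucket, and a second entry could do the same for an internal_data bucket.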

It's mostly upside... the downside would be that it's not a real-time backup (if you ran the sync daily, for example, something that was uploaded and then deleted shortly thereafter wouldn't be available to be restored if it was uploaded after the last sync).

If someone really wanted to do it, they could extend a few of the methods in the DigitalPoint\Cloudflare\League\Flysystem\Adapter\R2.php file... specifically the write operations to also write somewhere else. But I still think the sync option of rclone is going to have more advantages.
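
If you did go that route, a minimal sketch might look like the following, assuming the R2 adapter follows Flysystem v1's AdapterInterface signatures (the subclass, mirror path and everything else here are hypothetical, not part of the add-on):

use League\Flysystem\Config;

// Hypothetical subclass that mirrors every successful write to a local directory.
class MirroredR2 extends \DigitalPoint\Cloudflare\League\Flysystem\Adapter\R2
{
    protected $mirrorPath = '/backups/r2-mirror'; // placeholder path

    public function write($path, $contents, Config $config)
    {
        $result = parent::write($path, $contents, $config);
        if ($result !== false)
        {
            $local = $this->mirrorPath . '/' . $path;
            @mkdir(dirname($local), 0755, true);
            @file_put_contents($local, $contents);
        }
        return $result;
    }

    // writeStream(), update(), updateStream() and delete() would need the same treatment.
}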

R2 was designed for data durability and resilience and provides 99.999999999% (eleven 9s) of annual durability, which describes the likelihood of data loss.
For example, if you store 1,000,000 objects on R2, you can expect to lose an object once every 100,000 years, which is the same level of durability as other major providers.
Ya, but that doesn't protect against user errors (like someone accidentally deleting something). So if someone wants to be able to restore something that a user deleted intentionally/accidentally, you would need a backup.
 
You are right, it's not the best idea. Great explanation. And I think rclone is a very reasonable solution. I like that. I will have to look into that.

And indeed, it's more for user errors, malware, or hacking that I would like a copy.
 
Oh, I forgot to mention that automatic writing/updating/deleting to two filesystems makes it so you lose the advantage of being able to restore accidental deletions from backup (you accidentally delete something and it would be gone from both places).
 
Hey @digitalpoint - can you think of any possible issues with the statistics (in daily stats) being captured here that would cause the daily cronjob to fail about 25% of days with a PHP out of memory error?

Additionally, a rebuild of statistics via cli is taking ~10-20 seconds/day with the addon enabled, versus about 0.1s with it disabled.

My forum has ~300k threads and ~10 million posts dating back to 2005, just to give an idea of scale.
 
I can't think of anything that would cause memory issues. It does run a few queries via Cloudflare API to get various daily stats, but none of them are doing anything like pulling a ton of records and summarizing them manually or anything (all the "heavy lifting" is done on Cloudflare's side), so your server is really just loading into memory pretty small bits of JSON data.

Even then it's really just daily summaries from Cloudflare (for example, how many unique visitors that day, how many total requests, etc.). As I already mentioned, it's just the total numbers coming from Cloudflare, nothing like every request or visitor for the day that we then count ourselves (all the counting is happening before your server gets the data).

How much memory is your PHP set up to allow?
 
Okay, so the API requests to Cloudflare likely explain the slowness in rebuilding the stats, at least.

PHP memory_limit is 256M. It may very well be something else in the daily stats causing the failure; the increased processing time with the add-on enabled just caused me to start going down this path as the first possibility.
 
Ya... it will definitely not be particularly fast for rebuilding stats (since it's a few API calls for each day), but it shouldn't be causing a memory issue.
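
If you do want to pin down what's actually eating the memory, a quick throwaway check (not part of the add-on; the log label below is just an example) is to log peak usage from a shutdown handler while the daily stats job runs:

// Temporarily add this near the start of whatever triggers the daily stats rebuild.
register_shutdown_function(function ()
{
    error_log(sprintf(
        'Daily stats job peak memory: %.1f MB (memory_limit: %s)',
        memory_get_peak_usage(true) / 1048576,
        ini_get('memory_limit')
    ));
});

If the logged peak stays well below 256M on normal runs, the daily stats themselves probably aren't the culprit.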
 
Can still serve from r2, but have a local copy for security (which then gets backed up by my normal schedule)

And I think rclone is a very reasonable solution. I like that. I will have to look into that.
And indeed, it's more for user errors, malware, or hacking that I would like a copy.

I think a better way to do it is to run a cron task occasionally that syncs the source with the destination (a sync only transfers new/changed/deleted items, similar to how rsync works); rclone's sync command is well suited for that.

Extending this a little further, and sharing: I use https://restic.net/ to back up both my local servers/DBs and CF R2 to https://www.backblaze.com/cloud-storage. Backblaze is very cheap, and Restic takes care of the daily/monthly/yearly backup cycles.
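For anyone wanting to replicate that: the daily/monthly/yearly cycles map to restic's forget retention flags, e.g. running restic forget --keep-daily 7 --keep-monthly 12 --keep-yearly 3 --prune after each backup (the numbers are just an example, not necessarily the setup described above).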
 