90 million posts - I'll take the suggestion of --force - it will highlight the tables that failed, which I can recreate from the schema
And "--opt" - it doesn't do what you think it does, and it's enabled by default anyway, so you never have to specify it
For starters the command should be more like mysqldump -h 127.0.0.1 -u root --password=xxxxxx --add-drop-table --create-options --disable-keys --extended-insert --quick --set-charset --default-character-set=utf8mb4 --single-transaction --skip-lock-tables --hex-blob .........
But this isn't really...
"Volatile" tables allegedly include:
xf_thread_view
xf_thread_view_daily
xf_search_index
xf_search_log
xf_session
xf_session_activity
xf_session_admin
xf_image_proxy
xf_link_proxy
It's alleged that XenForo will "recreate" them. However, I might back them up without data and simply recreate them
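Putting the two ideas together, a rough sketch of what I mean - skip the volatile tables in the data dump, then take a schema-only dump of just those tables so they can be recreated empty. The database name "forum" and the credentials are placeholders, and the commands are echoed (dry run) so you can check the flags before removing the echo:

```shell
#!/bin/sh
# Sketch: exclude the volatile tables from the main dump, schema-only dump them
# separately. "forum" is a placeholder database name.
DB=forum
VOLATILE="xf_thread_view xf_thread_view_daily xf_search_index xf_search_log \
xf_session xf_session_activity xf_session_admin xf_image_proxy xf_link_proxy"

# Build one --ignore-table flag per volatile table
IGNORE=""
for t in $VOLATILE; do
    IGNORE="$IGNORE --ignore-table=$DB.$t"
done

# Main data dump, volatile tables skipped (echoed as a dry run)
echo mysqldump -h 127.0.0.1 -u root -p --single-transaction --skip-lock-tables \
    --quick --hex-blob --default-character-set=utf8mb4 $IGNORE "$DB"

# Schema-only dump of the volatile tables so they can be recreated empty
echo mysqldump -h 127.0.0.1 -u root -p --no-data "$DB" $VOLATILE
```

Restoring the schema-only file after the main restore gives you the empty volatile tables back without ever dumping their (constantly changing) rows.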
Hey, usually when I try to do a mysqldump on my very large XF (2.2.15) database I get errors on certain tables, causing the dump to fail. The errors are usually DDL being changed on tables like xf_thread_view. Asking Copilot, its response is that I should ignore that table and nine others in the dump...
+1 for this - it would make maintenance a lot easier, as I sometimes script my add-on upgrades; this would make it super easy to generate the script - currently a cumbersome process and prone to error
Yeah for sure, R2 would be your best bet. The problem for you with S3 is that unless you are hosting your site on AWS, you will end up paying a lot for egress traffic, as your internal data won't be cached between storage and web servers.
Well I don't know what you have done - I just did a bit of testing and list-objects is honoured by R2, as is list-objects-v2, so no need to update the AWS SDK. I take it the endpoint above has been modified?
This isn't strictly true, they are different for a reason. list-type is valid only on listObjectsV2, and the fact that it doesn't recognise it suggests that R2 isn't quite right - as that is the thing that really differentiates the two APIs, see...
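For what it's worth, the difference shows up right in the request: V2 is selected by a list-type=2 query parameter that V1 simply doesn't have. A hypothetical little helper, just to illustrate the wire-level difference (endpoint and bucket names below are made up):

```shell
# Illustration only: the query string each ListObjects variant sends.
# Usage: list_url <endpoint> <bucket> <1|2>
list_url() {
    if [ "$3" = "2" ]; then
        # ListObjectsV2 is selected purely by the list-type=2 parameter
        echo "$1/$2?list-type=2&max-keys=100"
    else
        # The original ListObjects has no list-type parameter at all
        echo "$1/$2?max-keys=100"
    fi
}

list_url "https://accountid.r2.cloudflarestorage.com" "my-bucket" 2
```

So an endpoint that rejects list-type is, in effect, answering V2 requests with V1 semantics.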
This isn't something that XenForo should fix, at least not yet. The issue lies with the underlying third-party Flysystem library. A PR exists to fix this issue (https://github.com/thephpleague/flysystem-aws-s3-v3/pull/298) and when that gets merged and incorporated, then yeah, XF should update...
t4g shouldn't affect it - that only reduces your available CPU if you go too heavy. Is this external or internal data? It sounds like a caching issue either way. If you're using CloudFront to serve externally, you can invalidate the item, but I would switch (and already have) to using Cloudflare. As for...
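If you do stay on CloudFront, invalidating a single object is a one-liner with the AWS CLI - the distribution ID and path here are placeholders, substitute your own:

```shell
# Placeholders: use your own distribution ID and the path of the stale object
aws cloudfront create-invalidation \
    --distribution-id E1ABCDEXAMPLE \
    --paths "/data/avatars/example.jpg"
```

Note that invalidations beyond the monthly free allowance are billed per path, so wildcards are often cheaper than many individual paths.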
Couple of ways -
1. Have separate buckets for external and internal data and block all public access on the internal bucket - internal and external data locations are defined separately already in config.php
2. Use the same bucket, but use a prefix, such as 'internal_data', in your definition...
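A sketch of what option 2 looks like in config.php, assuming the standard XF Flysystem S3 setup from the official guide - the bucket name, credentials, region and endpoint below are all placeholders:

```php
<?php
// Sketch only: shared bucket, with internal data under its own prefix.
$s3 = function()
{
    return new \Aws\S3\S3Client([
        'credentials' => ['key' => 'YOUR_KEY', 'secret' => 'YOUR_SECRET'],
        'region' => 'us-east-1',
        'version' => 'latest',
        'endpoint' => 'https://s3.us-east-1.amazonaws.com'
    ]);
};

// External data: publicly readable, under a 'data' prefix
$config['fsAdapters']['data'] = function() use ($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'my-bucket', 'data');
};

// Internal data: same bucket, 'internal_data' prefix, which you keep
// blocked from public access via the bucket policy
$config['fsAdapters']['internal-data'] = function() use ($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'my-bucket', 'internal_data');
};
```

The prefix is just the third argument to the adapter, so the two locations stay cleanly separated even though they share a bucket.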
Not stupid - by going via CloudFront you can use OAI to keep items in the bucket private and not run it in 'website' mode. Is CloudFront redundant? Technically yes. It may keep only a single copy of your file (which incidentally it stores unencrypted on disk), but if that is lost, it just grabs...