[DBTech] DragonByte Optimise [Paid] 3.4.2

The theme loads the custom XF CSS templates within 'ad_above_content'.
I don't think that's the right way to use the ad templates; isn't there an EXTRA.css template specifically for this purpose? Rather than adding new CSS templates and loading them in an odd way, I would recommend placing your extra CSS in the EXTRA.css template.

Without access to their admin or configuration, would you be able to recommend any steps to check for this?
This mod doesn't do anything other than parse the calls to css.php and transform them into hard-coded files, so I'm going to assume this is down to the odd way you're loading your CSS.


Fillip
 
My XF1 dev environment is currently not accessible, so I am not able to look into this further at the moment. I'd suggest disabling each cache type in the options and re-enabling them one by one until you find the one causing the issue.


Fillip
 
Just installed Optimise last night because XenForo has been feeling slow on my site as of late; the rest of the site flies, but not the forum area.

With slow queries now being reported in the admin panel, I can see there are a ton of 1+ second slow queries (over 2,000 in less than 12 hours). I've noticed two patterns that you may be familiar with:

1) The stack trace part of the log always reads like:
SELECT session_data
FROM xf_session
WHERE session_id = ?
AND expiry_date >= ?

Is that common, or is it telling me where the issue lies?

2) A lot of slow queries (not a majority, but many) are coming from the URL "/community/login/csrf-token-refresh".

Hope you can help.

(Screenshot attached: Server Error Logs | Admin CP - TechSpot Forums)
 
Never mind, got this fixed. Made some config changes and optimizations. According to your add-on, you saved me 825k queries in the last 24 hours alone. Does recording this stat imply any kind of resource consumption?
 
My apologies for missing your original post; I never received a notification of your reply :(

If you have a very, very active forum and ~5 queries get saved on every page load, that statistic could absolutely be accurate :)
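(825k saved queries at roughly 5 per page load works out to around 165,000 page views in 24 hours, or a little under two per second on average, which a busy forum can certainly hit.)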


Fillip
 
@DragonByte Tech When will an XF2 support update happen?
There are currently no plans to begin work on an XF2 version, as not only does XF2 implement a lot of its functionality in the core, but I also need more time to familiarise myself with XF2.

Furthermore, since DB Optimise is coded using 100% native XF1 code instead of the middleware our other XF1 mods use, an XF2 version would probably not be bundled with the XF1 download the way our other XF products are.


Fillip
 

Could you let me know this add-on's database table name? When we uninstall a legacy add-on from XF2, any database alterations will remain, so the database tables need to be removed manually.

Read more here: https://xenforo.com/community/threads/xf1-add-ons-uninstalling.138080/
 
Table: xf_dbtech_optimise_statistics

Also, data in xf_content_type and xf_content_type_field (content_type = dbtech_optimise_queries)
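
If you want to remove those by hand after uninstalling, something along these lines should cover it. This is only a sketch: the connection details are placeholders, so back up your database before running anything.

Code:
<?php
// Hypothetical cleanup after removing a legacy XF1 copy of DB Optimise.
// Connection details are placeholders - back up before running anything.
$db = new mysqli('localhost', 'db_user', 'db_password', 'xenforo_db');
$db->set_charset('utf8');

// The add-on's own table:
$db->query('DROP TABLE IF EXISTS xf_dbtech_optimise_statistics');

// Its rows in the content type tables:
$db->query("DELETE FROM xf_content_type_field WHERE content_type = 'dbtech_optimise_queries'");
$db->query("DELETE FROM xf_content_type WHERE content_type = 'dbtech_optimise_queries'");

$db->close();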


Fillip
 
Hello, I think there is a problem with the DragonByte site itself. I was trying to buy this add-on and kept getting the infamous "In order to accept POST requests originating from this domain, the admin must add the domain to the whitelist." error at checkout, and the same when trying to submit a ticket :) This all happens in Firefox; in Chrome I just can't log in at all - password/username not found.
 
That sounds like you have malware on your computer, and it's trying to initiate a cross-site scripting request. I would recommend attempting this from another computer, as this is not a problem we've had before.

EDIT: I'll be rebooting the server to apply a kernel update, so if the site appears down for a little while, that's why :)


Fillip
 
@DragonByte Tech I'm not entirely sure if it's this add-on that is causing MySQL lock wait timeouts, but if I disable Redis caching and set it to none, the lock wait timeout issues disappear for me. I also have the latest version of @Xon's Multi Prefix add-on installed.

(Screenshot attached: lock wait timeout errors with Redis caching enabled)

Error Info
Zend_Db_Statement_Mysqli_Exception: Mysqli statement execute error : Lock wait timeout exceeded; try restarting transaction - library/Zend/Db/Statement/Mysqli.php:214
Generated By: Unknown Account, 1 minute ago
Stack Trace
#0 /library/Zend/Db/Statement.php(297): Zend_Db_Statement_Mysqli->_execute(Array)
#1 /library/Zend/Db/Adapter/Abstract.php(479): Zend_Db_Statement->execute(Array)
#2 /library/XenForo/Model/DataRegistry.php(164): Zend_Db_Adapter_Abstract->query('\r\n\t\t\tINSERT INT...', Array)
#3 /library/DBTech/Optimise/XenForo/Model/DataRegistry.php(124): XenForo_Model_DataRegistry->set('PrefixesThreads...', Array)
#4 /library/PrefixForumListing/Model/PrefixListing.php(115): DBTech_Optimise_XenForo_Model_DataRegistry->set('PrefixesThreads...', Array)
#5 /library/PrefixForumListing/Extend/ControllerPublic/Forum.php(159): PrefixForumListing_Model_PrefixListing->updateThreadCountCache(34, Array)
#6 /library/Andy/NewContentLimit/ControllerPublic/Forum.php(8): PrefixForumListing_Extend_ControllerPublic_Forum->actionForum()
#7 /library/CTA/FeaturedThreads/ControllerPublic/Forum.php(215): Andy_NewContentLimit_ControllerPublic_Forum->actionForum()
#8 /library/DailyStats/ControllerPublic/Forum.php(53): CTA_FeaturedThreads_ControllerPublic_Forum->actionForum()
#9 /library/SV/MultiPrefix/XenForo/ControllerPublic/Forum.php(20): DailyStats_ControllerPublic_Forum->actionForum()
#10 /library/XenForo/FrontController.php(369): SV_MultiPrefix_XenForo_ControllerPublic_Forum->actionForum()
#11 /library/XenForo/FrontController.php(152): XenForo_FrontController->dispatch(Object(XenForo_RouteMatch))
#12 /index.php(13): XenForo_FrontController->run()
#13 {main}
Request State
array(3) {
["url"] => string(77) "https://domain.com/forums/forum-software-usage.34/?prefix_id=14"
["_GET"] => array(2) {
["/forums/forum-software-usage_34/"] => string(0) ""
["prefix_id"] => string(2) "14"
}
["_POST"] => array(0) {
}
}
ErrorException: Fatal Error: Maximum execution time of 120 seconds exceeded - library/DBTech/Optimise/XenForo/Model/Thread.php:114
Generated By: Unknown Account, 14 minutes ago
Stack Trace
#0 [internal function]: XenForo_Application::handleFatalError()
#1 {main}
Request State
array(3) {
["url"] => string(45) "https://domain.com/deferred.php"
["_GET"] => array(0) {
}
["_POST"] => array(0) {
}
}
Error Info
ErrorException: Fatal Error: Maximum execution time of 120 seconds exceeded - library/DBTech/Optimise/Operator/Redis.php:46
Generated By: Unknown Account, Today at 6:12 AM
Stack Trace
#0 [internal function]: XenForo_Application::handleFatalError()
#1 {main}
Request State
array(3) {
["url"] => string(45) "https://domain.com/deferred.php"
["_GET"] => array(0) {
}
["_POST"] => array(0) {
}
}
Error Info
ErrorException: Fatal Error: Maximum execution time of 120 seconds exceeded - library/Zend/Http/Client.php:1177
Generated By: Unknown Account, 6 minutes ago
Stack Trace
#0 [internal function]: XenForo_Application::handleFatalError()
#1 {main}
Request State
array(3) {
["url"] => string(45) "https://domain.com/deferred.php"
["_GET"] => array(0) {
}
["_POST"] => array(0) {
}
}

Code:
mysqladmin proc
+-------+----------------+-----------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+----------+
| Id    | User           | Host      | db   | Command | Time | State  | Info                                                                                                 | Progress |
+-------+----------------+-----------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+----------+
| 24607 | username        | localhost | DATABASENAME | Sleep   | 0    |        |                                                                                                      | 0.000    |
| 24809 | username        | localhost | DATABASENAME | Execute | 12   | update | INSERT INTO xf_data_registry
                                (data_key, data_value)
                        VALUES
                                (?, ?)
                        ON DUPLICATE KEY | 0.000    |
| 24824 | username        | localhost | DATABASENAME | Execute | 4    | update | INSERT INTO `xf_attachment` (`data_id`, `temp_hash`, `unassociated`, `content_type`, `content_id`, ` | 0.000    |
| 24835 | root           | localhost |      | Query   | 0    |        | show processlist                                                                                     | 0.000    |
+-------+----------------+-----------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+----------+
Code:
mysql -e "show engine innodb status \G;"

------------
TRANSACTIONS
------------
Trx id counter 25548584
Purge done for trx's n:o < 255482BB undo n:o < 0
History list length 2167
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 24754, OS thread handle 0x7f5b3dfb5700, query id 3401475 localhost root
show engine innodb status
---TRANSACTION 25548330, ACTIVE 51 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 18 lock struct(s), heap size 3112, 7 row lock(s), undo log entries 8
MySQL thread id 24642, OS thread handle 0x7f5b42302700, query id 3243633 localhost username update
INSERT INTO xf_data_registry
                                (data_key, data_value)
                        VALUES
                                (?, ?)
                        ON DUPLICATE KEY UPDATE
                                data_value = VALUES(data_value)
Trx read view will not see trx with id >= 25548331, sees < 25548277
------- TRX HAS BEEN WAITING 51 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 41 page no 431 n bits 80 index `PRIMARY` of table `DATABASENAME`.`xf_data_registry` trx id 25548330 lock_mode X locks rec but not gap waiting
------------------
---TRANSACTION 25548277, ACTIVE 65 sec
17 lock struct(s), heap size 3112, 194 row lock(s), undo log entries 37
MySQL thread id 24607, OS thread handle 0x7f5b42394700, query id 3401473 localhost username
Trx read view will not see trx with id >= 25548278, sees < 25548278
----------------------------
END OF INNODB MONITOR OUTPUT
============================

Code:
php -v
PHP 7.1.15 (cli) (built: Mar  2 2018 00:58:19) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2018 Zend Technologies
    with Zend OPcache v7.1.15, Copyright (c) 1999-2018, by Zend Technologies

Code:
php --ri redis

redis

Redis Support => enabled
Redis Version => 3.1.6
Available serializers => php
 
I think it is PrefixForumListing causing the inserts which are locking. Something is probably wrapping it in a larger than expected transaction.
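
If it happens again, one way to see which transaction is actually holding the lock is to query the InnoDB information_schema tables. Rough sketch only: it assumes MySQL 5.x / MariaDB (where those tables exist) and the connection details are placeholders.

Code:
<?php
// Diagnostic sketch: list waiting vs blocking InnoDB transactions.
// Connection details are placeholders.
$db = new mysqli('localhost', 'db_user', 'db_password', 'xenforo_db');

$result = $db->query("
    SELECT  r.trx_mysql_thread_id AS waiting_thread,
            r.trx_query           AS waiting_query,
            b.trx_mysql_thread_id AS blocking_thread,
            b.trx_started         AS blocking_started,
            b.trx_query           AS blocking_query
    FROM information_schema.innodb_lock_waits w
    JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
");

foreach ($result as $row) {
    printf(
        "Thread %s waits on thread %s (open since %s): %s\n",
        $row['waiting_thread'],
        $row['blocking_thread'],
        $row['blocking_started'],
        $row['blocking_query'] ?: '(idle in transaction, no current query)'
    );
}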
 
Cheers @Xon, I enabled DB Optimise slow query logging too and hit these two on creating a new thread:

Code:
XenForo_Exception: Slow Query: 96.4741 seconds, /forums/nginx-and-php-fpm-news-discussions.18/ - library/DBTech/Optimise/XenForo/EventListener/FrontController.php:221
Generated By: bassie, 23 minutes ago
Stack Trace
      INSERT INTO xf_data_registry
        (data_key, data_value)
      VALUES
        (?, ?)
      ON DUPLICATE KEY UPDATE
        data_value = VALUES(data_value)
  
Request State
array(3) {
  ["url"] => string(78) "https://domain.com/forums/nginx-and-php-fpm-news-discussions.18/"
  ["_GET"] => array(1) {
    ["/forums/nginx-and-php-fpm-news-discussions_18/"] => string(0) ""
  }
  ["_POST"] => array(0) {
  }
}
Code:
XenForo_Exception: Slow Query: 68.5029 seconds, /forums/nginx-and-php-fpm-news-discussions.18/add-thread - library/DBTech/Optimise/XenForo/EventListener/FrontController.php:221
Generated By: bassie, 23 minutes ago
Stack Trace
            insert xf_sv_user_lastpost (user_id, last_post_date)
                select ?, ?
            on duplicate key update
                last_post_date = values(last_post_date)
      
Request State
array(3) {
  ["url"] => string(88) "https://domain.com/forums/nginx-and-php-fpm-news-discussions.18/add-thread"
  ["_GET"] => array(1) {
    ["/forums/nginx-and-php-fpm-news-discussions_18/add-thread"] => string(0) ""
  }
  ["_POST"] => array(14) {
    ["prefix_id"] => array(1) {
      [0] => string(1) "5"
    }
    ["title"] => string(36) "Optimized Nginx by Google developers"
    ["message_html"] => string(598) "<p>The repo below is webserver Nginx maintained by Google developers.</p><p>In short Nginx with enhancement patches.</p><p><br></p><p>The configure file is located in the auto folder.</p><p><br></p><p>https://nginx.googlesource.com/nginx/</p><p><br></p><p>[QUOTE]This repository is currently maintained by Google developers.</p><p>Any code changes should be submitted to upstream <a href="https://nginx.org/en/docs/contributing_changes.html" target="_blank" class="externalLink ProxyLink" data-proxy-href="https://nginx.org/en/docs/contributing_changes.html" rel="nofollow">NGINX</a>.[/QUOTE]</p>
"
    ["_xfRelativeResolver"] => string(91) "https://domain.com/forums/nginx-and-php-fpm-news-discussions.18/create-thread"
    ["tags"] => string(5) "nginx"
    ["attachment_hash"] => string(32) "807d9ff08fe6915e8428233e757bd372"
    ["watch_thread"] => string(1) "1"
    ["watch_thread_email"] => string(1) "1"
    ["watch_thread_state"] => string(1) "1"
    ["poll"] => array(5) {
      ["question"] => string(0) ""
      ["responses"] => array(2) {
        [0] => string(0) ""
        [1] => string(0) ""
      }
      ["max_votes_type"] => string(6) "single"
      ["change_vote"] => string(1) "1"
      ["view_results_unvoted"] => string(1) "1"
    }
    ["_xfToken"] => string(8) "********"
    ["_xfRequestUri"] => string(59) "/forums/nginx-and-php-fpm-news-discussions.18/create-thread"
    ["_xfNoRedirect"] => string(1) "1"
    ["_xfResponseType"] => string(4) "json"
  }
}

Code:
Zend_Db_Statement_Mysqli_Exception: Mysqli statement execute error : Lock wait timeout exceeded; try restarting transaction - library/Zend/Db/Statement/Mysqli.php:214
Generated By: bassie, 34 minutes ago
Stack Trace
#0 /library/Zend/Db/Statement.php(317): Zend_Db_Statement_Mysqli->_execute(Array)
#1 /library/Zend/Db/Adapter/Abstract.php(479): Zend_Db_Statement->execute(Array)
#2 /library/XenForo/Model/DataRegistry.php(164): Zend_Db_Adapter_Abstract->query('\r\n\t\t\tINSERT INT...', Array)
#3 /library/DBTech/Optimise/XenForo/Model/DataRegistry.php(124): XenForo_Model_DataRegistry->set('PrefixesThreads...', Array)
#4 /library/PrefixForumListing/Extend/DataWriter/Discussion/Thread.php(44): DBTech_Optimise_XenForo_Model_DataRegistry->set('PrefixesThreads...', Array)
#5 /library/CTA/FeaturedThreads/DataWriter/Discussion/Thread.php(42): PrefixForumListing_Extend_DataWriter_Discussion_Thread->_discussionPostSave()
#6 /library/SV/MultiPrefix/XenForo/DataWriter/Discussion/Thread.php(66): CTA_FeaturedThreads_DataWriter_Discussion_Thread->_discussionPostSave()
#7 /library/XenForo/DataWriter/Discussion.php(467): SV_MultiPrefix_XenForo_DataWriter_Discussion_Thread->_discussionPostSave()
#8 /library/XenForo/DataWriter.php(1423): XenForo_DataWriter_Discussion->_postSave()
#9 /library/SV/DeadlockAvoidance/XenForo/DataWriter/Discussion/Thread.php(11): XenForo_DataWriter->save()
#10 /library/XenForo/ControllerPublic/Forum.php(830): SV_DeadlockAvoidance_XenForo_DataWriter_Discussion_Thread->save()
#11 /library/STO/XenForo/ControllerPublic/Forum.php(22): XenForo_ControllerPublic_Forum->actionAddThread()
#12 /library/LiamW/PostMacros/Extend/ControllerPublic/Forum.php(23): STO_XenForo_ControllerPublic_Forum->actionAddThread()
#13 /library/SV/MultiPrefix/XenForo/ControllerPublic/Forum.php(15): LiamW_PostMacros_Extend_ControllerPublic_Forum->actionAddThread()
#14 /library/XenForo/FrontController.php(369): SV_MultiPrefix_XenForo_ControllerPublic_Forum->actionAddThread()
#15 /library/XenForo/FrontController.php(152): XenForo_FrontController->dispatch(Object(XenForo_RouteMatch))
#16 /index.php(13): XenForo_FrontController->run()
#17 {main}
Request State
array(3) {
  ["url"] => string(88) "https://domain.com/forums/nginx-and-php-fpm-news-discussions.18/add-thread"
  ["_GET"] => array(1) {
    ["/forums/nginx-and-php-fpm-news-discussions_18/add-thread"] => string(0) ""
  }
  ["_POST"] => array(14) {
    ["prefix_id"] => array(1) {
      [0] => string(1) "5"
    }
    ["title"] => string(36) "Optimized Nginx by Google developers"
    ["message_html"] => string(487) "<p>The repo below is webserver Nginx maintained by Google developers.<br>In short Nginx with enhancement patches/code.</p><p><br></p><p>The configure file is located in the auto folder.</p><p><br></p><p>https://nginx.googlesource.com/nginx/<br></p><p><br></p><p>[QUOTE]This repository is currently maintained by Google developers.</p><p>Any code changes should be submitted to upstream <a href="https://nginx.org/en/docs/contributing_changes.html">NGINX</a>.[/QUOTE]</p><p></p>"
    ["_xfRelativeResolver"] => string(91) "https://domain.com/forums/nginx-and-php-fpm-news-discussions.18/create-thread"
    ["tags"] => string(5) "nginx"
    ["attachment_hash"] => string(32) "807d9ff08fe6915e8428233e757bd372"
    ["watch_thread"] => string(1) "1"
    ["watch_thread_email"] => string(1) "1"
    ["watch_thread_state"] => string(1) "1"
    ["poll"] => array(5) {
      ["question"] => string(0) ""
      ["responses"] => array(2) {
        [0] => string(0) ""
        [1] => string(0) ""
      }
      ["max_votes_type"] => string(6) "single"
      ["change_vote"] => string(1) "1"
      ["view_results_unvoted"] => string(1) "1"
    }
    ["_xfToken"] => string(8) "********"
    ["_xfRequestUri"] => string(59) "/forums/nginx-and-php-fpm-news-discussions.18/create-thread"
    ["_xfNoRedirect"] => string(1) "1"
    ["_xfResponseType"] => string(4) "json"
  }
}
 
Seems the locks disappeared for the 3+ days I had Redis caching disabled in DB Optimise, with no errors at all. I then re-enabled Redis caching and haven't seen the issue since. From the XF monthly stats, this month was the busiest in terms of DB Optimise queries saved, so maybe it was due to traffic?

(Screenshot attached: DB Optimise monthly statistics)

But after turning Redis caching back on it has been busier, yet there have been no locks or server-logged errors since.

(Screenshot attached: DB Optimise weekly statistics)
 
Bug/issue (this also exists in your VB product).

When running the cache test, or flushing the cache, the product flushes the ENTIRE cache, not just its own cache. I have several software packages using Redis on my servers, not just DB Optimise, and it ends up slowing down everything because some of this software might have several days' worth of data wiped out of the cache. This issue exists with both Redis and Memcache; I haven't tried the others.
 
That is not a bug.

First of all, it's important for you to realise that there is no such thing as "its own cache". The only way you can create a private cache for your forum that doesn't interfere with anything outside of your forum is if you are using a daemon-based cache like Redis or Memcache and you run two separate instances.
Obviously DB Optimise cannot access or flush caches that are not stored on the shard it's connected to, so if you need separate caches for your forum and for other software then this is your only option.

Secondly, the goal of the flush test (or the flush action) is to flush the entire cache (bearing in mind that there is no such thing as "its own cache"). It would make absolutely no sense whatsoever for it to function any other way.
If the test didn't flush the cache, then it would not be a valid test of whether clicking "Flush cache" actually flushes the cache. It is also impossible for the software to determine which part of the cache you want to flush, so the "Flush cache" link can't delete only certain items, and it would not be very user friendly to ask the user for the cache keys to delete.

If you disagree with either of these assessments, then please feel free to link me to the PHP API documentation for how to connect to private caches for Redis, XCache, Memcache, APCu and WinCache, and I will implement the functionality in the product as soon as possible :)


Fillip
 
That's why I put bug/issue at the top, not necessarily a bug. :)

Redis allows assigning a database number (there are 16 databases by default, with database 0 being the default). It's one of the reasons (besides persistence to disk) I use it over Memcache.

With Redis, your program is doing a flushall, which flushes all databases, rather than a flushdb. The PHP Redis extension has a select function in the Redis class which allows selecting a database number (otherwise the default, 0, is used). DB Optimise's code does not use select to choose a database.

I am requesting the following (see the sketch below):
1. Add database selection to the configuration file options.
2. Use flushdb to flush that database, or alternatively use SCAN to find keys matching the key prefix set in the AdminCP and delete those keys.
Or simply leave the code as-is and do a flushdb instead of a flushall (I can change the other app's database to something other than 0).
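
A rough sketch of what I mean, using the phpredis extension; the database number and key prefix below are placeholders for whatever ends up in the config file / AdminCP options:

Code:
<?php
// Sketch only - database number and key prefix are placeholders.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// 1. Select the configured database instead of always landing in database 0:
$redis->select(2);

// 2a. Flush only that database rather than every database on the server:
$redis->flushDB();   // instead of $redis->flushAll();

// 2b. Or delete only the keys matching the AdminCP key prefix:
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);
$iterator = null;
while ($keys = $redis->scan($iterator, 'xf_*', 1000)) {
    $redis->del($keys);
}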

I have a client with a large Magento Redis cache on my server, and sometimes as much as 2GB of cache is getting wiped out. Mod_pagespeed is using Redis too, but it's not as impacted.

Memcache has no simple way of doing it, other than loading the names of all keys and deleting those with a matching prefix. I don't know much about the other caching types; I've never used them. Currently, I only use Redis due to persistence and multiple databases.

I understand this is probably low priority, and not an end-of-the-world situation for me, but it would be a nice change :)
 