
XF 1.3 Database max connections reached

Discussion in 'Troubleshooting and Problems' started by caleb89taylor, Sep 24, 2014.

  1. caleb89taylor

    caleb89taylor Member

    Hi guys,
    We've been having a problem where our database suddenly gets slammed and reaches its max connection limit in the span of five minutes: it goes from ~20 connections to ~2,500. This could be the result of several things (like a DDoS), but we are trying to investigate from a couple of different angles.

    Is there anything XenForo-side that could slam the database like this in a short amount of time? We have a ton of users and large tables, so I was wondering whether cron jobs or the new deferred system might try running a bunch of queries at once (we just upgraded and are now seeing these issues).
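
    For reference, here's roughly how we're watching the connection numbers (standard MySQL status counters):

    ```sql
    -- Configured ceiling vs. what's currently open and the high-water mark
    SHOW GLOBAL VARIABLES LIKE 'max_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';
    ```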

    Thanks in advance!
  2. JulianD

    JulianD Well-Known Member

    Try to log your slow queries and see if you can start to isolate the problem from there. Good luck.
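
    If you haven't already got it on, something like this should enable it at runtime on MySQL 5.1+ (the threshold and log path are just examples):

    ```sql
    -- Enable the slow query log without restarting mysqld
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path
    ```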
  3. Mike

    Mike XenForo Developer Staff Member

    If you're running a very busy system, you may see benefits from taking out the deferred.php user trigger (it's a class on the <html> tag) and just calling it once per minute or so via a real cron task. That said, I don't think that would explicitly trigger it.

    Are the connections doing/waiting on anything in the process list?
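
    Something like this will show it (the information_schema version needs MySQL 5.1+):

    ```sql
    SHOW FULL PROCESSLIST;

    -- Or, to filter out idle connections and sort by how long each has been running:
    SELECT id, user, db, command, time, state, LEFT(info, 100) AS query
    FROM information_schema.PROCESSLIST
    WHERE command <> 'Sleep'
    ORDER BY time DESC;
    ```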
  4. caleb89taylor

    caleb89taylor Member

    Hm. Just saw the error 'The table 'xf_thread_view' is full' in the database logs. That doesn't seem good.
  5. Mike

    Mike XenForo Developer Staff Member

    That's a memory table by default so it can theoretically fill up, but it's normally cleared every hour and it only contains a single 4 byte column. That makes me think that the deferred tasks aren't actually running.

    Alternatively, that could point to a potential DDoS if it is getting filled up "legitimately".
  6. caleb89taylor

    caleb89taylor Member

    Memory table? And is there any way to verify deferred tasks are running via some log?
  7. Mike

    Mike XenForo Developer Staff Member

    Memory table: http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html It doesn't have to be MEMORY, but it's done this way to avoid hitting the disk on every thread view (same as xf_session_activity). There is a default limit of 16M.
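
    You can check the engine and current size of the table directly, e.g.:

    ```sql
    -- Engine, row count, and data/index size of the view-counter table
    SHOW TABLE STATUS LIKE 'xf_thread_view';

    -- The cap that applies to MEMORY tables
    SHOW VARIABLES LIKE 'max_heap_table_size';
    ```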

    In the current code, the act of checking whether deferred tasks need to be run would probably end up causing them to run (as they are run in the ACP and that doesn't have the customizations that normally block them on the front end). As such, you may need to look at the next run time for cron entries directly in the database; if the times for a number of them are in the past, then the deferred system likely isn't being triggered correctly.
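
    Assuming the standard schema (xf_cron_entry, with next_run stored as a Unix timestamp), something like this would show overdue entries:

    ```sql
    -- Active cron entries whose scheduled run time is already in the past
    SELECT entry_id, next_run, FROM_UNIXTIME(next_run) AS next_run_readable
    FROM xf_cron_entry
    WHERE active = 1
      AND next_run < UNIX_TIMESTAMP()
    ORDER BY next_run;
    ```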
  8. caleb89taylor

    caleb89taylor Member

    Ah thanks for the info (sorry not a mysql expert). So that makes a lot of sense. I'll check the cron jobs in the database to see if there is an issue there.

    I work at IGN and we have a very large forum, so xf_thread_view filling up with default MySQL values is a distinct possibility. From the documentation you posted, MEMORY tables are never converted to disk tables, so errors will be logged, but it shouldn't slow the database down much since we won't be hitting disk with each thread view.
  9. Mike

    Mike XenForo Developer Staff Member

    True, sort of. If the query triggers an error immediately (which it may not if it's INSERT DELAYED), it may block the thread view and cause XF to log it (to disk). If it isn't blocking the thread view and XF isn't logging it in the error log in the control panel, then it shouldn't interfere with XF itself.
  10. caleb89taylor

    caleb89taylor Member

    Yeah I'm thinking you're right about the logging part. I'm confirming it now but it looks like it was logging each time it tried and the table is full.

    Our current max_heap_table_size is 33MB, which is larger than the default but may not be enough for the number of users we have. I'm still not 100% sure a cron job didn't fail, but I'm hoping that bumping up the table size will solve this problem. Running a query to determine the size of the xf_thread_view table keeps returning 7.99MB, so I'm not sure why it isn't 33MB like it should be according to the documentation.
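
    One thing I found in the MySQL docs that may explain the 7.99MB: a MEMORY table's maximum size is calculated when the table is created, so raising max_heap_table_size only takes effect for an existing table after it's rebuilt. Roughly:

    ```sql
    -- Raise the cap (64M here is just an example value)
    SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;
    SET SESSION max_heap_table_size = 64 * 1024 * 1024;  -- the rebuild uses the session value

    -- Existing MEMORY tables keep the limit in effect when they were
    -- created; rebuilding the table applies the new one:
    ALTER TABLE xf_thread_view ENGINE = MEMORY;
    ```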
  11. caleb89taylor

    caleb89taylor Member

    The DB just crashed again. This query was shown as being run a ton of times across different connections:

    INSERT INTO xf_session_activity (...) VALUES (...)
    ON DUPLICATE KEY UPDATE
        controller_name = VALUES(controller_name),
        controller_action = VALUES(controller_action),
        view_state = VALUES(view_state),
        params = VALUES(params),
        view_date = VALUES(view_date),
        robot_key = VALUES(robot_key)
  12. Tracy Perry

    Tracy Perry Well-Known Member

    I think I remember @MattW making a post over on the Tapatalk site asking why some of their IPs were slamming the crud out of his server. I haven't checked back lately to see if they ever responded. It had to do with scraping content through the app for Tapatalk's search interface.

    Here's the thread: https://support.tapatalk.com/threads/tapatalk-bot.26316/#post-140069.
  13. caleb89taylor

    caleb89taylor Member

    Yeah we also had a bunch of normal xf_session_activity inserts as well. It wasn't just Tapatalk.
  14. rainmotorsports

    rainmotorsports Well-Known Member

    You're going to have a session activity record for every user, guest, and robot performing an action on the site within a specified time. About 1.5 hours is the oldest record I've seen on our site. That's pretty normal, though maybe trouble for a busy site. It's the thread views I'm finding interesting.
  15. caleb89taylor

    caleb89taylor Member

    We are going to raise our max_heap_table_size global variable and see if that helps.
  16. Mike

    Mike XenForo Developer Staff Member

    What's the status of those queries? Are they waiting for a table lock?

    I don't think you guys are running 1.4, so this is less significant (but still valid): you can run this query to change one of the index types to be more usable: https://xenforo.com/community/threa...ading-database-performance.82725/#post-823355

    Alternatively, flipping the table to InnoDB would allow row-level locks, though at the expense of needing to hit the disk. (The row is updated on each view.)
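
    The switch itself is a one-liner, though it rewrites the whole table, so do it during a quiet period:

    ```sql
    -- Row-level locking at the cost of a disk write on every view
    ALTER TABLE xf_thread_view ENGINE = InnoDB;
    ```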
  17. Jim Boy

    Jim Boy Well-Known Member

    Have you run "show processlist" on MySQL directly to see what the queries are? You could run "show full processlist", but the output could be very large.

    What about the performance of the DB itself, has it been tuned at all? Are you hitting an I/O bottleneck causing the queries to back up?

    Are there any other apps using the database instance that might be hogging connections?
  18. caleb89taylor

    caleb89taylor Member

    MySQL Workbench showed the status of a few queries we sampled as "update", so I don't think they were waiting on a lock, but rather on an available connection to the DB (since the initial issue was that the number of connections spiked to the maximum available). We also noticed that the xf_session_activity table doesn't appear to be a MEMORY table: it has ENGINE=InnoDB in its CREATE statement. We believe it was changed by the previous engineers who worked on this, possibly because we have too much traffic to keep xf_session_activity in memory?

    There shouldn't be any other apps hitting the DB as far as I know. The database doesn't appear to be hitting any I/O, CPU, or memory limits, mainly just the max number of connections. But they spike so quickly that it seems like it can't be the result of a normal traffic increase.
  19. Jim Boy

    Jim Boy Well-Known Member

    Being an InnoDB table, it will require writing to disk very frequently, which could be the cause. The xf_session_activity table isn't all that big and losing it isn't really an issue, so I'd put it back to MEMORY, which of course doesn't write to disk. If your hardware doesn't have enough capacity to keep it in memory, then it's probably lacking in general.
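
    Converting it back is straightforward (MEMORY tables lose their rows on a server restart, which is fine for session data):

    ```sql
    -- Session activity is disposable, so losing it on restart is acceptable
    ALTER TABLE xf_session_activity ENGINE = MEMORY;
    ```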

    Another suggestion is to use phpMyAdmin, which has good Advisor functionality: you click on Status and then Advisor. Use the latest version rather than a packaged version from your distro, and obviously make sure access is locked down, preferably by IP address. We have a site that can be busy, e.g. 5,000 page impressions a minute, and the database runs fine after I tuned it using the Advisor as a guide.