XF 1.3 Database max connections reached

Hi guys,
We've been having a problem where our database suddenly gets slammed and reaches its max connection limit in the span of 5 minutes. It goes from ~20 connections to ~2,500 in minutes. This could be the result of several things (like a DDoS), but we are trying to investigate from a couple of different angles.

Is there anything XenForo-wise that could slam the database like this in a short amount of time? We have a ton of users and large tables, so I was wondering if cron jobs or the new deferred system might try running a bunch of queries at once (we just upgraded and are now seeing these issues).

Thanks in advance!
Caleb
 
Upping max_heap_table_size to 500MB seems to have fixed it. It turns out the table's data can only grow to about one third of that variable's value: setting it to 100MB meant the table topped out around ~33MB, with the other two thirds going to the index and some overhead. We also found a bunch of queries trying to log session data for empty user IDs, which we've disabled for now. Either way, we have stabilized and are looking much better on the DB side.
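As a back-of-envelope illustration of that ratio (the one-third figure is just what we observed on our install, not a documented MySQL guarantee), here's a quick sketch:

```python
# Rough estimate of usable in-memory table size, based on the observed
# behaviour above: row data topped out at ~1/3 of max_heap_table_size,
# with the rest going to the index and overhead. The 1/3 ratio is an
# observation from this thread, not a documented MySQL constant.

MB = 1024 * 1024

def usable_table_bytes(max_heap_table_size: int, data_fraction: float = 1 / 3) -> int:
    """Estimate how much actual row data fits under the given cap."""
    return int(max_heap_table_size * data_fraction)

print(usable_table_bytes(100 * MB) // MB)  # 100MB cap -> ~33MB of data
print(usable_table_bytes(500 * MB) // MB)  # 500MB cap -> ~166MB of data
```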
 
If you're running a very busy system, you may see benefits from taking out the deferred.php user trigger (it's a class on the <html> tag) and just calling it once per minute or so via a real cron task.
Is it on the roadmap to have that as a setting within /admin.php?options/, for those of us who have access to real cron and don't want the template change overwritten during upgrades?
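For anyone wanting to try that, the idea is to remove the deferred.php trigger from the page output and hit it from system cron instead. A sketch of the crontab entry (the forum URL is a placeholder; adjust the schedule to taste):

```
# Run XenForo's deferred tasks once a minute via real cron instead of
# relying on visitor page loads. Replace example.com with your forum URL.
* * * * * wget -q -O /dev/null https://example.com/deferred.php
```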
 
ElastiCache works fine for us.

I assume you are using RDS? What DB instance class? What DB parameter modifications do you have in place?

Also, are you using provisioned IOPS? If so, how many, and have you checked the CloudWatch stats for the service?
 
Sounds like you are on an m3.2xlarge, so you should have at least 4,000 dedicated IOPS. The following parameter group changes work for us (we are on an m3.xlarge):
binlog_cache_size 65536
innodb_lock_wait_timeout 90
max_allowed_packet 67107840
max_heap_table_size 67108864 (you could probably double this)
query_cache_limit 12582912
query_cache_size 134217728 (you can double this)
sort_buffer_size 8388608 (you should double this)
table_open_cache 600
tmp_table_size 67108864 (maybe double this)
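If you manage these through a custom RDS parameter group, a change can be pushed with the AWS CLI along these lines (the group name here is a placeholder):

```
# Sketch: set max_heap_table_size in a custom RDS parameter group.
# "my-db-params" is a placeholder for your parameter group's name.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-db-params \
    --parameters "ParameterName=max_heap_table_size,ParameterValue=67108864,ApplyMethod=immediate"
```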
 
We do have memcached enabled via ElastiCache in AWS. Perhaps it's not properly storing things if it's still trying to write to the DB?
I took that to mean records in xf_session were still hitting the DB. xf_session_activity does still hit the DB even with sessions being cached. (If xf_session is hitting the DB, make sure you have the cache specified in config.php and that cacheSessions is enabled.)
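For reference, a minimal sketch of the relevant XenForo 1.x config.php settings for a Memcached-compatible backend like ElastiCache (the endpoint below is a placeholder; double-check the option names against your XF version's docs):

```php
<?php
// Sketch only -- the host below is a placeholder for your ElastiCache node.
$config['cache']['enabled'] = true;
$config['cache']['cacheSessions'] = true; // keep xf_session out of MySQL
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array('cache_id_prefix' => 'xf_');
$config['cache']['backend'] = 'Memcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers' => array(
        array(
            'host' => 'your-cluster.cache.amazonaws.com', // placeholder
            'port' => 11211,
        ),
    ),
);
```

Even with this in place, xf_session_activity still writes to the database, as noted above.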
 
Ah okay. It seemed to be primarily xf_session_activity hitting the DB. It's been looking pretty good lately so I'm assuming things are okay.


I'll PM you with some info.
 