XenForo + caching = AMAZING

Not sure if this is an add-on coding issue or whether it's an OS issue
If this were an OS/server config issue, your forum would be completely dead. What do you mean by "slaves"? If you have multiple servers, you should replicate the same setup everywhere. Just curious, did you do that? If you did not, it's like trying to fit a square peg into a round hole.
 
I've fixed it for now - just changed session.serialize_handler = php in igbinary.ini
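For anyone else who hits this, the change amounts to something like the following (the file path is just where it lives on my box, so yours may differ):
Code:
; igbinary.ini (e.g. /etc/php.d/igbinary.ini)
extension=igbinary.so

; use PHP's native session serialiser instead of igbinary
session.serialize_handler = php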

I'll have a chat with Nathan on his own support site for XenSSO - I'm sure we can sort it (or I'll just leave sessions serialised via PHP - I don't think they add a massive overhead do they?).

Cheers,
Shaun :D
 
If this were an OS/server config issue, your forum would be completely dead. What do you mean by "slaves"? If you have multiple servers, you should replicate the same setup everywhere. Just curious, did you do that? If you did not, it's like trying to fit a square peg into a round hole.

Sorry Floren, this relates to the add-on - the "slaves" are the 'other sites' that Nathan's add-on manages login sessions for. All the sites are on the same server. (y)
 
I've fixed it for now
That's not fixed, it's called "half-ass baked". :giggle:
You should never upgrade a live server assuming things will work. And a developer should never release a product without proper testing. Nathan knows my "wrath of fire" style, so he won't take it badly. Neither should you... :cautious:
 
That's not fixed, it's called "half-ass baked". :giggle:

Hey, it's working - that'll do for me. :D

You should never upgrade a live server assuming things will work. And a developer should never release a product without proper testing. Nathan knows my "wrath of fire" style, so he won't take it badly. Neither should you... :cautious:

Well, that's true, but I'm not yet in a position to have the luxury of a separate testing server. One day soon, hopefully. (y)

For a five year old server it's not doing too bad and I'm currently saving up for a new one, but in the meantime I'm learning a lot and with each bit of optimisation my sites get faster and my members are happier, so it's all good. Of course, I may live to regret it if I royally screw up the server <touches wood!>.

Thanks for your input - it's good to talk to people who understand the ins and outs and who can guide based on experience (rather than guesswork!).

Cheers,
Shaun :D
 
Well, that's true, but I'm not yet in a position to have the luxury of a separate testing server. One day soon, hopefully. (y)
A home computer that you don't use anymore will do just great, or just install VirtualBox with the OS you have online. I only have 1 server online and 4 i7 development servers at home to test things. You should see how cozy it is in winter time with all of them running; I never heat the room. I do run the AC non-stop right now, though... :giggle:
 
That's not fixed, it's called "half-ass baked". :giggle:
You should never upgrade a live server assuming things will work. And a developer should never release a product without proper testing. Nathan knows my "wrath of fire" style, so he won't take it badly. Neither should you... :cautious:

Not everything can be caught with testing, especially not a session bug that only occurs randomly in specific environments :) XenSSO has been rigorously tested and is actually very stable at the moment. I agree a product should be properly tested before release, but there isn't a single piece of software (10-line scripts excluded) out there that doesn't have a bug in it, that's just the nature of it all and until we invent AI it won't change.
 
I agree a product should be properly tested before release, but there isn't a single piece of software (10-line scripts excluded) out there that doesn't have a bug in it, that's just the nature of it all and until we invent AI it won't change.
The bug Shaun is referring to, is it due to a specific server setup you have? Either way, sessions are handled in XenForo through Zend, so that should eliminate any code errors related to that. If a XenForo object reports no errors, neither should any adjacent product. At least that's how I see it, with my novice experience in XenForo. But I agree that nothing is perfect, starting with myself for being such an impossible guy. :giggle:
 
One thing I noticed in phpMemcachedAdmin is that Cache used never seems to go much above 35MB - yet the startup command line for Memcached server assigns 1GB - why is that?
 
I have Memcached memory usage set to 64MB and APC at 128MB; you can customize those values to your needs. 1GB is extreme, I agree. No idea how it's done normally in memcached, as I created my own custom rpm where I enter the settings into /etc/memcached.conf:
# Memcached Configuration Settings

# IP address to listen on
# the default value is INADDR_ANY, any network interface
HOST="127.0.0.1"

# TCP port to listen on
# the default value is 11211
TCP_PORT="11211"

# UDP port to listen on
# Can be disabled by setting it to 0.
# the default value is 11211
UDP_PORT="11211"

# Unix socket path to listen on
# Using a socket will automatically disable networking support.
# the default value is /var/run/memcached/memcached.sock
SOCKET=""

# Client binding protocol
# available options: auto, ascii, binary
# the default value is auto
PROTOCOL="auto"

# Number of threads used to process incoming requests
# Not useful to set higher than the number of server CPU cores.
# the default value is 4
THREADS="8"

# Maximum memory to use for object storage
# the default value is 64 megabytes
MAX_BYTES="64"

# Maximum simultaneous connections
# the default value is 1024
MAX_CONNECTIONS="1024"

# Maximum sequential requests
# Prevents client starvation by setting a limit to the number
# of requests the server will process from a client connection.
# the default value is 20
MAX_REQUESTS="20"

# Multiplier factor for computing the size of item memory chunks
# the default value is 1.25
CHUNK_FACTOR="1.3"

# Minimum number of bytes for an item memory chunk
# the default value is 48 bytes
CHUNK_SIZE="48"

# Default size of each slab page
# Adjusting this value changes the item size limit, increases
# the number of slabs and overall memory usage.
# Choose a value between 1 kilobyte and 128 megabytes.
# the default value is 1 megabyte
SLAB_SIZE="1m"

# Additional server options
OPTIONS="-o slab_reassign,slab_automove"
The service init script will load at startup only the values that are custom (not default):
/usr/bin/memcached -d -l 127.0.0.1 -p 11211 -U 11211 -u memcached -P /var/run/memcached/memcached.pid -t 8 -f 1.3 -o slab_reassign,slab_automove
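If you want to double-check what the running daemon actually picked up, memcached answers a plain-text "stats settings" query; nc is just one way to send it:
Code:
# show the live settings that matter here
echo "stats settings" | nc 127.0.0.1 11211 | egrep "maxbytes|num_threads|chunk_size|growth_factor"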
 
I'm a little baffled by people using memcache on single server setups, when both Kier and DigitalPoint have clearly stated that APC shows far better caching performance on a single server. Memcache is for multiple-server environments, where it functions better than APC.

Excluding MattW, who seems to be using multiple servers...
 
My understanding is that APC is a PHP code cache, whereas Memcached is a data and object cache.

I think the argument for not using it on a single server is that it doesn't offer a great deal of extra optimisation over straight disk access, whereas across two or more servers the cached data and objects then begin to pay dividends by reducing multiple IO operations.
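As a rough illustration of the data/object side (plain PHP, not actual XenForo internals - the key and data below are made up):
Code:
<?php
// Opcode caching (APC) is transparent: compiled bytecode is simply reused.
// Data/object caching is explicit: you store and fetch values yourself.

$nodeList = array('Forum 1', 'Forum 2'); // made-up data to cache

// APC user cache - shared memory on this one server
apc_store('xf_node_list', $nodeList, 3600); // keep for an hour
$fromApc = apc_fetch('xf_node_list');

// Memcached - same idea, but reached over a socket, so several
// web servers could share the same cache
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('xf_node_list', $nodeList, 3600);
$fromMemcached = $mc->get('xf_node_list');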

I must confess to only having a rudimentary understanding of these things and am driven by my geeky tendency to provide a fast experience for my members. :)

Cheers,
Shaun :D
 
My understanding is that APC is a PHP code cache, whereas Memcached is a data and object cache.

APC and Xcache can do both. I am currently using Xcache as both an opcode and data cache.
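For the data side, that's just a backend swap in XenForo's library/config.php - roughly what mine looks like, as a sketch only (your frontend options may differ):
Code:
$config['cache']['enabled'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array(
	'caching'			=> true,
	'cache_id_prefix'		=> 'xf_',
	'automatic_serialization'	=> true,
	'lifetime'			=> 0
);
// Xcache handles the opcode side on its own; this points XenForo's
// data cache at Xcache's variable cache through the Zend backend.
$config['cache']['backend'] = 'Xcache';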

I have also heard memcached is for multiple server environments.
 
I'm a little baffled by people using memcache on single server setups, when both Kier and DigitalPoint have clearly stated that APC shows far better caching performance on a single server. Memcache is for multiple-server environments, where it functions better than APC.
Don't be baffled, be amazed by better numbers and zero fragmentation when APC is combined with igbinary and Memcached. :giggle:
Honestly, Kier and Shawn are very knowledgeable, but I'm sure they do not proclaim themselves the "Internet Bible". I have personally never read a post where they mentioned that "APC shows far better caching performance" on a single server setup, because it would be totally inaccurate. With today's modern technology, the bottlenecks we used to see in the past are long gone. Plus, the fact that you obtain zero fragmentation with the APC/Igbinary/Memcached setup is already a winner and scores better than APC alone. But don't take my word for it; please do look at some numbers I just pulled... I ran a modest Siege test with 1500 concurrent users hitting the server at random intervals between 0 and 1000 milliseconds, for a period of 30 seconds.

Development Server Setup
CentOS 5.8 64-bit, Intel i7-920, 8GB RAM, RAID1 7200 RPM disks
- MariaDB 5.2.12 (Axivo Intel optimized rpm)
- Nginx 1.2.2 (Axivo Intel optimized rpm)
- PHP 5.3.15 running on php-fpm daemon (Axivo Intel optimized rpm)
- APC 3.1.11 (Axivo Intel optimized rpm)
- Igbinary 1.1.1 (Axivo Intel optimized rpm)
- Memcached 1.4.14 (Axivo Intel optimized rpm)
- Libmemcached 2.1.0 (Axivo Intel optimized rpm)

Memcached/Igbinary/APC Burst Tests
Code:
$config['cache']['enabled'] = true;
$config['cache']['cacheSessions'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array(
	'caching'			=> true,
	'cache_id_prefix'		=> 'xf_',
	'automatic_serialization'	=> true,
	'lifetime'			=> 0
);
$config['cache']['backend'] = 'Libmemcached';
$config['cache']['backendOptions'] = array(
	'servers'	=> array(
		array(
			'host'		=> '127.0.0.1',
			'port'		=> 11211,
			'weight'	=> 1
		)
	)
);
Code:
# siege -c 1500 -d 1 -t 30S -i -f /etc/siege/urls.txt
** SIEGE 2.72
** Preparing 1500 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
 
Transactions:                  83140 hits
Availability:                100.00 %
Elapsed time:                  29.19 secs
Data transferred:              15.45 MB
Response time:                  0.00 secs
Transaction rate:            2848.24 trans/sec
Throughput:                    0.53 MB/sec
Concurrency:                  13.12
Successful transactions:      86151
Failed transactions:              0
Longest transaction:            0.41
Shortest transaction:          0.00
 
# siege -c 1500 -d 1 -t 30S -i -f /etc/siege/urls.txt
** SIEGE 2.72
** Preparing 1500 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
 
Transactions:                  85345 hits
Availability:                100.00 %
Elapsed time:                  29.73 secs
Data transferred:              15.84 MB
Response time:                  0.00 secs
Transaction rate:            2870.67 trans/sec
Throughput:                    0.53 MB/sec
Concurrency:                  12.08
Successful transactions:      88373
Failed transactions:              0
Longest transaction:            3.00
Shortest transaction:          0.00

APC Only Burst Tests
Code:
$config['cache']['enabled'] = true;
$config['cache']['cacheSessions'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array(
	'caching'			=> true,
	'cache_id_prefix'		=> 'xf_',
	'automatic_serialization'	=> true,
	'lifetime'			=> 0
);
$config['cache']['backend'] = 'Apc';
Code:
# siege -c 1500 -d 1 -t 30S -i -f /etc/siege/urls.txt
** SIEGE 2.72
** Preparing 1500 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
 
Transactions:                  81100 hits
Availability:                100.00 %
Elapsed time:                  29.36 secs
Data transferred:              15.08 MB
Response time:                  0.02 secs
Transaction rate:            2762.26 trans/sec
Throughput:                    0.51 MB/sec
Concurrency:                  61.14
Successful transactions:      84098
Failed transactions:              0
Longest transaction:            9.01
Shortest transaction:          0.00
 
# siege -c 1500 -d 1 -t 30S -i -f /etc/siege/urls.txt
** SIEGE 2.72
** Preparing 1500 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
 
Transactions:                  81445 hits
Availability:                100.00 %
Elapsed time:                  29.22 secs
Data transferred:              15.16 MB
Response time:                  0.01 secs
Transaction rate:            2787.30 trans/sec
Throughput:                    0.52 MB/sec
Concurrency:                  35.46
Successful transactions:      84544
Failed transactions:              0
Longest transaction:            3.15
Shortest transaction:          0.00
A quick comparison between all 4 tests shows that the Memcached/Igbinary/APC combo is clearly the winner, with a better transaction rate, lower concurrency and a higher number of hits. The numbers speak for themselves:
Code:
        Date & Time,  Trans,  Elap Time,  Data Trans,  Resp Time,  Trans Rate,  Throughput,  Concurrent,    OKAY,  Failed
2012-08-11 09:27:43,  83140,      29.19,          15,      0.00,    2848.24,        0.51,      13.12,  86151,      0
2012-08-11 09:29:36,  85345,      29.73,          15,      0.00,    2870.67,        0.50,      12.08,  88373,      0
2012-08-11 09:36:43,  81100,      29.36,          15,      0.02,    2762.26,        0.51,      61.14,  84098,      0
2012-08-11 09:38:18,  81445,      29.22,          15,      0.01,    2787.30,        0.51,      35.46,  84544,      0
Please let me know your thoughts on the above results. :)
 
APC and Xcache can do both.
It depends a lot on what web server you use. Compared to Xcache, the results are far superior with APC if you mix it with Nginx. Apache yields better results with Xcache, though. Personally, I obtain stellar results with Nginx and APC+Igbinary+Memcached.
 
FYI, I set my memcached at 100 MB, and in an XF forum with 1.2 million posts and relatively heavy traffic, it only used 40M after a month.

So I think Floren is right on with 40MB being a decent setup...or 100M if you want plenty of overhead.
 
Yeah, I think 1GB is definitely overkill for what I'm seeing Memcached use on my server, so I've adjusted the start-up command to lower it considerably.

I've also just set the APC serialiser to igbinary - we'll see if that puts a stop to fragmentation. (y)
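For the record, the two changes boil down to something like this (my paths and numbers, so adjust to taste):
Code:
# memcached start-up: cap memory with -m (value is in megabytes)
/usr/bin/memcached -d -l 127.0.0.1 -p 11211 -u memcached -m 64
Code:
; apc.ini - hand APC's user-cache serialisation over to igbinary (APC 3.1.7+)
apc.serializer = igbinary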
 
I honestly don't see any significant changes in those results in the grand scheme of performance. There is a minimal improvement, though that leaves out the overhead caused by running all those programs on your server.

Not right or wrong... just saying I have read it from both of those individuals here; Shaun especially is someone I tend to take notice of, considering the equipment he manages himself.
 
I honestly don't see any significant changes in those results in the grand scheme of performance. There is a minimal improvement, though that leaves out the overhead caused by running all those programs on your server.

Not right or wrong... just saying I have read it from both of those individuals here; Shaun especially is someone I tend to take notice of, considering the equipment he manages himself.

+1 For DP. I recently had a server configuration query (load balancing and configuration type stuff) and popped him over a very long PM with the information, and he spent a decent amount of time on his reply, even though he didn't have to at all.
 