Redis cache using a socket

nodle

Well-known member
My host uses a socket for both Memcache and Redis; apparently it's faster than using an IP. My problem is that I can't seem to configure it properly. The Redis extension is enabled in my PHP settings and is active. This is the default setting for Redis according to the manual:
$config['cache']['enabled'] = true;
$config['cache']['provider'] = 'Redis';
$config['cache']['config'] = [
    'host' => '127.0.0.1',
    'password' => 'password'
];
I have tried pasting the socket path in place of the host, then removing the password since I don't have one, along with changing 'host' to 'database' and adjusting the port. It still will not work. Any ideas, or has anyone used a socket with Redis? Thanks!
 
I may have gotten it to work using the config below; posting it in case others run into the same problem. If someone could tell me whether this looks OK, that would be helpful:
$config['cache']['enabled'] = true;
$config['cache']['provider'] = 'Redis';
$config['cache']['config'] = [
    'host' => '/home/pathtoyoursocket.sock',
    'port' => 0,
];

Also, is there a way to tell if Redis is actually working?
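To answer that last question: a quick way to check is `redis-cli` pointed at the socket. A sketch, assuming the placeholder socket path from the config above and that `redis-cli` is installed:

```shell
# Placeholder socket path from the config above - adjust to your host's path
SOCK=/home/pathtoyoursocket.sock

if command -v redis-cli >/dev/null 2>&1 && [ -S "$SOCK" ]; then
  # PING replies PONG if Redis is up and reachable over the socket
  redis-cli -s "$SOCK" ping
  # Rising keyspace_hits/keyspace_misses counters show the cache is in use
  redis-cli -s "$SOCK" info stats | grep keyspace
else
  echo "redis-cli or socket not available"
fi
```

If the hit/miss counters climb while you browse the forum, XenForo is actually talking to Redis.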
 
You have to use something like this:

Code:
'server' => 'unix:///var/run/redis.sock'

You need to make sure the web server/PHP user has permission to read and write the socket file, or it will not work.
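The relevant redis.conf directives look like this. A sketch only: the socket path, the 770 mode, and the `redis`/`www-data` user and group names are assumptions that vary by distro:

```
# /etc/redis/redis.conf
unixsocket /var/run/redis.sock
unixsocketperm 770
```

Then add the PHP/web user to Redis's group (e.g. `usermod -aG redis www-data`) and restart both services so the socket is recreated with the new permissions.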

Xon also has an example here for the socket setup: https://xenforo.com/community/resources/redis-cache-by-xon.5562/field?field=faq
My Redis socket path is /server/redis/redis.sock.
If I set 'server' => '/server/redis/redis.sock' or 'server' => 'unix:///server/redis/redis.sock', the site does not load.

I am not using the plugin, and it seems the 'server' option is not recognized.

The only way to make it work is with:
Code:
$config['cache']['enabled'] = true;
$config['cache']['provider'] = 'Redis';
$config['cache']['config'] = [
    'host' => '/home/pathtoyoursocket.sock',
    'port' => 0,
];
 
I tried running it through a socket for 3 "big boards" on a single host, and there really wasn't a difference in performance worth noting. There was a larger improvement going from Redis to KeyDB (which is Redis-compatible).
 
Host-local Redis (i.e. 127.0.0.1) should generally be about as fast as a socket (though sockets are technically superior, yes).

What is really important, however, is to have Redis colocated with XF. Even on a low-latency private network (think ~1 ms latency to Redis), the difference between local and remote is significant under load.

If that's impossible, it's 100% worth it to colocate a Redis replica node and have only writes go to the remote Redis node, at least in our experience.
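That replica setup can be sketched in redis.conf on the app server. The master address below is a placeholder; `replicaof` is the modern spelling of the older `slaveof` directive:

```
# redis.conf on the local replica colocated with XenForo
# (10.0.0.5:6379 is a placeholder for the remote master)
replicaof 10.0.0.5 6379
replica-read-only yes
```

Reads are then served locally at memory-access latency, while writes go over the network to the master and replicate back.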
 
I tried running it through a socket for 3 "big boards" on a single host, and there really wasn't a difference in performance worth noting. There was a larger improvement going from Redis to KeyDB (which is Redis-compatible).
How many CPU cores/threads did the server have? How are you measuring the improvement? In terms of latency, ops/sec, or XenForo performance?

I just tested Redis vs KeyDB vs Dragonfly, all three Redis-compatible, with the memtier benchmark at a 1:15 SET:GET ratio, and got very interesting results :)

Tested on an Intel Xeon E-2276G 6C/12T, 32GB RAM, 2x 960GB NVMe in software RAID 1, AlmaLinux 8, Centmin Mod LEMP stack server :)

Ops/sec

[chart: ops/sec for Redis vs KeyDB vs Dragonfly]

Average latency

[chart: average latency for Redis vs KeyDB vs Dragonfly]

99th percentile latency

[chart: 99th percentile latency for Redis vs KeyDB vs Dragonfly]
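For anyone wanting to reproduce this, the memtier invocation would look roughly like the following. A sketch only: the 1:15 SET:GET ratio comes from the post above, but the thread/client/data-size values are arbitrary assumptions, and the server is assumed to be on the default local port:

```shell
# 1:15 SET:GET ratio as in the post above; other values are example placeholders
if command -v memtier_benchmark >/dev/null 2>&1; then
  memtier_benchmark -s 127.0.0.1 -p 6379 \
    --ratio=1:15 --threads=4 --clients=50 \
    --data-size=100 --hide-histogram
else
  echo "memtier_benchmark not installed"
fi
```

The tool prints ops/sec, average latency, and percentile latency tables per run, which is where the three charts above come from.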
 
I've been keeping my eye out for Dragonfly, but it is disappointing to see such poor 99th-percentile latency statistics. It is still a fairly young project, though, and the operational improvements are something Redis badly needs.

There are some outstanding bug reports around latency, so maybe this chart will change soon.
 
I don't have the specific details any longer; it was at least two years ago, on a 24-core server. I looked at latency, since it is something Googlebot scores against.

I've been keeping a watchful eye on Skytable's progress. It looks promising under high concurrent loads, blowing everything else out of the water.
 
How many CPU cores/threads did the server have? How are you measuring the improvement? In terms of latency, ops/sec, or XenForo performance?

I just tested Redis vs KeyDB vs Dragonfly, all three Redis-compatible, with the memtier benchmark at a 1:15 SET:GET ratio, and got very interesting results :)

Ops/sec

[chart: ops/sec, attachment 290021]

Average latency

[chart: average latency, attachment 290022]

99th percentile latency

[chart: 99th percentile latency, attachment 290023]
Did you use --tcp_nodelay with Dragonfly (Redis sets this by default)? Apparently this can help the latency figures.
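For reference, that flag is passed at Dragonfly startup. A sketch only: `--tcp_nodelay` is the flag mentioned above, while `--proactor_threads` is a separate, assumed tuning flag for the worker thread count; verify both against your version's `--help`:

```
dragonfly --tcp_nodelay --proactor_threads=12
```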
 
I've been keeping my eye out for Dragonfly, but it is disappointing to see such poor 99th-percentile latency statistics. It is still a fairly young project, though, and the operational improvements are something Redis badly needs.
It's by design, straight from the Dragonfly devs: they designed it for high CPU core counts, and most of their benchmarks use 30+ CPU cores, IIRC. See https://github.com/dragonflydb/dragonfly/discussions/1688. Basically, if you have a single or low CPU core count, keep using Redis.
Not a bug, but a design choice.

We had many questions like these before. @Niennienzz, we should probably add a documentation page on how to benchmark Dragonfly so we could just reply with a single link.

TL;DR: if you intend to run your memory store locally or on a single thread, please continue using Redis. It has lower latency for a single connection, and there are not many performance advantages to running Dragonfly in a single thread. So if this is your use case, do not use Dragonfly.

Every implementation has its own best practices.
Dragonfly cannot efficiently parallelize workloads when serving a single connection, because it must respect the order of incoming requests on that connection. Dragonfly shines when there are multiple client connections: dozens, hundreds, the more the merrier. Dragonfly also has an internal limit on how much pipelining it allows; AFAIK the factor is 10, meaning pipelining beyond that limit won't produce more throughput. The reason is that pipelining adds more latency and more memory pressure, and again, the best practice with Dragonfly is to parallelize using multiple connections.



I don't have the specific details any longer; it was at least two years ago, on a 24-core server. I looked at latency, since it is something Googlebot scores against.

I've been keeping a watchful eye on Skytable's progress. It looks promising under high concurrent loads, blowing everything else out of the water.
Yes, I saw Skytable's results too; another interesting option :)
Did you use --tcp_nodelay with Dragonfly (Redis sets this by default)? Apparently this can help the latency figures.
Ah, I read about that but didn't set it. Will have to retest to see. But the Dragonfly results are in line with their devs' claims; they've optimised for high-CPU-core-count servers :)
 