While configuring a Redis server as our backend caching service, I wrote this post with some of the things to pay attention to in the Redis config file.
It's short enough, but with links to other posts and documentation.
Let's begin with the redis-cli utility.
It is installed alongside the Redis service, so it can be used right after the server's installation:
Also, you can use the Redis CLI with
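For example, a few basic redis-cli calls might look like this (assuming a Redis server is already listening on the default 127.0.0.1:6379):

```
$ redis-cli ping
PONG
$ redis-cli CONFIG GET timeout
$ redis-cli CONFIG SET timeout 0
```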
Redis server-level config
Disconnects a client's connection if it is inactive for N seconds.
Set it to zero to disable this behavior (idle clients will stay connected until the server is restarted):
... timeout 0 ...
In general, I can't see any reason to change the default value here.
See more at
Meanwhile, the Redis server will send TCP ACK (Acknowledgment) requests every N seconds, as set by this parameter, to keep the session alive:
... tcp-keepalive 300 ...
The default value is 300 seconds (but it may differ depending on the Redis version).
If the client does not respond to the server's ACK request, the connection will be closed.
If tcp-keepalive on the server side is set to zero (i.e. disabled), then "dead" connections will stay alive until the server is restarted.
To see how keep-alive affects performance in general, run a benchmark with the tcp-keepalive option enabled (-k == 1):
And with keep-alive disabled:
Again, I can't see any reason to change the default value here.
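As a sketch, the redis-benchmark calls for such a comparison might look like this (assuming a local Redis instance; the -t set,get and -q options are only there to keep the output short):

```
$ redis-benchmark -h 127.0.0.1 -p 6379 -k 1 -t set,get -q
$ redis-benchmark -h 127.0.0.1 -p 6379 -k 0 -t set,get -q
```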
This will create a full copy of the database. See the documentation here
Its behavior is configured with the
save option (see also Redis save, SAVE and BGSAVE).
Check the current value:
Save new settings:
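For example, the stock redis.conf snapshotting thresholds look like this (snapshot after 900 seconds if at least 1 key changed, and so on):

```
save 900 1
save 300 10
save 60 10000
```

The same values can also be applied at runtime with redis-cli CONFIG SET save "900 1 300 10 60 10000".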
When Redis is used for caching only, persistence can be disabled – just remove the
save option from the config file altogether.
But still, the same mechanism will be used for master-slave replication, if it is in use (see more in the Redis: replication, part 1 – an overview. Replication vs Sharding. Sentinel vs Cluster. Redis topology. post).
Append Only File – saves each operation performed by a master host to a log file.
Similarly to RDB, when we are using Redis for cache only there is no need for this option.
To disable it, set
appendonly to no:
... appendonly no ...
maxmemory sets the maximum amount of the host's memory that can be allocated to Redis.
It can be set as a percentage of the available RAM:
Or to zero to disable the limit at all, which is the default value for 64-bit systems; 32-bit systems have an implicit 3 GB limit by default.
When the limit is reached, Redis will make a decision based on the maxmemory-policy option.
Keeping in mind that we also have
memcached and PHP-FPM workers running on the backend hosts, let's set this limit to 50% of RAM.
Sets the policy to be used when Redis reaches the maxmemory limit.
Note: LRU – Least Recently Used
It can be one of the following:
volatile-lru: remove the least recently used keys among those with an expire set
allkeys-lru: remove the least recently used keys regardless of the expire set
volatile-random: remove random keys among those with an expire set
allkeys-random: remove random keys regardless of the expire set
volatile-ttl: remove the keys with the lowest TTL left
noeviction: do not delete keys at all – just return an error on write operations
In our case the backend developers are not sure that we are setting the
expire for all keys, and knowing that Redis will be used for caching only –
maxmemory-policy allkeys-lru can be set.
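So for a cache-only instance the resulting redis.conf fragment might look like this (the 2gb value is just an example for a host with 4 GB RAM, i.e. the 50% mentioned above):

```
maxmemory 2gb
maxmemory-policy allkeys-lru
```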
If Redis and an application are working together on the same host, you can try to use UNIX sockets instead of TCP connections.
Set Redis server to use socket:
...
unixsocket /tmp/redis.sock
unixsocketperm 755
...
This can add a significant performance gain, see more at the
Let's check it with the redis-benchmark utility.
Create a test config:
unixsocket /tmp/redis.sock
unixsocketperm 775
port 0
Run the benchmark via the socket:
And via a TCP port:
90909.09 vs 66666.67 requests per second – the difference is pretty obvious.
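The comparison above can be reproduced with something like this – run once against the socket from the test config, and once against a normally configured TCP instance:

```
$ redis-benchmark -s /tmp/redis.sock -t set,get -q
$ redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -q
```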
The log's verbosity level. With debug it is the most detailed and thus the most expensive for the host's resources (CPU, etc).
It can be set to debug, verbose, notice, or warning.
The default is notice, and while the application is still in the configuration stage it can be left at this value.
Transparent Huge Pages
A Linux kernel feature that allocates memory in larger pages, reducing the number of objects the kernel has to manipulate during virtual memory allocation and management. See more at the
In Redis, judging by the
You can check the current value by calling:
The value in the square brackets is the currently used one – madvise.
madvise tells the kernel to use THP only when it is requested directly with the madvise() system call.
The current THP usage can be checked with the following command:
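A minimal check, assuming a Linux host with the standard sysfs interface (the file may be absent in some containers or kernels, hence the fallback):

```shell
# print the current THP mode; the value in square brackets is the active one,
# e.g.: always [madvise] never
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || echo "THP sysfs interface is not available here"
```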
Sets the maximum number of clients connected at the same time.
The default value is 10,000 and can be overwritten via the
maxclients option, see the
At the same time, Redis will also check the operating system's limits, such as the kernel-level limit –
ulimit, a per-user limit per process:
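To check the per-process open-files limit for the current user (Redis counts each client connection as an open file descriptor, and reserves 32 descriptors for its internal use):

```shell
# soft limit on open file descriptors for the current shell/user;
# Redis' effective maxclients cannot exceed this value minus 32
ulimit -n
```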
For systemd-based systems, this limit can be set in the Redis unit file with the LimitNOFILE option.
Redis will set the client connections queue to the value specified in the
tcp-backlog option (511 by default).
Still, the operating system has its own limit –
net.core.somaxconn – and if it is lower than Redis' limit, the following warning will be produced:
The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128
What is the TCP backlog and
To better understand where and how the
tcp-backlog is applied and what
net.core.somaxconn's role is, let's review how a TCP session is established:
- the server: an application executes the listen() syscall, passing it a socket's file descriptor and, as the second argument, the accept backlog size (the tcp-backlog value taken from the redis.conf)
- the client: an application on the client side executes the connect() call and sends a SYN packet to the server
- the client: the connection changes its state to SYN_SENT
- the server: the new connection is set to the SYN_RCVD state and is saved in the SYN backlog (net.ipv4.tcp_max_syn_backlog) – the incomplete connections queue
- the server: sends SYN+ACK
- the client: sends ACK and changes the connection's state to ESTABLISHED
- the server: accepts the ACK, sets the connection's state to ESTABLISHED, and moves it to the accept backlog – the complete connections queue
- the server: executes the accept() call, taking a connection from the accept backlog
- the client: executes the write() call and starts sending data
- the server: calls the read() syscall and starts receiving data
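On Linux, both queues can be observed with the ss utility: for sockets in the LISTEN state, Send-Q shows the configured accept backlog size and Recv-Q the number of connections currently waiting in it:

```
$ ss -ltn
```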
So, if Redis passes a
tcp-backlog value to
listen() that is greater than the kernel's limit in the
net.core.somaxconn setting – you'll get the "TCP backlog setting cannot be enforced" message.
The default value is 128:
And it can be updated with the sysctl utility:
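Checking and raising it might look like this, assuming a Linux host (writing requires root; the 512 value is just an example slightly above Redis' tcp-backlog of 511):

```shell
# current kernel-wide accept-backlog ceiling
cat /proc/sys/net/core/somaxconn
# to raise it (as root), e.g.:
#   sysctl -w net.core.somaxconn=512
# and to persist it across reboots, add to /etc/sysctl.conf:
#   net.core.somaxconn = 512
```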
Well, this is the most ambiguous parameter, in my opinion.
I'd highly recommend reading the Redis: fork – Cannot allocate memory, Linux, virtual memory and vm.overcommit_memory post.
overcommit_memory steps in when Redis snapshots its data from memory to disk, specifically during the fork() call.
In our current case, when Redis is used for caching only and has no RDB or AOF backups enabled, there is no need to change
overcommit_memory – it's best to leave it at its default value of 0.
If you really want to set the boundaries yourself, it's best to use
overcommit_memory == 2 and limit the overcommit by setting the vm.overcommit_ratio.
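To check the current mode (0 – heuristic overcommit, 1 – always overcommit, 2 – strict accounting limited by vm.overcommit_ratio):

```shell
# current vm.overcommit_memory mode on a Linux host
cat /proc/sys/vm/overcommit_memory
```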
If the operating system has SWAP configured, it can move some of Redis' data to disk, and later, when Redis tries to access it, reading it back into memory can take a long time.
To avoid this, disable swap completely:
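For example (requires root; also remove or comment out the swap entry in /etc/fstab so it doesn't come back after a reboot):

```
# turn off all swap devices right now
$ swapoff -a
# or, as a softer alternative, minimize swapping instead of disabling it:
$ sysctl -w vm.swappiness=0
```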
Useful links:
- redis.conf
- sentinel.conf
- 5 Tips for Running Redis over AWS
- Redis: I like you, but you're crazy
- Redis Best Practices and Performance Tuning
- Learn Redis the hard way (in production)
- Optimizing Redis Usage For Caching
- Benchmarking the experimental Redis Multi-Threaded I/O
- Redis configuration for production
- Things that you may want to know about TCP Keepalives
- Redis Configuration Controls
- Understanding the Top 5 Redis Performance Metrics
- A collection of our notes to tweak Redis
- The impact of Transparent Huge Pages on system performance (in Russian)
- Redis latency due to Transparent Huge Pages
- Transparent Hugepages: measuring the performance impact
- Disable Transparent Hugepages
- Running Redis in production (2014)
- Redis configuration (in Russian)
Also published on