The whole story started when we decided to get rid of memcached.
Currently, we have memcached and Redis running on our backend servers. These instances work as standalone applications, i.e. they are not connected by any kind of replication, and this leads to a problem:
- we have three backend hosts which are hosted behind an AWS Application Load Balancer
- the ALB has Sticky Sessions enabled, but it works with cookies, which are ignored by our mobile applications (iOS/Android)
- as a result, when a client makes a request to the backend, it can sometimes get cached data which has already been removed or updated on another backend host in Redis or memcached
We have had this scheme since we migrated our backend application from the old infrastructure, where only one host was used, and still had no time to update it, although it had been in our plans for a long time.
Currently, to work around these issues we have a bunch of “hacks” on the backend which make additional checks to ensure the data is up to date. To get rid of those hacks, we decided to:
- drop memcached altogether, as Redis can be used for the functions where memcached is used now
- configure Redis replication across all hosts
Such a setup is described in the post below.
The first example covers a basic Master-Slave replication, and the second one – the Sentinel setup and configuration.
AWS EC2 instances with Debian 9 will be used here.
To work with the Redis hosts, three domain names will be used: redis-0.setevoy.org.ua for the master, and redis-1.setevoy.org.ua with redis-2.setevoy.org.ua for its two slaves.
In a minimal setup a single slave would be enough, but as the second example here will use Sentinel – let’s have three hosts from the beginning.
The basic Master-Slave replication
In this setup, the slaves will be the master’s read-only replicas, keeping the same data that is added to the master.
The master will send all data updates to its slaves – new keys, expires, etc.
If the link between the master and a slave is broken, the slave will try to reconnect to the master and perform a partial resynchronization, to catch up with the data from the point where the previous sync was interrupted.
If such a partial sync is not possible, the slave will ask the master for a full synchronization: the master will take a full snapshot of its data and send it to this slave, after which the usual sync will be resumed.
A couple of notes here to keep in mind:
- one master can have multiple slaves
- slaves can accept connections from other slaves, making a kind of “cascade” of replicated nodes – a master at the top, slave(s) in the middle, and slave(s) at the bottom
- it’s strongly recommended to enable data persistence on the master to avoid data loss – see the Safety of replication when master has persistence turned off
- slave will work in the read-only mode by default, see the Read-only slave
Redis Master configuration
Edit the /etc/redis/redis.conf file and in the bind option set the interfaces to listen on:

```
...
bind 0.0.0.0
...
```
You can specify multiple IPs here, separated by spaces:

```
...
bind 127.0.0.1 184.108.40.206
...
```
Other valuable options here:
- port 6379 – clear enough, but keep it in mind
- slave-read-only yes – slaves will work in the read-only mode; this doesn’t affect a master node
- requirepass foobared – the password for authorization on the master
- appendfilename "appendonly.aof" – decreases the chance of data loss, see the Redis Persistence documentation
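Put together, the master-related part of /etc/redis/redis.conf may look like this – a minimal sketch, assuming AOF persistence is wanted (appendfilename only takes effect when appendonly is enabled):

```
bind 0.0.0.0
port 6379
slave-read-only yes
requirepass foobared
# AOF persistence: appendonly must be enabled for appendfilename to matter
appendonly yes
appendfilename "appendonly.aof"
```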
Restart the service:
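On Debian 9 with Redis installed from the distribution packages, the restart may look like this (the redis-server unit name is an assumption – check yours if it differs):

```shell
sudo systemctl restart redis-server
```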
Check it with the redis-cli, using -a to pass the password:
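For example, a quick connectivity check – the hostname and password here are the example values used in this post:

```shell
redis-cli -h redis-0.setevoy.org.ua -a foobared ping
# PONG
```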
Check data replication status:
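A sketch of such a check, assuming the example host and password from above:

```shell
redis-cli -h redis-0.setevoy.org.ua -a foobared info replication
# role:master
# connected_slaves:0   <<< no slaves are attached yet at this point
```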
Add some new data:
Get it back:
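For example, with an arbitrary test key:

```shell
redis-cli -h redis-0.setevoy.org.ua -a foobared set test 'Hello, World!'
redis-cli -h redis-0.setevoy.org.ua -a foobared get test
# "Hello, World!"
```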
Okay – everything works here.
Redis Slave configuration
On the two remaining hosts, create the slave configuration.
It will be the same for both – just repeat it.
```
...
slaveof redis-0.setevoy.org.ua 6379
...
masterauth foobared
...
requirepass foobared
...
```
- slaveof – sets the master’s host and port
- masterauth – the password to authenticate on the master
- requirepass – the password for this replica itself
Restart the service:
Check its status:
The connection to the master is established and the synchronization is done – okay, check the data:
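A sketch of such a check on a slave, assuming a test key was already set on the master:

```shell
redis-cli -h redis-1.setevoy.org.ua -a foobared info replication
# role:slave
# master_link_status:up
redis-cli -h redis-1.setevoy.org.ua -a foobared get test
```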
Data present – all works here as well.
Changing Slave => Master roles
If the master goes down, you have to switch one of the slaves to become the new master.
If you try to add any data on a current slave, Redis will raise an error, as the slaves are in the read-only mode:
```
...
slave-read-only yes
...
```
Try to add something:
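A write attempt on a slave may look like this – the exact error wording depends on the Redis version:

```shell
redis-cli -h redis-1.setevoy.org.ua -a foobared set test2 'some value'
# (error) READONLY You can't write against a read only slave.
```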
Now connect to the slave:
Disable the slave-role:
Check its status now:
Add a new key one more time:
And get it back:
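The whole promotion sequence above can be sketched like this (host and key names are just examples):

```shell
# disable the slave role - the node becomes a standalone master
redis-cli -h redis-1.setevoy.org.ua -a foobared slaveof no one

# the role must be reported as master now
redis-cli -h redis-1.setevoy.org.ua -a foobared info replication
# role:master

# writes are accepted again
redis-cli -h redis-1.setevoy.org.ua -a foobared set test2 'some value'
redis-cli -h redis-1.setevoy.org.ua -a foobared get test2
# "some value"
```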
Keep in mind that as we made those changes on the Redis node directly, after a restart it will become a slave again, as this role is still set in its /etc/redis/redis.conf file with the slaveof option.
Now let’s add Sentinel to our replication: it will monitor the Redis nodes and perform role switches automatically.
The overall scheme will be next:
- M1 = Master
- R1 = Replica 1 / Slave 1
- R2 = Replica 2 / Slave 2
- S1 = Sentinel 1
- S2 = Sentinel 2
- S3 = Sentinel 3
M1 and S1 – will be on the redis-0, R1 and S2 – on the redis-1, R2 and S3 – on the redis-2.
To run a Sentinel daemon, the redis-server binary can be used, just with a separate config file.
First, let’s create such a config file on the Redis Master host:
```
sentinel monitor redis-test redis-0.setevoy.org.ua 6379 2
sentinel down-after-milliseconds redis-test 6001
sentinel failover-timeout redis-test 60000
sentinel parallel-syncs redis-test 1
bind 0.0.0.0
sentinel auth-pass redis-test foobared
```
- monitor – the master node’s address to be monitored; the trailing 2 is the number of Sentinel instances that must agree to make a decision (the quorum)
- down-after-milliseconds – the time after which the master will be considered down
- failover-timeout – the time to wait after changing the slave=>master roles
- parallel-syncs – the number of slaves synchronizing simultaneously after the master has changed
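Assuming the config above was saved as /etc/redis/sentinel.conf (the path here is just an example), Sentinel can then be started with the redis-server binary in the sentinel mode:

```shell
redis-server /etc/redis/sentinel.conf --sentinel
```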
Check Sentinel’s status using the 26379 port:
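For example:

```shell
redis-cli -p 26379 info sentinel
```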
- master0:name=redis-test,status=ok – the master is UP
- slaves=2 – it has two slaves
- sentinels=1 – only one Sentinel instance is running for now
You can get some basic information here, for example – the master’s IP:
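A sketch with the master name used in the config above:

```shell
redis-cli -p 26379 sentinel get-master-addr-by-name redis-test
# 1) "<the current master's IP>"
# 2) "6379"
```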
Now repeat the Sentinel start on both slave nodes using the same config as on the master, and in the Sentinel’s log you should see the new instances connecting:
Check status again:
sentinels=3 – okay.
Also, Sentinel will update its own settings when needed:
Here, the sentinel myid fdc750c7d6388a6142d9e27b68172f5846e75d8c line was added, plus the whole block after the # Generated by CONFIG REWRITE comment.
Redis Sentinel Automatic Failover
Now let’s check what happens if the master goes down.
You can do it manually either by killing the master’s process with kill -9, or by using the redis-cli to send the DEBUG sleep command with a time in seconds, which will make the master unresponsive.
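For example, to make the master hang for 60 seconds via the redis-cli:

```shell
redis-cli -h redis-0.setevoy.org.ua -a foobared debug sleep 60
```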
The Sentinel’s log on the master:
Here we are interested in these two lines:
Sentinel performed the slave-to-master reconfiguration.
220.127.116.11 is the old master, which is dead now, and 18.104.22.168 is the new master elected from the slaves – it’s running on the redis-1 host.
Try adding data here:
While a similar attempt on the old master, which has now become a slave, will lead to an error:
Now let’s kill a node completely and see what Sentinel will do:
Well – Sentinel just restarted that node.
Some useful Sentinel commands:

| Command | Description |
| --- | --- |
| SENTINEL masters | list all masters and their statuses |
| SENTINEL master <master-name> | one master’s status |
| SENTINEL slaves <master-name> | list all slaves and their statuses |
| SENTINEL sentinels <master-name> | list all Sentinel instances and their statuses |
| SENTINEL failover <master-name> | run a failover manually |
| SENTINEL flushconfig | force Sentinel to rewrite its configuration on disk |
| SENTINEL monitor <name> <ip> <port> <quorum> | add a new master |
| SENTINEL remove <master-name> | remove a master from being watched |
- Redis Replication
- Sentinel configuration file example
- Redis Sentinel — High Availability: Everything you need to know from DEV to PROD: Complete Guide
- Redis Sentinel: Make your dataset highly available
- How to run Redis Sentinel
Also published on Medium.