The task is to spin up a Redis instance in a Kubernetes cluster.
We will use a Master-Slave replication setup with Sentinels for monitoring and failover.
See the Redis: replication, part 2 – Master-Slave replication, and Redis Sentinel post for more details.
Redis cluster vs Redis replication
- Replication – includes a Redis Master instance that performs read-write operations and copies its data to one or more Redis Slave instances, which serve read-only operations. A Slave can be promoted to the Master role if its Master fails.
- Cluster – makes sense when your Redis holds more data than a single server’s RAM can fit. A Cluster uses sharding, and a client requesting a piece of data is redirected to the node that keeps that data.
Ways to run Redis in Kubernetes
Let’s see how we can perform the task – run Redis with replication in a Kubernetes cluster:
- manual setup – see the How to Create a Primary-Replica Redis Cluster in Kubernetes post
- Redis Operator – redis-operator
- Helm chart with the Redis Cluster – https://bitnami.com/stack/redis-cluster
- Helm with Redis Master-Slave replication – https://bitnami.com/stack/redis (our choice in this post)
In our case, we don’t need to worry about data persistence, as our Redis will be used as a cache service only, so we don’t need a Kubernetes PersistentVolume.
Helm chart deploy
First, we will run the Redis services from the chart with default settings, take a short look at them, and then proceed to the available parameters.
Add the Bitnami repository to your Helm:
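The standard commands for this step, assuming Helm 3 is installed:

```shell
# add the Bitnami charts repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```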
Deploy the Redis chart:
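A minimal install – the release name redis and the default namespace are assumptions here:

```shell
# install the bitnami/redis chart with default values;
# a random password is generated and stored in a Kubernetes Secret
helm install redis bitnami/redis
```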
Get the password which was generated during the chart’s deployment:
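For the bitnami/redis chart, the password lands in a Secret named after the release (assumed here to be redis) under the redis-password key:

```shell
# read the Secret and base64-decode the password
export REDIS_PASSWORD=$(kubectl get secret redis \
  -o jsonpath="{.data.redis-password}" | base64 --decode)
```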
port-forward to connect to the Redis Master instance:
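With the release named redis, the Master’s Service is redis-master, so a sketch of the connection looks like this:

```shell
# forward the Master's port to localhost, then connect with redis-cli
kubectl port-forward svc/redis-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a "$REDIS_PASSWORD" ping
```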
“It works!” (с)
Check which Services we have here:
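For example:

```shell
# list Services in the current namespace with their types and ports
kubectl get svc
```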
Its type is NodePort, while we need a LoadBalancer.

Also, we need Redis Sentinel, which is disabled by default:
- sentinel.enabled – Enable Sentinel containers
Okay, let’s go to the options to enable Sentinel and configure a LoadBalancer.
Make a list of the options that are useful in your case.
In my case this will be:
- sentinel.service.type – needs to be a LoadBalancer so that clients can ask Sentinel for the Master/Slave addresses
Create a new file ~/Temp/redis-opts.yaml to keep our desired parameters:
```yaml
global:
  redis:
    password: "blablacar"
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: "monitoring"
master:
  persistence:
    enabled: false
  service:
    type: LoadBalancer
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internal
slave:
  persistence:
    enabled: false
  service:
    type: LoadBalancer
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internal
sentinel:
  enabled: true
  service:
    type: LoadBalancer
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internal
```
Update the deployment with -f to specify our parameters file:
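Assuming the release name redis from the initial install:

```shell
# upgrade the existing release in place, applying our values file
helm upgrade redis bitnami/redis -f ~/Temp/redis-opts.yaml
```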
Check the password:
But why is the LoadBalancer public when it was set to be internal?
This is because the alb.ingress.kubernetes.io/scheme: internal annotation is used by the AWS ALB Ingress Controller, while the chart creates a plain Kubernetes Service of the LoadBalancer type, which provisions an AWS Classic Load Balancer.
Read the documentation – https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer – and update our annotations: instead of the alb.ingress.kubernetes.io/scheme: internal annotation, specify service.beta.kubernetes.io/aws-load-balancer-internal: "true":
```yaml
...
master:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
slave:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
sentinel:
  enabled: true
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
Update the deployment and check again:
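Re-run the upgrade with the same values file and list the Services; the DNS name of an internal AWS Classic Load Balancer typically starts with internal-:

```shell
# re-apply the values and check the external addresses of the Services
helm upgrade redis bitnami/redis -f ~/Temp/redis-opts.yaml
kubectl get svc
```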
Now, make a Write operation – it must be served by the Redis Master instance:
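A sketch, where <master-lb> is a placeholder for the DNS name of the Master’s LoadBalancer from the previous step:

```shell
# write a key via the Master endpoint (read-write)
redis-cli -h <master-lb> -a "$REDIS_PASSWORD" set test "test"
```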
And a Read operation:
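Similarly, with <slave-lb> as a placeholder for the Slave LoadBalancer’s DNS name:

```shell
# read the key back via the Slave endpoint (read-only)
redis-cli -h <slave-lb> -a "$REDIS_PASSWORD" get test
```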
And Sentinel status:
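Sentinel listens on its own port, 26379; <sentinel-lb> below is a placeholder for the Sentinel LoadBalancer’s DNS name:

```shell
# query Sentinel's own status
# (add -a "$REDIS_PASSWORD" if Sentinel authentication is enabled)
redis-cli -h <sentinel-lb> -p 26379 info sentinel
```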
Although, write operations should be performed only after getting the Master’s address via Sentinel:
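For example, using the SENTINEL get-master-addr-by-name command; mymaster is assumed here as the master set name (the chart’s default):

```shell
# ask Sentinel for the current Master's IP and port,
# then direct writes to that address
redis-cli -h <sentinel-lb> -p 26379 sentinel get-master-addr-by-name mymaster
```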
Check the documentation.
Also published on Medium.