Scale etcd

This website/page will be End-of-Life (EOL) after 31 August 2024. We recommend visiting the OpenEBS Documentation for the latest Mayastor documentation (v2.6 and above).

Mayastor is now also referred to as OpenEBS Replicated PV Mayastor.

By default, Mayastor creates three etcd members. Simply increasing the number of etcd replicas will result in an error; the configuration changes described in this guide are required to make scaling work.

Overview of StatefulSets

StatefulSets are Kubernetes resources designed for managing stateful applications. They provide stable network identities and persistent storage for pods. StatefulSets ensure ordered deployment and scaling, support persistent volume claims, and manage the state of applications. They are commonly used for databases, messaging systems, and distributed file systems. Here's how StatefulSets function:

  • For a StatefulSet with N replicas, when pods are deployed, they are created sequentially in order from {0..N-1}.

  • When pods are deleted, they are terminated in reverse order from {N-1..0}.

  • Before a scaling operation is applied to a pod, all of its predecessors must be running and ready.

  • Before a pod is terminated, all of its successors must be completely shut down.

Mayastor uses an etcd database for persisting configuration and state information. etcd is set up as a Kubernetes StatefulSet when Mayastor is installed.
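The ordinal naming that these ordering rules rely on can be sketched with a short loop; the StatefulSet name and replica count below mirror this guide's defaults:

```shell
# Sketch: the pod names a StatefulSet creates, in creation order {0..N-1}.
# "mayastor-etcd" and REPLICAS=3 mirror the Mayastor defaults in this guide.
STS=mayastor-etcd
REPLICAS=3
for i in $(seq 0 $((REPLICAS - 1))); do
  echo "${STS}-${i}"
done
```

Scaling up appends the next ordinal (mayastor-etcd-3); scaling down removes pods in the reverse order.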

kubectl get sts -n mayastor

Take a snapshot of etcd before proceeding. Refer to the etcd snapshot documentation for the detailed procedure.

  • From etcd-0/1/2, we can see that all the values are registered in the database. Once etcd is scaled up to n replicas, all the key-value pairs should be available across all the pods.

To scale up the etcd members, the following steps can be performed:

  1. Add a new etcd member

  2. Add a peer URL

  3. Create a PV (Persistent Volume)

  4. Validate key-value pairs

Step 1: Adding a New etcd Member (Scaling Up etcd Replica)

To increase the number of replicas to 4, use the following kubectl scale command:

kubectl scale sts mayastor-etcd -n mayastor --replicas=4

The new pod will be scheduled on an available node but will remain in the Pending state, because no PV/PVC exists yet to bind its volume.

kubectl get pods -n mayastor -l app=etcd

Step 2: Add a New Peer URL

Before creating a PV, we need to add the new peer URL (mayastor-etcd-3=http://mayastor-etcd-3.mayastor-etcd-headless.mayastor.svc.cluster.local:2380) and change the cluster's initial state from "new" to "existing", so that the new member joins the existing cluster once its PV is created and the pod starts. Because the new pod is still Pending, this change does not roll the other pods: StatefulSet updates are applied in reverse order from {N-1..0}, and each pod is only restarted once all of its predecessors are running and ready.

kubectl edit sts mayastor-etcd -n mayastor 
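Inside the editor, the change boils down to two environment entries on the etcd container. The variable names below follow the Bitnami etcd chart layout and are a sketch only; verify them against your own StatefulSet spec before saving:

```yaml
# Hypothetical fragment of the etcd container's env in the StatefulSet;
# variable names follow the Bitnami etcd chart and may differ per install.
- name: ETCD_INITIAL_CLUSTER_STATE
  value: "existing"   # changed from "new"
- name: ETCD_INITIAL_CLUSTER
  value: "mayastor-etcd-0=http://mayastor-etcd-0.mayastor-etcd-headless.mayastor.svc.cluster.local:2380,mayastor-etcd-1=http://mayastor-etcd-1.mayastor-etcd-headless.mayastor.svc.cluster.local:2380,mayastor-etcd-2=http://mayastor-etcd-2.mayastor-etcd-headless.mayastor.svc.cluster.local:2380,mayastor-etcd-3=http://mayastor-etcd-3.mayastor-etcd-headless.mayastor.svc.cluster.local:2380"
```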

Step 3: Create a Persistent Volume

Create a PV with the following YAML. Change the pod name/claim name based on the pod's unique identity.

This is only required for volumes created with the "manual" storage class.

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    meta.helm.sh/release-name: mayastor
    meta.helm.sh/release-namespace: mayastor
    pv.kubernetes.io/bound-by-controller: "yes"
  labels:
    app.kubernetes.io/managed-by: Helm
    statefulset.kubernetes.io/pod-name: mayastor-etcd-3
  name: etcd-volume-3
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-mayastor-etcd-3
    namespace: mayastor
  hostPath:
    path: /var/local/mayastor/etcd/pod-3
    type: ""
  persistentVolumeReclaimPolicy: Delete
  storageClassName: manual
  volumeMode: Filesystem

Step 4: Validate Key-Value Pairs

Run the following command from the new etcd pod and verify that the values match those in etcd-0/1/2; a mismatch indicates data loss.

kubectl exec -it mayastor-etcd-3 -n mayastor -- bash
etcdctl get --prefix ""
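To compare members without attaching to each pod, the client endpoint of every member can be derived from the same headless-service naming scheme used throughout this guide. The loop below only builds the endpoint URLs; the commented etcdctl invocation sketches how you would query each one (it needs etcdctl and in-cluster DNS to actually run):

```shell
# Build the client (port 2379) endpoint for each of the four etcd members,
# using the headless-service DNS scheme from this guide.
for i in 0 1 2 3; do
  EP="http://mayastor-etcd-${i}.mayastor-etcd-headless.mayastor.svc.cluster.local:2379"
  echo "${EP}"
  # etcdctl --endpoints="${EP}" get --prefix "" --count-only --write-out=fields
done
```

If every member reports the same key count and values, the new member has replicated the cluster state correctly.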
