This website/page will be End-of-Life (EOL) after 31 August 2024. We recommend visiting the OpenEBS Documentation for the latest Mayastor documentation (v2.6 and above).
Mayastor is now also referred to as OpenEBS Replicated PV Mayastor.
Before deploying and using Mayastor please consult the Known Issues section of this guide.
The steps and commands which follow are intended only for use in conjunction with Mayastor version(s) 2.1.x.
Installing Mayastor via the Helm chart sets the StorageClass as 'default' for the Loki and etcd StatefulSets. This works in clusters which have a StorageClass named 'default'. If there is no 'default' StorageClass in the cluster, storage provisioning will fail for Loki and etcd. In such cases, you can change the StorageClass to 'manual', which will provision a PersistentVolume of type hostPath. To do this, make the following changes in the values.yaml file:
loki.persistence.storageClassName: manual
etcd.persistence.storageClassName: manual
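Expressed in values.yaml form, the two settings above correspond to the following (the exact nesting should be checked against the chart version in use):

```yaml
loki:
  persistence:
    storageClassName: manual
etcd:
  persistence:
    storageClassName: manual
```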
Add the OpenEBS Mayastor Helm repository.
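A sketch of the commands involved (the repository URL shown is the one published for the Mayastor 2.x Helm chart; verify it against the current documentation):

```bash
helm repo add mayastor https://openebs.github.io/mayastor-extensions/
helm repo update
```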
Run the following command to discover all the stable versions of the added chart repository:
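Assuming the repository was added under the alias mayastor, as above:

```bash
helm search repo mayastor --versions
```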
To discover all the versions (including unstable versions), execute: helm search repo mayastor --devel --versions
Run the following command to install Mayastor version 2.1.
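A sketch of the install command, assuming the repository alias added above and a dedicated mayastor namespace (pin the chart version to the 2.1.x release you intend to deploy):

```bash
helm install mayastor mayastor/mayastor -n mayastor --create-namespace --version 2.1.0
```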
Verify the status of the pods by running the command:
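Assuming the mayastor namespace used above:

```bash
kubectl get pods -n mayastor
```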
The objective of this section is to provide the user and evaluator of Mayastor with a topological view of the gross anatomy of a Mayastor deployment. A description will be made of the expected pod inventory of a correctly deployed cluster, the roles and functions of the constituent pods and related Kubernetes resource types, and of the high level interactions between them and the orchestration thereof.
More detailed guides to Mayastor's components, their design and internal structure, and instructions for building Mayastor from source, are maintained within the project's GitHub repositories.
Name | Resource Type | Function | Frequency in Cluster
---|---|---|---
Control Plane | | |
agent-core | Deployment | Principal control plane actor | Single
csi-controller | Deployment | Hosts Mayastor's CSI controller implementation and CSI provisioner side car | Single
api-rest | Pod | Hosts the public API REST server | Single
api-rest | Service | Exposes the REST API server | Single
operator-diskpool | Deployment | Hosts DiskPool operator | Single
csi-node | DaemonSet | Hosts CSI driver node plugin containers | All worker nodes
etcd | StatefulSet | Hosts etcd server container | Configurable (recommended: three replicas)
etcd | Service | Exposes etcd DB endpoint | Single
etcd-headless | Service | Exposes etcd DB endpoint | Single
io-engine | DaemonSet | Hosts Mayastor I/O engine | User-selected nodes
DiskPool | CRD | Declares a Mayastor pool's desired state and reflects its current state | User-defined, one or many
Additional components | | |
metrics-exporter-pool | Sidecar container (within io-engine DaemonSet) | Exports pool-related metrics in Prometheus format | All worker nodes
pool-metrics-exporter | Service | Exposes exporter API endpoint to Prometheus | Single
promtail | DaemonSet | Scrapes logs of Mayastor-specific pods and exports them to Loki | All worker nodes
loki | StatefulSet | Stores the historical logs exported by promtail pods | Single
loki | Service | Exposes the Loki API endpoint via ClusterIP | Single
The io-engine pod encapsulates Mayastor containers, which implement the I/O path from the block devices at the persistence layer, up to the relevant initiators on the worker nodes mounting volume claims. The Mayastor process running inside this container performs four major functions:
Creates and manages DiskPools hosted on that node.
Creates, exports, and manages volume controller objects hosted on that node.
Creates and exposes replicas from DiskPools hosted on that node over NVMe-TCP.
Provides a gRPC interface service to orchestrate the creation, deletion and management of the above objects, hosted on that node.

Before the io-engine pod starts running, an init container attempts to verify connectivity to the agent-core in the namespace where Mayastor has been deployed. If a connection is established, the io-engine pod registers itself over gRPC with the agent-core. In this way, the agent-core maintains a registry of nodes and their supported API versions.

The scheduling of these pods is determined declaratively by using a DaemonSet specification. By default, a nodeSelector field is used within the pod spec to select, as recipients of an io-engine pod, all worker nodes to which the user has attached the label openebs.io/engine=mayastor. In this way, the node count and location are set appropriately to the hardware configuration of the worker nodes, and to the capacity and performance demands of the cluster.
The csi-node pods within a cluster implement the node plugin component of Mayastor's CSI driver. As such, their function is to orchestrate the mounting of Mayastor-provisioned volumes on worker nodes on which application pods consuming those volumes are scheduled. By default, a csi-node pod is scheduled on every node in the target cluster, as determined by a DaemonSet resource of the same name. Each of these pods encapsulates two containers, csi-node and csi-driver-registrar. The node plugin does not need to run on every worker node within a cluster, and this behavior can be modified, if desired, through the application of appropriate node labeling and the addition of a corresponding nodeSelector entry within the pod spec of the csi-node DaemonSet. However, it should be noted that if a node does not host a plugin pod, then it will not be possible to schedule on it any application pod which is configured to mount Mayastor volumes.
etcd is a distributed reliable key-value store for the critical data of a distributed system. Mayastor uses etcd as a reliable persistent store for its configuration and state data.
The supportability tool is used to create support bundles (archive files) by interacting with multiple services present in the cluster where Mayastor is installed. These bundles contain information about Mayastor resources like volumes, pools and nodes, and can be used for debugging. The tool can collect the following information:
Topological information of Mayastor's resource(s) by interacting with the REST service
Historical logs by interacting with Loki. If Loki is unavailable, it interacts with the kube-apiserver to fetch logs.
Mayastor-specific Kubernetes resources by interacting with the kube-apiserver
Mayastor-specific information from etcd (internal) by interacting with the etcd server.
When a node allocates storage capacity for a replica of a persistent volume (PV) it does so from a DiskPool. Each node may create and manage zero, one, or more such pools. The ownership of a pool by a node is exclusive. A pool can manage only one block device, which constitutes the entire data persistence layer for that pool and thus defines its maximum storage capacity.
A pool is defined declaratively, through the creation of a corresponding DiskPool custom resource on the cluster. The DiskPool must be created in the same namespace where Mayastor has been deployed. User-configurable parameters of this resource type include a unique name for the pool, the node name on which it will be hosted, and a reference to a disk device which is accessible from that node. The pool definition requires the reference to its member block device to adhere to a discrete range of schemas, each associated with a specific access mechanism, transport, or device type.
Mayastor versions before 2.0.1 had an upper limit on the number of retry attempts in the case of failure in create events in the DiskPool (DSP) operator. With this release, the upper limit has been removed, which ensures that the DiskPool operator indefinitely reconciles with the CR.
spec.disks under the DiskPool CR: It is highly recommended to specify the disk using a unique device link that remains unaltered across node reboots, such as its UUID. To get the UUID of a disk, execute: sudo blkid
Usage of the device name (for example, /dev/sdx) is not advised, as it may change if the node reboots, which might cause data corruption.
Type | Format | Example
---|---|---
Disk (non-PCI) with disk-by-guid reference (best practice) | Device File | aio:////dev/disk/by-uuid/<uuid> OR uring:////dev/disk/by-uuid/<uuid>
Asynchronous Disk (AIO) | Device File | /dev/sdx
Asynchronous Disk I/O (AIO) | Device File | aio:///dev/sdx
io_uring | Device File | uring:///dev/sdx
Once a node has created a pool it is assumed that it henceforth has exclusive use of the associated block device; it should not be partitioned, formatted, or shared with another application or process. Any pre-existing data on the device will be destroyed.
A RAM drive isn't suitable for use in production as it uses volatile memory for backing the data. The memory for this disk emulation is allocated from the hugepages pool. Make sure to allocate sufficient additional hugepages resource on any storage nodes which will provide this type of storage.
To get started, it is necessary to create and host at least one pool on one of the nodes in the cluster. The number of pools available limits the extent to which the synchronous N-way mirroring (replication) of PVs can be configured; the number of pools configured should be equal to or greater than the desired maximum replication factor of the PVs to be created. Also, while placing data replicas ensure that appropriate redundancy is provided. Mayastor's control plane will avoid placing more than one replica of a volume on the same node. For example, the minimum viable configuration for a Mayastor deployment which is intended to implement 3-way mirrored PVs must have three nodes, each having one DiskPool, with each of those pools having one unique block device allocated to it.
Using one or more of the following examples as templates, create the required type and number of pools.
When using the examples given as guides to creating your own pools, remember to replace the values for the fields "metadata.name", "spec.node" and "spec.disks" as appropriate to your cluster's intended configuration. Note that whilst the "disks" parameter accepts an array of values, the current version of Mayastor supports only one disk device per pool.
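A sketch of such a DiskPool definition for a physical device, following the by-uuid recommendation above (the pool name, node name and device UUID are placeholders; the apiVersion shown is the one used by Mayastor 2.1 and should be checked against the CRD installed in your cluster):

```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["/dev/disk/by-uuid/<uuid>"]
```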
The status of DiskPools may be determined by reference to their cluster CRs. Available, healthy pools should report their State as online. Verify that the expected number of pools have been created and that they are online.
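One way to check, assuming the DiskPool short name dsp and the mayastor namespace:

```bash
kubectl get dsp -n mayastor
```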
Mayastor-2.0.1 adds two new fields to the DiskPool operator YAML:
status.cr_state: The cr_state, which can either be creating, created or terminating, will be used by the operator to reconcile with the CR. The cr_state is set to Terminating when a CR delete event is received.
status.pool_status: The pool_status represents the status of the respective control plane pool resource.
We illustrate this quickstart guide with two examples of possible use cases: one which offers no data redundancy (i.e. a single data replica), and another having three data replicas. The default installation of Mayastor includes the creation of a StorageClass with the name mayastor-single-replica, whose replication factor is set to 1. Users may either use this StorageClass or create their own.

Loki aggregates and centrally stores logs from all Mayastor containers which are deployed in the cluster. Promtail is a log collector built specifically for Loki; it uses its configuration file for target discovery and includes analogous features for labeling, transforming, and filtering logs from containers before ingesting them into Loki. Pool configuration and state information can also be obtained by using the kubectl-mayastor plugin.

Mayastor dynamically provisions PersistentVolumes (PVs) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. For a detailed description of these parameters, see the storage class parameters section of this guide. Most importantly, the StorageClass definition is used to control the level of data protection afforded to it (that is, the number of synchronous data replicas which are maintained for purposes of redundancy). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations. Both of the example YAMLs given below can be modified as required to match your own desired test cases, within the limitations of the cluster under test.
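A sketch of two such StorageClass definitions, matching the single-replica and three-replica use cases above (the provisioner name io.openebs.csi-mayastor is the one used by Mayastor's CSI driver; verify it, and the parameter values, against your installed release):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1
parameters:
  repl: "1"
  protocol: nvmf
  ioTimeout: "60"
  fsType: xfs
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  repl: "3"
  protocol: nvmf
  ioTimeout: "60"
  fsType: xfs
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
```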
This quickstart guide describes the actions necessary to perform a basic installation of Mayastor on an existing Kubernetes cluster, sufficient for evaluation purposes. It assumes that the target cluster will pull the Mayastor container images directly from OpenEBS public container repositories. Where preferred, it is also possible to build Mayastor locally from source and deploy the resultant images but this is outside of the scope of this guide.
Deploying and operating Mayastor in production contexts requires a foundational knowledge of Mayastor internals and best practices, found elsewhere within this documentation.
If all verification steps in the preceding stages were satisfied, then Mayastor has been successfully deployed within the cluster. In order to verify basic functionality, we will now dynamically provision a Persistent Volume based on a Mayastor StorageClass, mount that volume within a small test pod which we'll create, and use the FIO utility within that pod to check that I/O to the volume is processed correctly.
Use kubectl to create a PVC based on a StorageClass that you created in the previous stage. In the example shown below, we'll consider that StorageClass to have been named "mayastor-1". Replace the value of the field "storageClassName" with the name of your own Mayastor-based StorageClass.
For the purposes of this quickstart guide, it is suggested to name the PVC "ms-volume-claim", as this is what will be illustrated in the example steps which follow.
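A minimal PVC manifest along these lines can be applied with kubectl (the StorageClass name mayastor-1 follows the example above; the 1Gi size is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ms-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-1
```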
If you used the StorageClass example from the previous stage, then the volume binding mode is set to WaitForFirstConsumer. That means that the volume won't be created until there is an application using it. We will go ahead and create the application pod, and then check all the resources that should have been created as part of that in Kubernetes.
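A sketch of such a test pod; the container image shown (nixery.dev/shell/fio) is simply one convenient way to obtain the fio utility and is an assumption of this example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  volumes:
    - name: ms-volume
      persistentVolumeClaim:
        claimName: ms-volume-claim
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ms-volume
```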
The Mayastor CSI driver will cause the application pod and the corresponding Mayastor volume's NVMe target/controller ("Nexus") to be scheduled on the same Mayastor node, in order to assist with restoration of volume and application availability in the event of a storage node failure.
In this version, applications using PVs provisioned by Mayastor can only be successfully scheduled on Mayastor nodes. This behaviour is controlled by the local: parameter of the corresponding StorageClass, which by default is set to a value of true. Therefore, this is the only supported value for this release; setting a non-local configuration may cause scheduling of the application pod to fail, as the PV cannot be mounted to a worker node other than a Mayastor Storage Node (MSN). This behaviour will change in a future release.
We will now verify the Volume Claim and that the corresponding Volume and Mayastor Volume resources have been created and are healthy.
The status of the PVC should be "Bound".
Substitute the example volume name with that shown under the "VOLUME" heading of the output returned by the preceding "get pvc" command, as executed in your own cluster
The status of the volume should be "online".
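A sketch of the verification steps, assuming the PVC name used above and that the kubectl-mayastor plugin is installed (substitute the PV name reported under the VOLUME heading of the PVC output):

```bash
kubectl get pvc ms-volume-claim
kubectl get pv <volume-name-from-pvc-output>
kubectl mayastor get volumes
```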
Verify that the pod has been deployed successfully, having the status "Running". It may take a few seconds after creating the pod before it reaches that status, proceeding via the "ContainerCreating" state.
Note: The example FIO pod resource declaration included with this release references a PVC named ms-volume-claim, consistent with the example PVC created in this section of the quickstart. If you have elected to name your PVC differently, deploy the Pod using the example YAML, modifying the claimName field appropriately.
We now execute the FIO Test utility against the Mayastor PV for 60 seconds, checking that I/O is handled as expected and without errors. In this quickstart example, we use a pattern of random reads and writes, with a block size of 4k and a queue depth of 16.
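One way to run such a test from within the pod created above (the file path matches the /volume mount point used in the example pod; adjust the size to suit your volume):

```bash
kubectl exec -it fio -- fio --name=benchtest --size=800m --filename=/volume/test \
  --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --time_based --runtime=60
```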
If no errors are reported in the output then Mayastor has been correctly configured and is operating as expected. You may create and consume additional Persistent Volumes with your own test applications.
2MiB-sized Huge Pages must be supported and enabled on the mayastor storage nodes. A minimum number of 1024 such pages (i.e. 2GiB total) must be available exclusively to the Mayastor pod on each node, which should be verified thus:
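One way to check this on a storage node:

```bash
grep HugePages /proc/meminfo
```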
If fewer than 1024 pages are available then the page count should be reconfigured on the worker node as required, accounting for any other workloads which may be scheduled on the same node and which also require them. For example:
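For instance, using sysctl on the worker node:

```bash
sudo sysctl -w vm.nr_hugepages=1024
```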
This change should also be made persistent across reboots by adding the required value to the file /etc/sysctl.conf like so:
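A line of the following form (matching the 1024-page example above) would be added:

```
vm.nr_hugepages = 1024
```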
If you modify the Huge Page configuration of a node, you MUST either restart kubelet or reboot the node. Mayastor will not deploy correctly if the available Huge Page count as reported by the node's kubelet instance does not satisfy the minimum requirements.
All worker nodes which will have Mayastor pods running on them must be labelled with the OpenEBS engine type "mayastor". This label will be used as a node selector by the Mayastor Daemonset, which is deployed as a part of the Mayastor data plane components installation. To add this label to a node, execute:
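Substituting the name of the node to be labelled:

```bash
kubectl label node <node_name> openebs.io/engine=mayastor
```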
The StorageClass resource in Kubernetes is used to supply parameters to volumes when they are created. It is a convenient way of grouping volumes with common characteristics. All parameters take a string value. A brief explanation of each supported Mayastor parameter follows.
The StorageClass parameter local has been deprecated and is a breaking change in Mayastor version 2.0. Ensure that this parameter is not used.
fsType: The file system that will be used when mounting the volume. The default file system when not specified is 'ext4'. We recommend using 'xfs', which is considered to be more advanced and performant. Ensure that XFS is installed on all nodes in the cluster before using it.
ioTimeout: Expressed in seconds, this sets the io_timeout parameter in the Linux block device driver for the Mayastor volume. It also sets the ctrl-loss-tmo timeout in the Linux NVMe driver, which is used for detecting an inaccessible target (the nexus, in our case). In either case, if I/O cannot be served by the Mayastor volume, because it is stuck or because the NVMe-oF target is not there, it will take roughly this amount of time for the initiator to return an error to the data consumer, which can be either a filesystem or an application (if using a raw block volume). The usual reaction of a filesystem to such an error is to switch to a read-only state and require manual intervention from the user (remounting). Be careful when setting this to a low value, because any node reboot that does not fit into this time interval may require manual intervention afterwards or can result in application failure, depending on how the error is handled by the application.
The setting is supported only when using "nvmf" protocol.
The parameter 'protocol' takes the value nvmf (NVMe over TCP protocol). It is used to mount the volume (target) on the application node.
repl: The string value should be a number greater than zero. The Mayastor control plane will always try to keep this many copies of the data, if possible. If set to one, the volume does not tolerate any node failure. If set to two, it tolerates one node failure; if set to three, two node failures, and so on.
The agents.core.capacity.thin spec present in the Mayastor Helm chart consists of the following configurable parameters that can be used to control the scheduling of thinly provisioned replicas:
poolCommitment: Specifies the maximum allowed pool commitment limit (in percent).
volumeCommitment: Specifies the minimum amount of free space that must be present in each replica pool in order to create new replicas for an existing volume. This value is specified as a percentage of the volume size.
volumeCommitmentInitial: Specifies the minimum amount of free space that must be present in each replica pool in order to create new replicas for a new volume. This value is specified as a percentage of the volume size.
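Expressed as Helm chart values, these parameters sit under the following structure (the percentage values shown are illustrative placeholders, not recommendations):

```yaml
agents:
  core:
    capacity:
      thin:
        poolCommitment: "250%"
        volumeCommitment: "40%"
        volumeCommitmentInitial: "40%"
```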
Note:
By default, volumes are provisioned as thick.
For a pool of a particular size, say 10 gigabytes, a volume larger than 10 gigabytes cannot be created, as Mayastor 2.1.0 does not support pool expansion.
The replicas for a given volume can be either all thick or all thin. The same volume cannot have a combination of thick and thin replicas.
To verify the status of a volume, the kubectl-mayastor plugin can be used.
Volumes can be either thick or thin provisioned. Adding the thin parameter to the StorageClass YAML allows the volume to be thinly provisioned. To do so, add thin: true under the parameters spec in the StorageClass YAML. When volumes are thinly provisioned, the user needs to monitor the pools; if the pools start to run out of space, then either new pools must be added or volumes deleted to prevent thinly provisioned volumes from becoming degraded or faulted. This is because when a pool with more than one replica runs out of space, Mayastor moves the largest out-of-space replica to another pool and then executes a rebuild. It then checks whether all the replicas have sufficient space; if not, it moves the next largest replica to another pool, and this process continues until all the replicas have sufficient space.
The capacity usage on a pool can be monitored using the kubectl-mayastor plugin.
All worker nodes must satisfy the following requirements:
x86-64 CPU cores with SSE4.2 instruction support
(Tested on) Linux kernel 5.15; (Recommended) Linux kernel 5.13 or higher. Mayastor has been tested on Ubuntu 20.04 LTS (kernel 5.13.0-27-generic); Windows is not supported. The kernel should have the following modules loaded:
nvme-tcp
ext4 and optionally xfs
Helm version must be v3.7 or later.
Each worker node which will host an instance of an io-engine pod must have the following resources free and available for exclusive use by that pod:
Two CPU cores
1GiB RAM
HugePage support
A minimum of 2GiB of 2MiB-sized pages
Ensure that the following ports are not in use on the node:
10124: Mayastor gRPC server will use this port.
8420 / 4421: NVMf targets will use these ports.
The firewall settings should not restrict connection to the node.
Disks must be unpartitioned, unformatted, and used exclusively by the DiskPool.
The minimum capacity of the disks should be 10 GB.
Kubernetes core v1 API-group resources: Pod, Event, Node, Namespace, ServiceAccount, PersistentVolume, PersistentVolumeClaim, ConfigMap, Secret, Service, Endpoint.
Kubernetes batch API-group resources: CronJob, Job.
Kubernetes apps API-group resources: Deployment, ReplicaSet, StatefulSet, DaemonSet.
Kubernetes storage.k8s.io API-group resources: StorageClass, VolumeSnapshot, VolumeSnapshotContent, VolumeAttachment, CSINode.
Kubernetes apiextensions.k8s.io API-group resources: CustomResourceDefinition.
Mayastor custom resources, that is, openebs.io API-group resources: DiskPool.
Custom resources from Helm chart dependencies of Jaeger that are helpful for debugging:
ConsoleLink resource from the console.openshift.io API group
ElasticSearch resource from the logging.openshift.io API group
Kafka and KafkaUsers from the kafka.strimzi.io API group
ServiceMonitor from the monitoring.coreos.com API group
Ingress from the networking.k8s.io API group and from the extensions API group
Route from the route.openshift.io API group
All resources from the jaegertracing.io API group
The minimum supported worker node count is three nodes. When using the synchronous replication feature (N-way mirroring), the number of worker nodes to which Mayastor is deployed should be no less than the desired replication factor.
Mayastor supports the export and mounting of volumes over NVMe-oF TCP only. Worker node(s) on which a volume may be scheduled (to be mounted) must have the requisite initiator support installed and configured. In order to reliably mount Mayastor volumes over NVMe-oF TCP, a worker node's kernel version must be 5.13 or later and the nvme-tcp kernel module must be loaded.
Cordoning a node marks or taints the node as unschedulable. This prevents the scheduler from deploying new resources on that node. However, the resources that were deployed prior to cordoning off the node will remain intact.
This feature is in line with the node-cordon functionality of Kubernetes.
To add a label and cordon a node, execute:
To get the list of cordoned nodes, execute:
To view the labels associated with a cordoned node, execute:
In order to make a node schedulable again, execute:
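A sketch of these operations using the kubectl-mayastor plugin (the label name my_cordon is illustrative; confirm the exact sub-command syntax with your installed plugin version):

```bash
kubectl mayastor cordon node <node_name> my_cordon
kubectl mayastor get cordon nodes
kubectl mayastor get cordon node <node_name>
kubectl mayastor uncordon node <node_name> my_cordon
```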
The node drain functionality marks the node as unschedulable and then gracefully moves all the volume targets off the drained node. This feature is in line with the node drain functionality of Kubernetes.
To start the drain operation, execute:
To get the list of nodes on which the drain operation has been performed, execute:
To halt the drain operation or to make the node schedulable again, execute:
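A sketch of the corresponding plugin commands (the label name my_drain is illustrative; confirm the syntax with your installed plugin version):

```bash
kubectl mayastor drain node <node_name> my_drain
kubectl mayastor get drain nodes
kubectl mayastor uncordon node <node_name> my_drain
```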
This section provides an overview of the topology and function of the Mayastor data plane. Developer level documentation is maintained within the project's GitHub repository.
An instance of the mayastor binary running inside a Mayastor container, which is encapsulated by a Mayastor Pod.
Mayastor terminology. A data structure instantiated within a Mayastor instance which performs I/O operations for a single Mayastor volume. Each nexus acts as an NVMe controller for the volume it exports. Logically it is composed chiefly of a 'static' function table which determines its base I/O handling behaviour (held in common with all other nexus of the cluster), combined with configuration information specific to the Mayastor volume it exports, such as the identity of its children. The function of a nexus is to route I/O requests for its exported volume which are received on its host container's target to the underlying persistence layer, via any applied transformations ("data services"), and to return responses to the calling initiator back along that same I/O path.
Mayastor's volume management abstraction. Block devices contributing storage capacity to a Mayastor deployment do so by their inclusion within configured storage pools. Each Mayastor node can host zero or more pools and each pool can "contain" a single base block device as a member. The total capacity of the pool is therefore determined by the size of that device. Pools can only be hosted on nodes running an instance of a mayastor pod.
Multiple volumes can share the capacity of one pool but thin provisioning is not supported. Volumes cannot span multiple pools for the purposes of creating a volume larger in size than could be accommodated by the free capacity in any one pool.
Internally a storage pool is an implementation of an SPDK Logical Volume Store
A code abstraction of a block-level device to which I/O requests may be sent, presenting a consistent device-independent interface. Mayastor's bdev abstraction layer is based upon that of Intel's Storage Performance Development Kit (SPDK).
base bdev - Handles I/O directly, e.g. a representation of a physical SSD device
logical volume - A bdev representing an SPDK Logical Volume ("lvol bdev")
Mayastor terminology. An lvol bdev (a "logical volume", created within a pool and consuming pool capacity) which is being exported by a Mayastor instance, for consumption by a nexus (local or remote to the exporting instance) as a "child"
Mayastor terminology. An NVMe controller created and owned by a given Nexus, which handles I/O downstream from the nexus' target by routing it to a replica associated with that child.
A nexus has a minimum of one child, which must be local (local: exported as a replica from a pool hosted by the same mayastor instance as hosts the nexus itself). If the Mayastor volume being exported by the nexus is derived from a StorageClass with a replication factor greater than 1 (i.e. synchronous N-way mirroring is enabled), then the nexus will have additional children, up to the desired number of data copies.
The purpose of a Mayastor storage target is to allow the discovery of, and the acceptance of I/O for, a volume by a client initiator.
For volumes based on a StorageClass defined as having a replication factor of 1, a single data copy is maintained by Mayastor. The I/O path is largely (entirely, if using malloc:/// pool devices) constrained to within the bounds of a single mayastor instance, which hosts both the volume's nexus and the storage pool in use as its persistence layer.
Each mayastor instance presents a user-space storage target over NVMe-oF TCP. Worker nodes mounting a Mayastor volume for a scheduled application pod to consume are directed by Mayastor's CSI driver implementation to connect to the appropriate transport target for that volume and perform discovery, after which they are able to send I/O to it, directed at the volume in question. Regardless of how many volumes, and by extension how many nexus a mayastor instance hosts, all share the same target instances.
Application I/O received on a target for a volume is passed to the virtual bdev at the front-end of the nexus hosting that volume. In the case of a non-replicated volume, the nexus is composed of a single child, to which the I/O is necessarily routed. As a virtual bdev itself, the child handles the I/O by routing it to the next device, in this case the replica that was created for this child. In non-replicated scenarios, both the volume's nexus and the pool which hosts its replica are co-located within the same mayastor instance, hence the I/O is passed from child to replica using SPDK bdev routines, rather than a network level transport. At the pool layer, a blobstore maps the lvol bdev exported as the replica concerned to the base bdev on which the pool was constructed. From there, other than for malloc:/// devices, the I/O passes to the host kernel via either aio or io_uring, thence via the appropriate storage driver to the physical disk device.
The disk device's response to the I/O request is returned back along the same path to the caller's initiator.
If the StorageClass on which a volume is based specifies a replication factor of greater than one, then a synchronous mirroring scheme is employed to maintain multiple redundant data copies. For a replicated volume, creation and configuration of the volume's nexus requires additional orchestration steps. Prior to creating the nexus, not only must a local replica be created and exported as for the non-replicated case, but the requisite count of additional remote replicas required to meet the replication factor must be created and exported from Mayastor instances other than that hosting the nexus itself. The control plane core-agent component will select appropriate pool candidates, which includes ensuring sufficient available capacity and that no two replicas are sited on the same Mayastor instance (which would compromise availability during coincident failures). Once suitable replicas have been successfully exported, the control plane completes the creation and configuration of the volume's nexus, with the replicas as its children. In contrast to their local counterparts, remote replicas are exported, and so connected to by the nexus, over NVMe-oF using a user-mode initiator and target implementation from the SPDK.
Write I/O requests to the nexus are handled synchronously; the I/O is dispatched to all (healthy) children and only when completion is acknowledged by all is the I/O acknowledged to the calling initiator via the nexus front-end. Read I/O requests are similarly issued to all children, with just the first response returned to the caller.
Mayastor supports seamless upgrades starting with target version 2.1.0 and later, and source versions 2.0.0 and later. To upgrade from a previous version (1.0.5 or prior) to 2.1.0 or later, visit Legacy Upgrade Support.
Supported upgrade path: from 2.0.x to 2.1.0.
The upgrade operation utilises the Mayastor Kubectl Plugin.
The upgrade process is generally non-disruptive for volumes with a replication factor greater than 1 and all replicas being healthy, prior to starting the upgrade.
To upgrade Mayastor deployment on the Kubernetes cluster, execute:
To view all the available options and sub-commands that can be used with the upgrade command, execute:
To view the status of upgrade, execute:
To view the logs of upgrade job, execute:
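A sketch of these commands using the kubectl-mayastor plugin (the upgrade job name is a placeholder; confirm sub-command names with --help on your installed plugin version):

```bash
kubectl mayastor upgrade
kubectl mayastor upgrade --help
kubectl mayastor get upgrade-status
kubectl logs -n mayastor job/<upgrade-job-name>
```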
The time taken to upgrade is directly proportional to the number of Mayastor nodes and Mayastor volumes.
To upgrade to a particular Mayastor version, ensure you are using the same version of kubectl plugin.
The above process of upgrade creates one Job in the namespace where Mayastor is installed, one ClusterRole, one ClusterRoleBinding and one ServiceAccount.
For basic test and evaluation purposes it may not always be practical or possible to allocate physical disk devices on a cluster to Mayastor for use within its pools. As a convenience, Mayastor supports two disk device type emulations for this purpose:
Memory-Backed Disks ("RAM drive")
File-Backed Disks
Memory-backed Disks are the most readily provisioned if node resources permit, since Mayastor will automatically create and configure them as it creates the corresponding pool. However they are the least durable option - since the data is held entirely within memory allocated to a Mayastor pod, should that pod be terminated and rescheduled by Kubernetes, that data will be lost. Therefore it is strongly recommended that this type of disk emulation be used only for short duration, simple testing. It must not be considered for production use.
File-backed disks, as their name suggests, store pool data within a file held on a file system which is accessible to the Mayastor pod hosting that pool. Their durability depends on how they are configured; specifically, on which type of volume mount they are located. If located on a path which uses Kubernetes ephemeral storage (e.g. EmptyDir), they may be no more persistent than a RAM drive would be. However, if placed on their own Persistent Volume (e.g. a Kubernetes Host Path volume), then they may be considered 'stable'. They are slightly less convenient to use than memory-backed disks, in that the backing files must be created by the user as a separate step preceding pool creation. However, file-backed disks can be significantly larger than RAM disks, as they consume considerably less memory resource within the hosting Mayastor pod.
Creating a memory-backed disk emulation entails using the "malloc" uri scheme within the Mayastor pool resource definition.
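A sketch of such a definition, matching the example described below (the apiVersion shown is the one used by Mayastor 2.1; check the CRD version installed in your cluster):

```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: mempool-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["malloc:///malloc0?size_mb=64"]
```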
The example shown defines a pool named "mempool-1". The Mayastor pod hosted on "worker-node-1" automatically creates a 64MiB emulated disk for it to use, with the device identifier "malloc0" - provided that at least 64MiB of 2MiB-sized Huge Pages are available to that pod after the Mayastor container's own requirements have been satisfied.
The pool definition accepts URIs matching the malloc:/// schema within its disks field for the purposes of provisioning memory-based disks. The general format is:
malloc:///malloc<DeviceId>?<parameters>
Where <DeviceId> is an integer value which uniquely identifies the device on that node, and where the parameter collection <parameters> may include the following:

Parameter | Function | Value Type | Notes
---|---|---|---
size_mb | Specifies the requested size of the device in MiB | Integer | Mutually exclusive with "num_blocks"
num_blocks | Specifies the requested size of the device in terms of the number of addressable blocks | Integer | Mutually exclusive with "size_mb"
blk_size | Specifies the block size to be reported by the device in bytes | Integer (512 or 4096) | Optional. If not used, block size defaults to 512 bytes
Note: Memory-based disk devices are not over-provisioned and the memory allocated to them is so from the 2MiB-sized Huge Page resources available to the Mayastor pod. That is to say, to create a 64MiB device requires that at least 33 (32+1) 2MiB-sized pages are free for that Mayastor container instance to use. Satisfying the memory requirements of this disk type may require additional configuration on the worker node and changes to the resource request and limit spec of the Mayastor daemonset, in order to ensure that sufficient resource is available.
Mayastor can use file-based disk emulation in place of physical pool disk devices, by employing the aio:/// URI schema within the pool's declaration in order to identify the location of the file resource.
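A sketch of such a pool declaration, assuming the backing file described below already exists at /var/tmp/disk1.img on the node (the pool and node names are placeholders; apiVersion as noted earlier):

```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: filepool-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["aio:///var/tmp/disk1.img"]
```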
The examples shown seek to create a pool using a file named "disk1.img", located in the /var/tmp directory of the Mayastor container's file system, as its member disk device. For this operation to succeed, the file must already exist on the specified path (which should be the full path to the file), and this path must be accessible by the Mayastor pod instance running on the corresponding node.
The aio:/// schema requires no other parameters but optionally, "blk_size" may be specified. Block size accepts a value of either 512 or 4096, corresponding to the emulation of either a 512-byte or 4kB sector size device. If this parameter is omitted the device defaults to using a 512-byte sector size.
File-based disk devices are not over-provisioned; to create a 10GiB pool disk device requires that a 10GiB-sized backing file exist on a file system on an accessible path.
The preferred method of creating a backing file is to use the Linux truncate command. The following example demonstrates the creation of a 1GiB-sized file named disk1.img within the directory /tmp.
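A sketch of that command:

```bash
truncate -s 1G /tmp/disk1.img
```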
The Mayastor pool metrics exporter runs as a sidecar container within every io-engine pod and exposes pool usage metrics in Prometheus format. These metrics are exposed on port 9502 using an HTTP endpoint /metrics and are refreshed every five minutes.
Name | Type | Unit | Description
---|---|---|---
disk_pool_total_size_bytes | Gauge | Integer | Total size of the pool
disk_pool_used_size_bytes | Gauge | Integer | Used size of the pool
disk_pool_status | Gauge | Integer | Status of the pool (0, 1, 2, 3) = {"Unknown", "Online", "Degraded", "Faulted"}
disk_pool_committed_size | Gauge | Integer | Committed size of the pool in bytes

The following volume usage metrics are reported for Mayastor volumes via the kubelet:

Name | Type | Unit | Description
---|---|---|---
kubelet_volume_stats_available_bytes | Gauge | Integer | Size of the available/usable volume (in bytes)
kubelet_volume_stats_capacity_bytes | Gauge | Integer | The total size of the volume (in bytes)
kubelet_volume_stats_used_bytes | Gauge | Integer | Used size of the volume (in bytes)
kubelet_volume_stats_inodes | Gauge | Integer | The total number of inodes
kubelet_volume_stats_inodes_free | Gauge | Integer | The total number of usable inodes
kubelet_volume_stats_inodes_used | Gauge | Integer | The total number of inodes that have been utilized to store metadata
To install, add the Prometheus-stack Helm chart repository and update it.
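A sketch of those steps, using the upstream prometheus-community chart repository:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```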
Then, install the Prometheus monitoring stack and set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues to false. This enables Prometheus to discover custom ServiceMonitor for Mayastor.
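For example (the release name prometheus and the mayastor namespace are assumptions; adjust to your environment):

```bash
helm install prometheus prometheus-community/kube-prometheus-stack -n mayastor \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
```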
Next, install the ServiceMonitor resource to select services and specify their underlying endpoint objects.
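A minimal sketch of such a ServiceMonitor; the label selector and port name used here are assumptions and must match the labels and port exposed by the pool-metrics-exporter Service in your deployment:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mayastor-monitoring
  labels:
    app: mayastor
spec:
  selector:
    matchLabels:
      app: mayastor
  endpoints:
    - port: metrics
```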
Upon successful integration of the exporter with the Prometheus stack, the metrics will be available on the port 9090 and HTTP endpoint /metrics.
The supportability tool collects Mayastor specific information from the cluster using the kubectl plugin command-line tool. It uses the dump command, which interacts with the Mayastor services to build an archive (ZIP) file that acts as a placeholder for the bundled information.
To bundle Mayastor's complete system information, execute:
To view all the available options and sub-commands that can be used with the dump command, execute:
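A sketch of these commands using the kubectl-mayastor plugin (the output directory flag shown is an assumption; confirm with --help):

```bash
kubectl mayastor dump system -d <output_directory> -n mayastor
kubectl mayastor dump --help
```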
Note: The information collected by the supportability tool is solely used for debugging purposes. The content of these files is human-readable and can be reviewed, deleted, or redacted as necessary to adhere to the organization's data protection/privacy commitments and security policies before transmitting the bundles. Please refer to the section: 'Does the supportability tool expose sensitive data' to get more details.
The archive files generated by the dump command are stored in the specified output directories. The tables below specify the path and the content that will be stored in each archive file.
Folder Path | Mayastor Resource | Node Name | File Name | Description
---|---|---|---|---
./topology/node | node | node-01 | node-01-topology.json | Topology of node-01 (all node topologies will be available here)
./topology/pool | pool | pool-01 | pool-01-topology.json | Topology of pool-01 (all pool topologies will be available here)
./topology/volume | volume | volume-01 | volume-01-topology.json | Topology information of volume-01 (all volume topologies will be available here)

Folder Path | Host Name (optional) | Component | File Name
---|---|---|---
./logs/core-agents | - | agent-core | loki-agent-core.log
./logs/rest | - | api-rest | loki-api-rest.log
./logs/csi-controller | - | csi-attacher | loki-csi-attacher.log
./logs/csi-controller | - | csi-controller | loki-csi-controller.log
./logs/csi-controller | - | csi-provisioner | loki-csi-provisioner.log
./logs/diskpool-operator | - | operator-diskpool | loki-operator-disk-pool.log
./logs/mayastor | node-02 | csi-driver-registrar | node-02-loki-csi-driver-registrar.log
./logs/mayastor | node-01 | csi-node | node-01-loki-csi-node.log
./logs/mayastor | node-01 | io-engine | node-01-loki-mayastor.log
./logs/mayastor | node-02 | io-engine | node-02-loki-mayastor.log
./logs/etcd | node-03 | etcd | node-03-loki-etcd.log

Folder Path | Component | File Name
---|---|---
./k8s_resources/configurations/ | agent-core (Deployment) | mayastor-agent-core.yaml
./k8s_resources/configurations/ | api-rest | mayastor-api-rest.yaml
./k8s_resources/configurations/ | csi-controller (Deployment) | mayastor-csi-controller.yaml
./k8s_resources/configurations/ | csi-node (DaemonSet) | mayastor-csi-node.yaml
./k8s_resources/configurations/ | etcd (StatefulSet) | mayastor-etcd.yaml
./k8s_resources/configurations/ | loki (StatefulSet) | mayastor-loki.yaml
./k8s_resources/configurations/ | operator-diskpool | mayastor-operator-disk-pool.yaml
./k8s_resources/configurations/ | promtail (DaemonSet) | mayastor-promtail.yaml
./k8s_resources/configurations/ | io-engine (DaemonSet) | io-engine.yaml
./k8s_resources/configurations/ | disk_pools | k8s_diskPools.yaml
./k8s_resources/configurations/ | events | k8s_events.yaml
./k8s_resources/configurations/ | all pods (deployed under the same namespace as Mayastor) | pods.yaml

Folder Path | Component | File Name
---|---|---
./ | etcd | etcd_dump
./ | Support-tool | support_tool_logs.log
The supportability tool generates support bundles, which are used for debugging purposes. These bundles are created in response to the user's invocation of the tool and can be transmitted only by the user. Below is the information collected by the supportability tool that might be identified as 'sensitive' based on the organization's data protection/privacy commitments and security policies.
Logs: The default installation of Mayastor includes the deployment of a log aggregation subsystem based on Grafana Loki. All the pods deployed in the same namespace as Mayastor and labelled with openebs.io/logging=true will have their logs incorporated within this centralized collector. These logs may include the following information:
Kubernetes (K8s) node hostnames
IP addresses
container addresses
API endpoints (Mayastor and K8s)
Container names
K8s Persistent Volume names (provisioned by Mayastor)
DiskPool names
Block device details (except the content)
K8s Definition Files: The support bundle includes definition files for all the Mayastor components. Some of these are listed below:
Deployments
DaemonSets
StatefulSets
K8s Events: The archive files generated by the supportability tool contain information on all the events of the Kubernetes cluster present in the same namespace as Mayastor.
etcd Dump: The default installation of Mayastor deploys an etcd instance for its exclusive use. This key-value store is used to persist state information for Mayastor-managed objects. These key-value pairs are required for diagnostic and troubleshooting purposes. The etcd dump archive file consists of the following information:
Kubernetes node hostnames
IP addresses
PVC/PV names
Container names (Mayastor and user applications within the mayastor namespace)
Block device details (except data content)
Note: The list provided above is frequently reviewed and updated by the Mayastor maintainers. However, it might not be fully exhaustive, given that "sensitive information" is a subjective term.
By default, Mayastor collects some basic information related to the number and scale of user-deployed instances. The collected data is anonymous and is encrypted at rest. This data is used to understand storage usage trends, which in turn helps maintainers prioritize their contributions to maximize the benefit to the community as a whole.
No user-identifiable information, hostnames, passwords, or volume data are collected. ONLY the below-mentioned information is collected from the cluster.
A summary of the information collected is given below.

Replica information:
Replica count: This is the number of Mayastor Volume replicas in your cluster.
Average replica count per volume: This is the average number of replicas each Mayastor Volume has in your cluster.

Cluster information:
K8s cluster ID: This is a SHA-256 hashed value of the UID of your Kubernetes cluster.
K8s node count: This is the number of nodes in your Kubernetes cluster.
Product name: This field displays the name Mayastor.
Product version: This is the deployed version of Mayastor.
Deploy namespace: This is a SHA-256 hashed value of the name of the Kubernetes namespace where the Mayastor Helm chart is deployed.
Storage node count: This is the number of nodes on which the Mayastor I/O engine is scheduled.

Pool information:
Pool count: This is the number of Mayastor DiskPools in your cluster.
Pool maximum size: This is the capacity of the Mayastor DiskPool with the highest capacity.
Pool minimum size: This is the capacity of the Mayastor DiskPool with the lowest capacity.
Pool mean size: This is the average capacity of the Mayastor DiskPools in your cluster.
Pool capacity percentiles: This calculates and returns the capacity distribution of Mayastor DiskPools for the 50th, 75th and 90th percentiles.

Volume information:
Volume count: This is the number of Mayastor Volumes in your cluster.
Volume minimum size: This is the capacity of the Mayastor Volume with the lowest capacity.
Volume mean size: This is the average capacity of the Mayastor Volumes in your cluster.
Volume capacity percentiles: This calculates and returns the capacity distribution of Mayastor Volumes for the 50th, 75th and 90th percentiles.
To disable the collection of data metrics from the cluster, add the --set obs.callhome.enabled=false flag to the Helm install command. The Helm command, along with the flag, can either be executed during installation or re-executed post-installation.
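For an existing installation, one way to re-apply the setting (the release name and namespace are assumptions):

```bash
helm upgrade mayastor mayastor/mayastor -n mayastor --reuse-values --set obs.callhome.enabled=false
```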
The collected information is stored on behalf of the OpenEBS project by DataCore Software Inc. in data centers located in Texas, USA.
Mayastor 2.0 enhances High Availability (HA) of the volume target with the nexus switch-over feature. In the event of a target failure, the switch-over feature quickly detects the failure and spawns a new nexus to ensure I/O continuity. The HA feature consists of two components: the HA node agent (which runs in each csi-node pod) and the cluster agent (which runs alongside the agent-core). The HA node agent looks for I/O path failures from applications to their corresponding targets. If any such broken path is encountered, the HA node agent informs the cluster agent. The cluster agent then creates a new target on a different (live) node. Once the target is created, the node agent establishes a new path between the application and its corresponding target. The HA feature restores the broken path within seconds, ensuring negligible downtime.
The volume's replica count must be higher than 1 for a new target to be established as part of switch-over.
We strongly recommend keeping this feature enabled.
The HA feature is enabled by default; to disable it, pass the parameter --set=agents.ha.enabled=false with the helm install command.
The native NVMe-oF CAS engine of OpenEBS
Mayastor is a performance-optimised "Container Attached Storage" (CAS) solution of the CNCF project OpenEBS. The goal of OpenEBS is to extend Kubernetes with a declarative data plane, providing flexible persistent storage for stateful applications.
Design goals for Mayastor include:
Highly available, durable persistence of data
To be readily deployable and easily managed by autonomous SRE or development teams
To be a low-overhead abstraction for NVMe-based storage devices
Mayastor incorporates Intel's Storage Performance Development Kit (SPDK). It has been designed from the ground up to leverage the protocol and compute efficiency of NVMe-oF semantics, and the performance capabilities of the latest generation of solid-state storage devices, in order to deliver a storage abstraction with performance overhead measured to be within the range of single-digit percentages.
By comparison, most "shared everything" storage systems are widely thought to impart an overhead of at least 40% (and sometimes as much as 80% or more) as compared to the capabilities of the underlying devices or cloud volumes; additionally traditional shared storage scales in an unpredictable manner as I/O from many workloads interact and compete for resources.
While Mayastor utilizes NVMe-oF it does not require NVMe devices or cloud volumes to operate and can work well with other device types.
Mayastor's source code and documentation are distributed amongst a number of GitHub repositories under the OpenEBS organisation. The following list describes some of the main repositories but is not exhaustive.
mayastor: contains the source code of the data plane components
mayastor-control-plane: contains the source code of the control plane components
mayastor-api: contains common protocol buffer definitions and OpenAPI specifications for Mayastor components
mayastor-dependencies: contains dependencies common to the control and data plane repositories
mayastor-extensions: contains components and utilities that provide extended functionalities like ease of installation, monitoring and observability aspects
mayastor-docs: contains Mayastor's user documentation
The correct set of log files to collect depends on the nature of the problem. If unsure, then it is best to collect log files for all Mayastor containers. In nearly every case, the logs of all of the control plane component pods will be needed:
csi-controller
core-agent
rest
msp-operator
Mayastor containers form the data plane of a Mayastor deployment. A cluster should schedule as many mayastor container instances as storage nodes have been defined. This log file is most useful when troubleshooting I/O errors; however, provisioning and management operations might also fail because of a problem on a storage node.
If experiencing problems with (un)mounting a volume on an application node, this log file can be useful. Generally all worker nodes in the cluster will be configured to schedule a mayastor CSI agent pod, so it's good to know which specific node is experiencing the issue and inspect the log file only for that node.
These containers implement the CSI spec for Kubernetes and run within the same pods as the csi-controller and mayastor-csi (node plugin) containers. Whilst they are not part of Mayastor's code, they can contain useful information when a Mayastor CSI controller/node plugin fails to register with the Kubernetes cluster.
A coredump is a snapshot of process' memory combined with auxiliary information (PID, state of registers, etc.) and saved to a file. It is used for post-mortem analysis and it is generated automatically by the operating system in case of a severe, unrecoverable error (i.e. memory corruption) causing the process to panic. Using a coredump for a problem analysis requires deep knowledge of program internals and is usually done only by developers. However, there is a very useful piece of information that users can retrieve from it and this information alone can often identify the root cause of the problem. That is the stack (backtrace) - a record of the last action that the program was performing at the time when it crashed. Here we describe how to get it. The steps as shown apply specifically to Ubuntu, other linux distros might employ variations.
We rely on systemd-coredump, which saves and manages coredumps on the system; the coredumpctl utility, which is part of the same package; and, finally, the gdb debugger. If installed correctly, the global core pattern will be set so that all generated coredumps are piped to the systemd-coredump binary.
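A minimal sketch for verifying this setup on an Ubuntu node (package names as on Ubuntu):

```bash
# Install the tooling and confirm that coredumps are piped to systemd-coredump.
sudo apt-get install -y systemd-coredump gdb
cat /proc/sys/kernel/core_pattern   # should contain a pipe to systemd-coredump
coredumpctl list                    # lists the coredumps recorded on this node
```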
If there is a new coredump from the mayastor container, the coredump alone won't be that useful. GDB needs access to the binary of the crashed process in order to print at least some information in the backtrace. For that, we need to copy the contents of the container's filesystem to the host.
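A hedged sketch, assuming a Docker container runtime (with containerd or CRI-O the equivalent export steps differ):

```bash
# Copy the io-engine container's root filesystem to the host so that GDB can
# later resolve the crashed binary and its libraries.
CONTAINER_ID=$(docker ps | grep io-engine | awk '{print $1}' | head -n1)
mkdir -p /tmp/rootfs
docker cp "$CONTAINER_ID":/ /tmp/rootfs
```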
Now we can start GDB. Don't use the coredumpctl command to start the debugger: it invokes GDB with an invalid path to the debugged binary, so stack unwinding fails for Rust functions. First, extract the compressed coredump.
Once in GDB we need to set a sysroot so that GDB knows where to find the binary for the debugged program.
After that we can print backtrace(s).
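A consolidated sketch of these steps (the coredump PID, output path, and the binary path inside the copied root filesystem are placeholders/assumptions):

```bash
coredumpctl dump <PID> --output=/tmp/core   # extract the compressed coredump
gdb /tmp/rootfs/bin/io-engine /tmp/core     # start GDB directly, not via coredumpctl
# Inside GDB:
#   (gdb) set sysroot /tmp/rootfs
#   (gdb) bt
#   (gdb) thread apply all bt
```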
A legacy installation of Mayastor (1.0.5 and below) cannot be seamlessly upgraded and needs manual intervention. Follow the steps below if you wish to upgrade from Mayastor 1.0.x to Mayastor 2.1.0 or above. Mayastor uses etcd as a persistent datastore for its configuration. As a first step, take a snapshot of etcd; the detailed steps for taking a snapshot can be found in the etcd documentation.
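For illustration, a hedged sketch of taking the snapshot from the bundled etcd pod (the pod name mayastor-etcd-0 and the mayastor namespace are assumptions):

```bash
kubectl -n mayastor exec mayastor-etcd-0 -- sh -c \
  "ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-snapshot.db"
kubectl -n mayastor cp mayastor-etcd-0:/tmp/etcd-snapshot.db ./etcd-snapshot.db
```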
As compared to Mayastor 1.0, the Mayastor 2.0 feature set introduces breaking changes in some of the components, due to which the upgrade process from 1.0 to 2.0 is not seamless. The list of such changes is given below:

etcd:
- Control Plane: the prefixes for the control plane have changed from /namespace/$NAMESPACE/control-plane/ to /openebs.io/mayastor/apis/v0/clusters/$KUBE_SYSTEM_UID/namespaces/$NAMESPACE/
- Data Plane: the nexus information containing a list of healthy children has been moved from $nexus_uuid to /openebs.io/mayastor/apis/v0/clusters/$KUBE_SYSTEM_UID/namespaces/$NAMESPACE/volume/$volume_uuid/nexus/$nexus_uuid/info

RPC:
- Control Plane: the RPC for the control plane has been changed from NATS to gRPC.
- Data Plane: the registration heartbeat has been changed from NATS to gRPC.

Pool CRDs:
- The pool CRDs have been renamed to DiskPools (previously MayastorPools).
In order to start the upgrade process, the following previously deployed components have to be deleted.
To delete the control-plane components, execute:
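A hypothetical sketch (the deployment names are assumptions, taken from the control-plane component list earlier in this guide; verify them with kubectl -n mayastor get deployments):

```bash
kubectl -n mayastor delete deployment csi-controller core-agent rest msp-operator
```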
Next, delete the associated RBAC operator. To do so, execute:
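A hypothetical sketch; the manifest path is an assumption and should be checked against the repository of the release you originally installed:

```bash
kubectl delete -f https://raw.githubusercontent.com/openebs/mayastor-control-plane/v1.x.x/deploy/operator-rbac.yaml
```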
In the above command, substitute the previously installed Mayastor version in the format v1.x.x.
Once all the above components have been successfully removed, fetch the latest Helm chart from the mayastor-extensions repository and save it to a file, say helm_templates.yaml. To do so, execute:
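A hedged sketch, assuming the chart repository was added under the name mayastor and targeting an illustrative 2.x chart version:

```bash
helm repo update
helm template mayastor mayastor/mayastor -n mayastor --version 2.0.0 > helm_templates.yaml
```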
Next, update the helm_templates.yaml file and add the following Helm label to all the resources that are being created.
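The exact label from the original instructions is elided here. Helm 3 only manages resources that carry its ownership metadata, so a reasonable assumption is the following (the release name and namespace mayastor are assumptions):

```bash
# Each resource in helm_templates.yaml is assumed to need, under 'metadata:':
#
#   labels:
#     app.kubernetes.io/managed-by: Helm
#   annotations:
#     meta.helm.sh/release-name: mayastor
#     meta.helm.sh/release-namespace: mayastor
#
# After applying the manifests, resources carrying the label can be listed with:
kubectl -n mayastor get all -l app.kubernetes.io/managed-by=Helm
```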
Copy the etcd and io-engine specs from helm_templates.yaml and save them in two separate files, say, mayastor_2.0_etcd.yaml and mayastor_io_v2.0.yaml. Once done, remove the etcd and io-engine specs from helm_templates.yaml. These components need to be upgraded separately.
Install the new control-plane components using the helm_templates.yaml file.
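For example (the mayastor namespace is an assumption):

```bash
kubectl apply -n mayastor -f helm_templates.yaml
```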
In the above method of installation, the nexus (target) High Availability (HA) is disabled by default. The steps to enable HA are described later in this document.
Verify the status of the pods. Upon successful deployment, all the pods will be in a running state.
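For example:

```bash
kubectl get pods -n mayastor
```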
Verify the etcd prefix and compat mode.
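A hedged sketch, listing the keys under the new v2 prefix from inside the etcd pod (the pod name is an assumption; the prefix is the one described in the breaking-changes list above):

```bash
kubectl -n mayastor exec mayastor-etcd-0 -- sh -c \
  "ETCDCTL_API=3 etcdctl get --prefix --keys-only /openebs.io/mayastor/apis/v0"
```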
Verify if the DiskPools are online.
Next, verify the status of the volumes.
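A sketch of both checks (the dsp short name for the DiskPool CRD and the plugin invocation are assumptions based on the rest of this guide):

```bash
kubectl -n mayastor get dsp          # DiskPools should report an Online state
kubectl mayastor get volumes         # volumes should be listed as Online
```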
After upgrading the control-plane components, the data-plane pods have to be upgraded. To do so, deploy the io-engine DaemonSet from Mayastor's new version.
Using the command given below, the data-plane pods (now io-engine pods) will be upgraded to Mayastor v2.0.
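For example, reusing the io-engine spec saved earlier in this procedure:

```bash
kubectl apply -n mayastor -f mayastor_io_v2.0.yaml
```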
Delete the previously deployed data-plane pods (mayastor-xxxxx). The data-plane pods need to be deleted manually as their update strategy is set to delete. Upon successful deletion, the new io-engine pods will be up and running.
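A hypothetical sketch (the label selector is an assumption; the old 1.x data-plane pods can also be deleted by name, mayastor-xxxxx):

```bash
kubectl -n mayastor delete pods -l app=mayastor
```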
NATS has been replaced by gRPC for Mayastor versions 2.0 or later. Hence, the NATS components (StatefulSets and services) have to be removed from the cluster.
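A hypothetical sketch (resource names are assumptions; list them first with kubectl -n mayastor get statefulsets,svc | grep nats):

```bash
kubectl -n mayastor delete statefulset nats
kubectl -n mayastor delete service nats
```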
After the control-plane and io-engine components, etcd has to be upgraded. Before starting the etcd upgrade, label the etcd PV and PVCs with Helm metadata. Use the example below to create a labels.yaml file; this is needed to make them Helm compatible.
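The original labels.yaml example is elided here. An equivalent, hedged sketch applies the Helm ownership metadata directly with kubectl (the PV/PVC names are placeholders; the release name and namespace mayastor are assumptions):

```bash
kubectl -n mayastor label pvc <etcd-pvc-name> app.kubernetes.io/managed-by=Helm
kubectl -n mayastor annotate pvc <etcd-pvc-name> \
  meta.helm.sh/release-name=mayastor meta.helm.sh/release-namespace=mayastor
kubectl label pv <etcd-pv-name> app.kubernetes.io/managed-by=Helm
kubectl annotate pv <etcd-pv-name> \
  meta.helm.sh/release-name=mayastor meta.helm.sh/release-namespace=mayastor
```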
Next, deploy the etcd YAML. To deploy, execute:
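For example, reusing the etcd spec saved earlier in this procedure:

```bash
kubectl apply -n mayastor -f mayastor_2.0_etcd.yaml
```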
Now, verify the etcd space and compat mode again; the same check used earlier can be reused.
Once all the components have been upgraded, the HA module can now be enabled via the helm upgrade command.
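A hypothetical sketch; the chart reference and the HA value name are assumptions and should be verified against the chart's values.yaml:

```bash
helm upgrade mayastor mayastor/mayastor -n mayastor --reuse-values \
  --set agents.ha.enabled=true
```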
This concludes the legacy upgrade process. Run the commands below to verify the upgrade:
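A sketch of the verification commands:

```bash
kubectl get pods -n mayastor
kubectl mayastor get pools
kubectl mayastor get volumes
```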
The Mayastor process has been sent the SIGILL signal as the result of attempting to execute an illegal instruction. This indicates that the host node's CPU does not satisfy the prerequisite instruction set level for Mayastor (SSE4.2 on x86-64).
In addition to ensuring that the general prerequisites for installation are met, it is necessary to add the following directory mapping to the services_kubelet->extra_binds section of the cluster's cluster.yml file.
If this is not done, CSI socket paths won't match expected values and the Mayastor CSI driver registration process will fail, resulting in the inability to provision Mayastor volumes on the cluster.
If the disk device used by a Mayastor pool becomes inaccessible or enters the offline state, the hosting Mayastor pod may panic. A fix for this behaviour is under investigation.
When rebooting a node that runs applications mounting Mayastor volumes, the reboot can take tens of minutes. The reason is the long default NVMe controller loss timeout (ctrl_loss_tmo). The solution is to follow Kubernetes best practices and cordon the node, ensuring there aren't any application pods running on it before the reboot. The ioTimeout StorageClass parameter can be used to fine-tune the timeout.
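For illustration, a hedged sketch of a StorageClass that sets ioTimeout (repl and protocol are the parameters referenced elsewhere in this guide; the values shown are illustrative and the provisioner name should be checked against your install):

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-nvmf-iotimeout
parameters:
  repl: "1"
  protocol: nvmf
  ioTimeout: "60"
provisioner: io.openebs.csi-mayastor
EOF
```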
Deploying an application pod on a worker node which hosts Mayastor and the Prometheus exporter causes that node to restart. The issue originates from a kernel bug: once the nexus disconnects, the entries under /host/sys/class/hwmon/ should get removed, which does not happen in this case (the issue was fixed via this kernel patch).
Fix: Use kernel version 5.13 or later if deploying Mayastor in conjunction with the Prometheus metrics exporter.
Once provisioned, neither Mayastor Disk Pools nor Mayastor Volumes can be re-sized. A Mayastor Pool can have only a single block device as a member. Mayastor Volumes are exclusively thick-provisioned.
Mayastor has no snapshot or cloning capabilities.
Mayastor Volumes can be configured (or subsequently re-configured) to be composed of 2 or more "children" or "replicas", causing synchronously mirrored copies of the volume's data to be maintained on more than one worker node and Disk Pool. This contributes additional "durability" at the persistence layer, ensuring that viable copies of a volume's data remain even if a Disk Pool device is lost.
A Mayastor volume is currently accessible to an application only via a single target instance (NVMe-oF) of a single Mayastor pod. However, if that Mayastor pod ceases to run (through the loss of the worker node on which it's scheduled, execution failure, crashloopbackoff etc.) the HA switch-over module detects the failure and moves the target to a healthy worker node to ensure I/O continuity.
The ‘Mayastor kubectl plugin’ can be used to view and manage Mayastor resources such as nodes, pools and volumes. It is also used for operations such as scaling the replica count of volumes.
The Mayastor kubectl plugin is available for the Linux platform. The binary for the plugin can be found here.
Add the downloaded Mayastor kubectl plugin under $PATH.
To verify the installation, execute:
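A minimal sketch of installing the binary and confirming that kubectl can discover it:

```bash
chmod +x kubectl-mayastor
sudo mv kubectl-mayastor /usr/local/bin/
kubectl plugin list            # should include kubectl-mayastor
kubectl mayastor get volumes   # basic smoke test against the cluster
```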
The plugin can be used to get the following resource information:
Volumes
Pools
Nodes
All the above resource information can be retrieved for a particular resource using its ID. The command to do so is as follows: kubectl mayastor get <resource_name> <resource_id>
The Mayastor kubectl plugin can also be used for performing the following operations (examples follow the list below):
Scaling the replica count of a volume
Retrieving resource specs in any desired format
Retrieving replica topology for specific volumes.
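Hedged examples of these operations (the volume UUID is a placeholder; subcommand names should be confirmed with kubectl mayastor --help):

```bash
kubectl mayastor scale volume <volume-uuid> 3                 # scale the replica count
kubectl mayastor get volumes -o yaml                          # output in a desired format
kubectl mayastor get volume-replica-topology <volume-uuid>    # replica topology of a volume
```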
The plugin requires access to the Mayastor REST server for execution. It gets the master node IP from the kube-config file. In case of any failure, the REST endpoint can be specified using the '--rest' flag.
The plugin currently does not have authentication support.
The plugin can operate only over HTTP.
When a Mayastor volume is provisioned based on a StorageClass which has a replication factor greater than one (set by its repl parameter), the control plane will attempt to maintain, through a 'Kubernetes-like' reconciliation loop, that number of identical copies of the volume's data ("replicas"; a replica is a nexus target "child") at any point in time. When a volume is first provisioned, the control plane will attempt to create the required number of replicas, whilst adhering to its internal heuristics for their location within the cluster (discussed shortly). If it succeeds, the volume will become available and will bind with the PVC. If the control plane cannot identify a sufficient number of eligible Mayastor Pools in which to create the required replicas at the time of provisioning, the operation will fail; the Mayastor Volume will not be created and the associated PVC will not be bound. Kubernetes will periodically retry the volume creation and, if at any time the appropriate number of pools can be selected, the volume provisioning should succeed.
Once a volume is processing I/O, each of its replicas will also receive I/O. Reads are distributed round-robin across replicas, whilst writes must be written to all. In a real-world environment this is attended by the possibility that I/O to one or more replicas might fail at any time. Possible reasons include transient loss of network connectivity, node reboots, and node or disk failure. If a volume's nexus (NVMe controller) encounters 'too many' failed I/Os for a replica, then that replica's child status will be marked Faulted and it will no longer receive I/O requests from the nexus. It will remain a member of the volume, whose departure from the desired state with respect to replica count will be reflected by a volume status of Degraded. How many I/O failures are considered "too many" in this context is outside the scope of this discussion.
The control plane will first 'retire' the old, faulted replica, which will then no longer be associated with the volume. Once retired, a replica becomes available for garbage collection (deletion from the Mayastor Pool containing it), assuming that the nature of the failure was such that the pool itself is still viable (i.e. the underlying disk device is still accessible). The control plane will then attempt to restore the desired state (replica count) by creating a new replica, following its replica placement rules. If it succeeds, the nexus will "rebuild" that new replica, performing a full copy of all data from a healthy (Online) replica, i.e. the source. This process can proceed whilst the volume continues to process application I/O, although it will contend for disk throughput at both the source and destination disks.
If a nexus is cleanly restarted, i.e. the Mayastor pod hosting it restarts gracefully, then with the assistance of the control plane it will 'recompose' itself; all of the previously healthy member replicas will be re-attached to it. If previously faulted replicas are available to be re-connected (Online), then the control plane will attempt to reuse and rebuild them directly, rather than seek replacements for them first. This edge case therefore does not result in the retirement of the affected replicas; they are simply reused. If the rebuild fails, the process described above is followed: the Faulted replica is removed and a new one is added. On an unclean restart (i.e. the Mayastor pod hosting the nexus crashes or is forcefully deleted), only one healthy replica will be re-attached and all other replicas will eventually be rebuilt.
Once provisioned, the replica count of a volume can be changed using the kubectl-mayastor plugin's scale subcommand. The value of the num_replicas field may be either increased or decreased by one, and the control plane will attempt to satisfy the request by creating or destroying a replica as appropriate, following the same replica placement rules described herein. If the replica count is reduced, faulted replicas are selected for removal in preference to healthy ones.
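For example (hedged; the volume UUID is a placeholder), increasing a volume's replica count to 3:

```bash
kubectl mayastor scale volume <volume-uuid> 3
```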
Accurate predictions of the behaviour of Mayastor with respect to replica placement and management of replica faults can be made by reference to these 'rules', which are a simplified representation of the actual logic:
"Rule 1": A volume can only be provisioned if the replica count (and capacity) of its StorageClass can be satisfied at the time of creation
"Rule 2": Every replica of a volume must be placed on a different Mayastor Node)
"Rule 3": Children with the state Faulted
are always selected for retirement in preference to those with state Online
N.B.: By application of the 2nd rule, replicas of the same volume cannot exist within different pools on the same Mayastor Node.
A cluster has two Mayastor nodes deployed, "Node-1" and "Node-2". Each Mayastor node hosts two Mayastor pools and currently, no Mayastor volumes have been defined. Node-1 hosts pools "Pool-1-A" and "Pool-1-B", whilst Node-2 hosts "Pool-2-A" and "Pool-2-B". When a user creates a PVC from a StorageClass which defines a replica count of 2, the Mayastor control plane will seek to place one replica on each node (it 'follows' Rule 2). Since in this example it can find a suitable candidate pool with sufficient free capacity on each node, the volume is provisioned and becomes "healthy" (Rule 1). Pool-1-A is selected on Node-1, and Pool-2-A is selected on Node-2 (all pools being of equal capacity and replica count in this initial 'clean' state).
Sometime later, the physical disk of Pool-2-A encounters a hardware failure and goes offline. The volume is in use at the time, so its nexus (NVMe controller) starts to receive I/O errors for the replica hosted in that Pool. The associated replica's child from Pool-2-A enters the Faulted state and the volume state becomes Degraded (as seen through the kubectl-mayastor plugin).
Expected Behaviour: The volume will maintain read/write access for the application via the remaining healthy replica. The faulty replica from Pool-2-A will be removed from the nexus, thus changing the nexus state to Online as the remaining child is healthy. A new replica is created on Pool-2-B (Pool-2-A being unavailable) and added to the nexus. The new replica child is rebuilt and eventually the state of the volume returns to Online.
A cluster has three Mayastor nodes deployed, "Node-1", "Node-2" and "Node-3". Each Mayastor node hosts one pool: "Pool-1" on Node-1, "Pool-2" on Node-2 and "Pool-3" on Node-3. No Mayastor volumes have yet been defined; the cluster is 'clean'. A user creates a PVC from a StorageClass which defines a replica count of 2. The control plane determines that it is possible to accommodate one replica within the available capacity of each of Pool-1 and Pool-2, and so the volume is created. An application is deployed on the cluster which uses the PVC, so the volume receives I/O.
Unfortunately, due to user error, the SAN LUN which is used to persist Pool-2 becomes detached from Node-2, causing I/O failures in the replica which it hosts for the volume. As with the first scenario, the volume state becomes Degraded and the affected child's state becomes Faulted.
Expected Behaviour: Since there is a Mayastor pool on Node-3 which has sufficient capacity to host a replacement replica, a new replica can be created (Rule 2: this 'third' incoming replica isn't located on either of the nodes that host the two original ones). The faulted replica in Pool-2 is retired from the nexus, and a new replica is created on Pool-3 and added to the nexus. The new replica is rebuilt and eventually the state of the volume returns to Online.
In the cluster from the previous scenario, sometime after the Mayastor volume has returned to the Online state, a user scales up the volume, increasing the num_replicas value from 2 to 3. Before doing so, they corrected the SAN misconfiguration and ensured that the pool on Node-2 was Online.
Expected Behaviour: The control plane will attempt to reconcile the difference between the current (replicas = 2) and desired (replicas = 3) states. Since Node-2 no longer hosts a replica for the volume (the previously faulted replica was successfully retired and is no longer a member of the volume's nexus), the control plane will select it to host the new replica required (Rule 2 permits this). The volume state will initially become Degraded to reflect the difference in actual vs required redundant data copies, but a rebuild of the new replica will be performed and eventually the volume state will be Online again.
A cluster has three Mayastor nodes deployed: "Node-1", "Node-2" and "Node-3". Each Mayastor node hosts two Mayastor pools. Node-1 hosts pools "Pool-1-A" and "Pool-1-B", Node-2 hosts "Pool-2-A" and "Pool-2-B", and Node-3 hosts "Pool-3-A" and "Pool-3-B". A single volume exists in the cluster, which has a replica count of 3. The volume's replicas are all healthy and are located on Pool-1-A, Pool-2-A and Pool-3-A. An application is using the volume, so all replicas are receiving I/O.
The host Node-3 goes down causing failure of all I/O sent to the replica it hosts from Pool-3-A.
Expected Behaviour: The volume will enter and remain in the Degraded state. The child associated with the replica from Pool-3-A will be in the state Faulted, as observed in the volume through the kubectl-mayastor plugin. Said replica will be removed from the nexus, thus changing the nexus state to Online as the other replicas are healthy. The replica will then be disowned from the volume (it won't be possible to delete it since the host is down). Since Rule 2 dictates that every replica of a volume must be placed on a different Mayastor Node, no new replica can be created at this point and the volume remains Degraded indefinitely.
Given the post-host failure situation of Scenario four, the user scales down the volume, reducing the value of num_replicas from 3 to 2.
Expected Behaviour: The control plane will reconcile the actual (replicas = 3) vs desired (replicas = 2) state of the volume. The volume state will become Online again.
In Scenario five, after scaling down the Mayastor volume, the user waits for the volume state to become Online again. The desired and actual replica counts are now 2. The volume's replicas are located in pools on both Node-1 and Node-2. Node-3 is now back up and its pools Pool-3-A and Pool-3-B are Online. The user then scales the volume again, increasing num_replicas from 2 to 3.
Expected Behaviour: The volume's state will become Degraded, reflecting the difference between desired and actual replica count. The control plane will select a pool on Node-3 as the location for the new replica, since Node-3 no longer hosts a replica of this volume and has online pools with sufficient capacity.
Mayastor will fully utilize each CPU core that it was configured to run on. It will spawn a thread on each and the thread will run in an endless loop serving tasks dispatched to it without sleeping or blocking. There are also other Mayastor threads that are not bound to the CPU and those are allowed to block and sleep. However, the bound threads (also called reactors) rely on being interrupted by the kernel and other userspace processes as little as possible. Otherwise, the latency of IO may suffer.
Ideally, the only thing interrupting Mayastor's reactors would be the kernel's time-based interrupts responsible for CPU accounting. However, that is far from trivial. The isolcpus option that we will be using does not prevent:
kernel threads, and
other Kubernetes pods
from running on the isolated CPUs. However, it does prevent system services, including kubelet, from interfering with Mayastor.
Note that the best way to accomplish this step may differ, based on the Linux distro that you are using.
Add the isolcpus kernel boot parameter to GRUB_CMDLINE_LINUX_DEFAULT in the GRUB configuration file, with a value which identifies the CPUs to be isolated (indexing starts from zero here). The location of the configuration file to change is typically /etc/default/grub, but may vary. For example, when running Ubuntu 20.04 in the AWS EC2 cloud, the boot parameters are in /etc/default/grub.d/50-cloudimg-settings.cfg.
In the following example we assume a system with 4 CPU cores in total, and that the third and the fourth CPU cores are to be dedicated to Mayastor.
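A sketch for Ubuntu with GRUB; the existing parameters in your configuration file will differ:

```bash
# Edit /etc/default/grub (or the cloud-image override mentioned above) so that
# the default kernel command line isolates CPUs 2 and 3 (the third and fourth
# cores, zero-indexed), for example:
#
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=2,3"
#
# Then regenerate the GRUB configuration and reboot:
sudo update-grub
sudo reboot
```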
Basic verification is by outputting the boot parameters of the currently running kernel:
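For example:

```bash
cat /proc/cmdline
```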
You can also print a list of isolated CPUs:
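For example:

```bash
cat /sys/devices/system/cpu/isolated
```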
Set the list of CPU cores that the Mayastor reactors should run on. In older deployments this meant editing the mayastor-daemonset.yaml file and setting the -l parameter of mayastor; when installing via the Helm chart, the same is achieved through the io_engine.coreList value. In the following example we run mayastor on the third and fourth CPU cores:
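A hedged sketch using the Helm value referred to below (the chart reference and release name are assumptions; the core list should name the cores you isolated):

```bash
helm upgrade mayastor mayastor/mayastor -n mayastor --reuse-values \
  --set 'io_engine.coreList={3,4}'
```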
In the above command, io_engine.coreList={3,4} specifies that Mayastor's reactors should operate on the third and fourth CPU cores.
Mayastor's storage engine supports synchronous mirroring to enhance the durability of data at rest within whatever physical persistence layer is in use. When volumes are provisioned which are configured for replication (a user can control the count of active replicas which should be maintained, on a per StorageClass basis), write I/O operations issued by an application to that volume are amplified by its controller ("nexus") and dispatched to all its active replicas. Only if every replica completes the write successfully on its own underlying block device will the I/O completion be acknowledged to the controller. Otherwise, the I/O is failed and the caller must make its own decision as to whether it should be retried. If a replica is determined to have faulted (I/O cannot be serviced within the configured timeout period, or not without error), the control plane will automatically take corrective action and remove it from the volume. If spare capacity is available within a Mayastor pool, a new replica will be created as a replacement and automatically brought into synchronisation with the existing replicas. The data path for a replicated volume is described in more detail here
This Mayastor documentation contains sections which are focused on initial, 'quickstart' deployment scenarios, including the correct configuration of underlying hardware and software, and of Mayastor features such as "Storage Nodes" (MSNs) and "Disk Pools" (MSPs). Information describing tuning for the optimisation of performance is also provided.
Mayastor has been built to leverage the performance potential of contemporary, high-end, solid state storage devices as a foremost design consideration. For this reason, the I/O path is predicated on NVMe, a transport which is both highly CPU efficient and which demonstrates highly linear resource scaling. The data path runs entirely within user space, also contributing efficiency gains as syscalls are avoided, and is both interrupt and lock free.
MayaData has performed its own benchmarking tests in collaboration with Intel, using latest generation Intel P5800X Optane devices "The World's Fastest Data Centre SSD". In those tests it was determined that, on average, across a range of read/write ratios and both with and without synchronous mirroring enabled, the overhead imposed by the Mayastor I/O path was well under 10% (in fact, much closer to 6%).
Further information regarding the testing performed may be found here
Mayastor makes use of parts of the open source Storage Performance Development Kit (SPDK) project, contributed by Intel. Mayastor's Storage Pools use the SPDK's Blobstore structure as their on-disk persistence layer. Blobstore structures and layout are documented.
Since the replicas (data copies) of Mayastor volumes are held entirely within Blobstores, it is not possible to directly access the data held on a pool's block devices from outside of Mayastor. Equally, Mayastor cannot directly 'import' and use existing volumes which aren't of Mayastor origin. The project's maintainers are considering alternative options for the persistence layer which may support such data migration goals.
The size of a Mayastor Pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions.
The replica placement logic of Mayastor's control plane doesn't permit replicas of the same volume to be placed onto the same node, even if it were to be within different Disk Pools. For example, if a volume with replication factor 3 is to be provisioned, then there must be three healthy Disk Pools available, each with sufficient free capacity and each located on its own Mayastor node. Further enhancements to topology awareness are under consideration by the maintainers.
The Mayastor kubectl plugin is used to obtain this information.
No. This may be a feature of future releases.
Mayastor does not perform asynchronous replication.
Mayastor pools do not implement any form of RAID, erasure coding or striping. If higher levels of data redundancy are required, Mayastor volumes can be provisioned with a replication factor of greater than 1, which will result in synchronously mirrored copies of their data being stored in multiple Disk Pools across multiple Storage Nodes. If the block device on which a Disk Pool is created is actually a logical unit backed by its own RAID implementation (e.g. a Fibre Channel attached LUN from an external SAN) it can still be used within a Mayastor Disk Pool whilst providing protection against physical disk device failures.
No.
No but these may be features of future releases.
Mayastor nightly builds and releases are compiled and tested on x86-64, under Ubuntu 20.04 LTS with a 5.13 kernel. Some effort has been made to allow compilation on ARM platforms but this is currently considered experimental and is not subject to integration or end-to-end testing by Mayastor's maintainers.
Minimum hardware requirements are discussed in the quickstart section of this documentation.
Mayastor does not run on Raspberry Pi, as the version of the SPDK used by Mayastor requires ARMv8 Crypto extensions which are not currently available for the Pi.
Mayastor, like any other solution leveraging TCP for network transport, may suffer from network congestion, as TCP will try to slow down transfer speeds. It is important to keep an eye on networking and fine-tune the TCP/IP stack as appropriate. This tuning can include (but is not limited to) send and receive buffers, MSS, congestion control algorithms (e.g. you may try DCTCP), etc.
Mayastor has been designed to be able to leverage the performance capabilities of contemporary high-end solid-state storage devices. A significant aspect of this is the selection of a polling-based I/O service queue, rather than an interrupt-driven one. This minimises the latency introduced into the data path, but at the cost of additional CPU utilisation by the "reactor", the poller operating at the heart of the Mayastor pod. When Mayastor pods have been deployed to a cluster, it is expected that these daemonset instances will make full utilisation of their CPU allocation, even when there is no I/O load on the cluster. This is simply the poller continuing to operate at full speed, waiting for I/O. For the same reason, it is recommended that when configuring the CPU resource limits for the Mayastor daemonset, only full, not fractional, CPU limits are set; fractional allocations will also incur additional latency, resulting in a reduction in overall performance potential. The extent to which this performance degradation is noticeable in practice will depend on the performance of the underlying storage in use, as well as whatever other bottlenecks/constraints may be present in the system as configured.
The supportability tool generates support bundles, which are used for debugging purposes. These bundles are created in response to the user's invocation of the tool and can be transmitted only by the user. To view the list of collected information, visit the supportability section.
What happens when a PV with the reclaim policy 'retain' is deleted?
In Kubernetes, when a PVC is created with the reclaim policy set to 'Retain', the PV bound to this PVC is not deleted even if the PVC is deleted. One can manually delete the PV by issuing the command "kubectl delete pv <pv-name>"; however, the underlying storage resources could be left behind, as the CSI volume provisioner (external provisioner) is not aware of the deletion. To resolve this issue of dangling storage objects, Mayastor has introduced a PV garbage collector. This PV garbage collector is deployed as a part of the Mayastor CSI controller-plugin.
The PV garbage collector deploys a watcher component, which subscribes to the Kubernetes Persistent Volume deletion events. When a PV is deleted, an event is generated by the Kubernetes API server and is received by this component. Upon a successful validation of this event, the garbage collector deletes the corresponding Mayastor volume resources.