Kubernetes

The Delphix Kubernetes (K8s) Driver enables Delphix administrators and application teams to provision and manage virtual datasets in containerized applications that are orchestrated in Kubernetes clusters. The virtual datasets provided by Delphix can endure even though the Kubernetes infrastructure and applications are ephemeral. This allows you to remain in complete control of your test data without worrying about rising infrastructure costs or mismanaged data.

This solution mounts a pre-populated filesystem into a user-specified container pod. Internally, the solution leverages the standard Kubernetes concepts of Persistent Volume (PV) and Persistent Volume Claim (PVC). The Delphix K8s driver, which follows Container Storage Interface (CSI) driver standards, is generalized and thus capable of supporting any database product that can be launched from a container and expects a single filesystem for its persistent storage tier. Delphix’s virtualization capabilities are integrated tightly into common Kubernetes commands.
[Figure: Kubernetes architecture]

You can expect the following benefits after implementing this solution:

  • Reduced infrastructure costs: The Delphix K8s Driver eliminates the need for target environments potentially saving infrastructure costs. In addition, integration with Kubernetes ensures the necessary infrastructure is always “right-sized” without any wasted resources, regardless of the provider.

  • Improved agility: The driver is fully compatible with standard Kubernetes toolsets, such as Helm and kubectl. This means application teams and administrators can easily extend their existing processes and automation. You can quickly gain access to ephemeral, production-like datasets without context switching, manual intervention, and significant overhead.

  • Centralized governance model: All datasets, containerized or not, are managed by the Delphix DevOps Data Platform. IT Administrators do not need to manage multiple toolsets that interface with different cloud and legacy platforms. Dataset access, authorization, and operations are managed within a single platform.

Getting started

Perform the following steps to implement the Delphix K8s Driver.

System requirements

The following components are required to configure the Delphix K8s Driver:

Compatibility

The Kubernetes solution is composed of a K8s Driver and a K8s Connector. These two components can be managed separately. Use the table below to ensure compatible versions are installed together.

| Release | Driver | Connector |
|---------|--------|-----------|
| v1.3.0  | 1.3.0  | 1.0.0     |
| v1.2.0  | 1.2.0  | 1.0.0     |
| v1.1.0  | 1.1.0  | 1.0.0     |
| v1.0.0  | 1.0.0  | 1.0.0     |

Prerequisites

[Figure: Kubernetes architecture]

  1. Install Data Control Tower (DCT) and Delphix Continuous Data. Ensure DCT is connected to Delphix Continuous Data and syncing properly. For more information, refer to Data Control Tower and Delphix Continuous Data Engine’s documentation.

  2. Install the Kubernetes Driver plugin into Delphix Continuous Data. For more information, refer to the Installation section.

  3. Ingest the source data into Delphix Continuous Data. For more information, refer to the Ingestion architecture section.

  4. Install the Kubernetes driver and create the storage class. For more information, refer to the Installation section.

  • The application deployment environment must have access to the five underlying images of the Delphix K8s Driver (1 image provided by Delphix and 4 sidecar images provided by Kubernetes). These images are made available on internet-hosted repositories.

  • The storage class only has to be created once per Kubernetes cluster and/or application. It can be ignored in subsequent deployments if the same configuration details are expected.
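To confirm the prerequisites are in place, you can ask the cluster for the storage class and the registered CSI driver. This is a sketch: it assumes the storage class is named de-fs-storage, as in the examples below; the driver name defs.csi.delphix.com is taken from the VolumeSnapshotClass sample later on this page.

```shell
# Verify the Delphix storage class created during installation exists
kubectl get storageclass de-fs-storage

# Verify the Delphix CSI driver is registered with the cluster
kubectl get csidriver defs.csi.delphix.com
```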


Kubectl

Once the prerequisites have been completed, perform the below steps via kubectl. The first set of steps demonstrates how to provision the virtual database (request the PVC) and then enable it (claim the PVC), while the second demonstrates how to shut it down.

Provision and enable

  1. Create a Persistent Volume Claim (PVC).

    kubectl apply -f <pvc-manifest>.yaml

The <pvc-manifest>.yaml file specifies the new VDB to provision based on the storage class created during the installation step. Therefore, it must include the default.annotations properties listed in the Helm chart parameters section and the storageClassName.

Provisioning from a bookmark is made available through the dataSource object. See the Bookmark and snapshots section below for more details.

For example,

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vdb
  annotations:
    sourceDBName: aks-pg12
    envName: surrogate-host-env
    vdbGroupName: Untitled 
    vdbRepositoryName: Empty vFile Repository
    vdbStageMountpath: /mnt/test-vdb
    engineName: myengine.delphix.com
spec:
  storageClassName: de-fs-storage
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 5Gi
  2. (Optional) Other VDB values, such as a tag, can be added to the annotations section. See the Helm chart parameters page for a complete list.
    For example,

      annotations:
        ...
        tags: "exampletag1=examplevalue1, exampletag2=examplevalue2"
        ...
  3. Deploy the containerized database and request the Persistent Volume Claim (PVC).

kubectl apply -f <container-manifest>.yaml

The <container-manifest>.yaml file will enable the VDB and claim the persistent volume created in the prior step. Therefore, the created volume’s claimName should match the prior step’s name value, and the volume should be mounted to the chosen container. For example,

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: admin
  POSTGRES_PASSWORD: test123
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.5
          securityContext:
              runAsGroup: 26
              runAsUser: 26
              privileged: true
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postdb
      volumes:
        - name: postdb
          persistentVolumeClaim:
            claimName: test-vdb
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
   - port: 5432
  selector:
   app: postgres
  4. Once the containerized database is running, you may attach it to your desired application. That application can live in the same Kubernetes Node, a different Kubernetes Node, or a completely separate IaaS location.

Bookmark and snapshots

All Kubernetes volume snapshots are equivalent to Data Control Tower bookmarks. When you create or delete a volume snapshot, it creates or deletes an equivalent Data Control Tower bookmark. Follow the below steps to create and manage a snapshot. All configuration properties are available on the Helm chart parameters documentation page.

  1. Create a Volume Snapshot Class

    kubectl apply -f volumesnapshotclass.yaml

    Sample volumesnapshotclass.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: snapshot-storage-class
    driver: defs.csi.delphix.com
    deletionPolicy: Delete
    parameters:
      tagkey1: "examplevalue1"
      tagkey2: "examplevalue2"
Key-value pairs are added as DCT bookmark tags instead of the Kubernetes snapshot tags. Every DCT bookmark created from the VolumeSnapshotClass will have these tags. If different tags are required, you must create a new VolumeSnapshotClass.
  2. Create a Volume Snapshot (DCT bookmark)

    kubectl apply -f volumesnapshot.yaml

    Sample volumesnapshot.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: vdb-snapshot-example
    spec:
      volumeSnapshotClassName: snapshot-storage-class
      source:
        persistentVolumeClaimName: samplevdbname
Creating a volume snapshot is an asynchronous operation. Therefore, the volume snapshot (DCT bookmark) is only available for use once the value of its READYTOUSE field is set to true.
  3. Provision from a volume snapshot (DCT bookmark)

    kubectl apply -f pvc_usingsnapshot.yaml

    Sample pvc_usingsnapshot.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: samplevdbnameusingsnapshot
      annotations:
        sourceDBName: dsourcename
        envName: surrogate-host-env
        vdbGroupName: samplegroup
        vdbRepositoryName: Empty vFile Repository
        vdbStageMountpath: /mnt/provision/samplemount
    spec:
      storageClassName: de-fs-storage
      dataSource:
        name: vdb-snapshot-example
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOncePod
      resources:
        requests:
          storage: 5Gi
  4. Delete a volume snapshot (DCT bookmark)

     kubectl delete -f volumesnapshot.yaml

Lastly, if no volume snapshot or snapshot content objects remain in Kubernetes, the volume snapshot class can also be deleted:

kubectl delete -f volumesnapshotclass.yaml 
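As noted above, creating a volume snapshot is asynchronous. One way to confirm the bookmark is usable before provisioning from it is to check the READYTOUSE status; this sketch uses the vdb-snapshot-example name from the samples above, and the kubectl wait jsonpath condition requires a recent kubectl version:

```shell
# List the snapshot; the READYTOUSE column reports the asynchronous status
kubectl get volumesnapshot vdb-snapshot-example

# Or block until the snapshot (DCT bookmark) is ready for use
kubectl wait --for=jsonpath='{.status.readyToUse}'=true \
  volumesnapshot/vdb-snapshot-example --timeout=300s
```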

Volume cloning

The Kubernetes clone operation is equivalent to the provisioning of a child VDB. The Delphix K8s driver performs volume cloning by creating a new PVC from an existing PVC. Therefore, when the new PVC is claimed, a new VDB is also created which will be a child of the originally specified VDB. Follow the steps below to create and manage the volume clone. All configuration properties are available on the Helm chart parameters documentation page.

  1. Create a volume clone.

    kubectl apply -f vdbclone.yaml

    Sample vdbclone.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vdb-test-clone
      annotations:
        sourceDBName: aks-pg12
        envName: surrogate-host-env
        vdbGroupName: Untitled
        vdbRepositoryName: Empty vFile Repository
        vdbStageMountpath: /mnt/test-vdb
    spec:
      storageClassName: de-fs-storage
      accessModes:
        - ReadWriteOncePod
      resources:
        requests:
          storage: 5Gi
      dataSource:
        kind: PersistentVolumeClaim
        name: test-vdb

    Here test-vdb is a PVC that is already available and will be the clone's source volume.

  2. Use the cloned PVC in the container manifest. This step will enable the VDB and claim the persistent volume created in the prior step. For more information, refer to the Provision and enable section above.

    kubectl apply -f <container-manifest>.yaml

Disable and delete

  • Disable VDB and release PVC: The disable VDB operation is handled alongside the release PVC command. This process is triggered automatically when the PVC is not claimed by a pod or container after creation, or when a pod/container bound to the PVC is destroyed.

    kubectl delete pod <pod-name>
  • Delete VDB and PVC: The delete VDB operation is triggered when the PVC is deleted using the kubectl delete command.

    kubectl delete pvc <pvc-name>

Operators

The Delphix K8s driver is generalizable for any containerized database. Therefore, it is compatible with many database operators, such as the ones listed on OperatorHub.io. To implement, you typically will need to follow the Delphix K8s driver’s prerequisites as defined above, specify your newly created Delphix storage class in the database operator’s Helm chart, and then deploy as documented by the operator. 
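For example, many database operator Helm charts expose a storage class override for the volumes they request. The sketch below is illustrative only: the key names are assumptions that vary by operator (consult your operator chart's documentation), and de-fs-storage is the storage class name used in the examples above.

```yaml
# values.yaml (illustrative; key names depend on the chosen operator's chart)
storage:
  storageClassName: de-fs-storage   # Delphix storage class created at install time
  size: 5Gi
```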

Delphix has not certified any specific operator. You are responsible for the support and integration of your chosen database operator. Delphix services are available if you need assistance.

Features

The Delphix K8s Driver enables users to provision and manage the state of their virtual datasets similar to standard Delphix Continuous Data’s virtual databases. The following standard dataset operations are supported today:

| Dataset operation | Kubernetes equivalent action | Kubernetes command |
|-------------------|------------------------------|--------------------|
| VDB > Provision | Create Persistent Volume Claim (PVC) | kubectl apply -f <pvc-manifest>.yaml |
| VDB > Destroy | Delete Persistent Volume Claim (PVC) | kubectl delete -f <pvc-manifest>.yaml |
| VDB > Enable | Request Persistent Volume Claim (PVC) | kubectl apply -f <container-manifest>.yaml |
| VDB > Disable | Release Persistent Volume Claim (PVC) | kubectl delete -f <container-manifest>.yaml |
| Bookmark > Create | Create Volume Snapshot | kubectl apply -f <volume-snapshot>.yaml |
| Bookmark > Delete | Delete Volume Snapshot | kubectl delete -f <volume-snapshot>.yaml |

Other dataset operations, such as taking a VDB snapshot or refreshing a VDB, are not supported through the Delphix K8s Driver.

Support

Delphix K8s driver and plugin support is included within your standard Delphix license agreement. Supported driver versions follow the support policy as outlined in KBA1003. For any questions, bugs, or feature requests, contact us via Delphix Support or the Delphix Community Portal.

The Delphix K8s driver only provides the data, not the database image or binaries. Therefore, you must connect the storage claim with the chosen Docker container. Delphix support cannot provide detailed guidance on how to do so. Follow the example documentation for directions on configuring the solution directly within a Helm chart and through an operator.
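As an illustration of connecting the claim to a database container, many community Helm charts let you point the chart at an existing PVC rather than having it provision one. This is a hedged sketch, assuming a chart that exposes an existingClaim-style parameter (for example, the Bitnami PostgreSQL chart's primary.persistence.existingClaim; verify the exact parameter against your chart's documentation):

```yaml
# values.yaml (illustrative; parameter names vary by chart)
primary:
  persistence:
    existingClaim: test-vdb   # PVC provisioned from the Delphix storage class
```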