Kubernetes

The Delphix Kubernetes (K8s) Driver enables Delphix administrators or application teams to provision and manage virtual datasets for containerized applications orchestrated in Kubernetes clusters. The virtual datasets provided by Delphix persist even though Kubernetes infrastructure and applications are ephemeral. This allows you to remain in complete control of your testable data without worrying about rising infrastructure costs or mismanaged data.

This solution mounts a pre-populated filesystem into a user-specified container pod. Internally, the solution leverages the standard Kubernetes concepts of Persistent Volume (PV) and Persistent Volume Claim (PVC). The Delphix K8s driver, which follows Container Storage Interface (CSI) driver standards, is generalized and is therefore capable of supporting any database product that can be launched from a container and expects a single filesystem for its persistent storage tier. Delphix’s virtualization capabilities are integrated tightly into common Kubernetes commands.
[Figure: Kubernetes architecture]

You can expect the following benefits after implementing this solution:

  • Reduced infrastructure costs: The Delphix K8s Driver eliminates the need for target environments, potentially saving infrastructure costs. In addition, integration with Kubernetes ensures the necessary infrastructure is always “right-sized” without any wasted resources, regardless of the provider.

  • Improved agility: The driver is fully compatible with standard Kubernetes toolsets, such as Helm and kubectl. This means application teams and administrators can easily extend their existing processes and automation. You can quickly gain access to ephemeral, production-like datasets without context switching, manual intervention, and significant overhead.

  • Centralized governance model: All datasets, containerized or not, are managed by the Delphix DevOps Data Platform. IT Administrators do not need to manage multiple toolsets that interface with different cloud and legacy platforms. Dataset access, authorization, and operations are managed within a single platform.

Getting started

Perform the following steps to implement the Delphix K8s Driver.

System requirements

The following components are required to configure the Delphix K8s Driver:

  • Data Control Tower v12.0+

  • Delphix Continuous Data v17.0+

  • Delphix Kubernetes (K8s) driver installed in the K8s cluster

  • Compatible Continuous Data Connector:

    • Oracle – Native engine install

    • PostgreSQL – v4.3.1+

    • K8s vFiles – v1.0+ with the Continuous Data Kubernetes (K8s) plugin

  • Surrogate Host Environment

  • Helm v3.10+

  • Container orchestration platform

Compatibility

The Kubernetes solution is composed of a K8s Driver and a compatible data connector. These two components can be managed separately. Use the table below to ensure compatible versions are installed.

Release   K8s Driver   K8s vFiles Connector*   Oracle Connector (Data Engine Version)   PostgreSQL Connector
v1.4.0    1.4.0        1.0.0+                  v2025.2                                  4.3.1+
v1.3.0    1.3.0        1.0.0                   N/A                                      4.3.1+
v1.2.0    1.2.0        1.0.0                   N/A                                      4.3.1+
v1.1.0    1.1.0        1.0.0                   N/A                                      N/A
v1.0.0    1.0.0        1.0.0                   N/A                                      N/A

* The K8s vFiles Connector operates similarly to the native Unstructured Files (vFiles) connector. However, this separate connector was required due to various K8s requirements.

Prerequisites

[Figure: Kubernetes architecture]

  1. Install Data Control Tower (DCT) and Delphix Continuous Data. Ensure DCT is connected to Delphix Continuous Data and syncing properly. For more information, refer to Data Control Tower and Delphix Continuous Data Engine’s documentation.

  2. Install the Kubernetes Driver plugin into Delphix Continuous Data. For more information, refer to the Installation section.

  3. Ingest the source data into Delphix Continuous Data. For more information, refer to the Ingestion architecture section.

  4. Install the Kubernetes driver and create the storage class. For more information, refer to the Installation section.

  • The application deployment environment must have access to the five underlying images of the Delphix K8s Driver (one image provided by Delphix and four sidecar images provided by the Kubernetes project). These images are made available on internet-hosted repositories.

  • The storage class only has to be created once per Kubernetes cluster and/or application. It can be reused in subsequent deployments if the same configuration details apply; a minimal sketch follows this list.
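
For orientation, the following is a minimal sketch of what such a storage class might look like. The provisioner name defs.csi.delphix.com is taken from the VolumeSnapshotClass example later on this page; driver-specific parameters are intentionally omitted, since the actual keys are established during Helm installation (see the Helm chart parameters page).

# Minimal StorageClass sketch; driver parameters omitted (see Helm chart parameters).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: de-fs-storage
provisioner: defs.csi.delphix.com
reclaimPolicy: Delete
volumeBindingMode: Immediate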

Once the prerequisites have been completed, perform the below steps via kubectl. The first set of steps demonstrates how to provision the virtual database (request the PVC) and then enable it (claim the PVC), while the second demonstrates how to shut it down.

Kubectl

You can also provision from a bookmark by referencing a volume snapshot in the PVC’s dataSource field. See the Bookmark and snapshots section for further details.

Provision and enable (PostgreSQL)

  1. Create a Persistent Volume Claim (PVC).

    kubectl apply -f <pvc-manifest>.yaml

    Using the storage class created during installation, the <pvc-manifest>.yaml file specifies the new VDB to be provisioned. In addition to storageClassName, you will need to include the metadata.annotations from the General annotations in the PVC manifest section. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pg
      annotations:
        sourceDBName: aks-pg12
        envName: surrogate-host-env
        vdbGroupName: Untitled
        vdbRepositoryName: Empty vFile Repository
        vdbStageMountpath: /mnt/test-vdb
        engineName: myengine.delphix.com
    spec:
      storageClassName: de-fs-storage
      accessModes:
        - ReadWriteOncePod
      resources:
        requests:
          storage: 5Gi

  2. Deploy the containerized database and request the Persistent Volume Claim (PVC).

kubectl apply -f <container-manifest>.yaml

The <container-manifest>.yaml file will enable the VDB and claim the persistent volume created in the prior step. Therefore, the created volume’s claimName should match the prior step’s name value, and the volume should be mounted to the chosen container.

For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGDATA: /mnt/mount/data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.7
          securityContext:
            runAsGroup: 999
            runAsUser: 999
            privileged: true
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5433
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /mnt/mount
              name: postdb
      volumes:
        - name: postdb
          persistentVolumeClaim:
            claimName: test-pg
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5433
  selector:
    app: postgres
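
Before connecting your application, you can sanity-check the deployment. A minimal check, assuming the manifest above (the database name, user, and label values come from the postgres-config ConfigMap and Deployment):

# Confirm the pod is running and the PVC from step 1 is bound.
kubectl get pods -l app=postgres
kubectl get pvc test-pg

# Open a psql session inside the container.
kubectl exec -it deploy/postgres -- psql -U postgres -d postgresdb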

Provision and enable (Oracle)

Requirements: A surrogate host (preferably RHEL) with a running Oracle 19c installation.

  1. Create a Persistent Volume Claim (PVC).

kubectl apply -f <pvc-manifest>.yaml

The <pvc-manifest>.yaml file specifies the new VDB to provision based on the storage class created during the installation step. Therefore, the necessary properties listed in the General annotations in the PVC manifest and Oracle annotations in PVC manifest sections must be added to the manifest, along with a custom storageClassName.

For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orapvc
  annotations:
    sourceDBName: aks-ora19
    envName: surrogate-host-env
    vdbGroupName: Untitled
    engineName: myengine.delphix.com
    ownershipSpec: "54321:54321"
    # Oracle parameters
    surrEnvUserName: oracle
    oracleSID: mysid
    oracleServiceName: "oracle-database"
    vdbConfigTemplate: "ora-test-template"
spec:
  storageClassName: de-fs-storage
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 10Gi

Optional: Other VDB values, such as a tag, can be added to the annotations section. See the Helm chart parameters page for a complete list.

For example:

annotations:
    ...
    tags: "exampletag1=examplevalue1, exampletag2=examplevalue2"
    ...
  2. Deploy the containerized database and request the Persistent Volume Claim (PVC).

kubectl apply -f <container-manifest>.yaml

The <container-manifest.yaml> file will enable the VDB and claim the persistent volume created in the prior step. Therefore, the created volume’s claimName should match the prior step’s name value, and the volume should be mounted to the chosen container.

The following requirements exist for the Oracle container manifest:

  • For Oracle Docker images pulled from the Oracle Container Registry, you will need to provide RDBMS credentials, which are used internally by the image. These credentials are added as a Kubernetes secret, configured once, and used within the manifest file as oracle-rdbms-credentials.

    Use this command to create the Kubernetes secret:

    kubectl create secret generic oracle-rdbms-credentials --from-literal=ORACLE_PWD=<db_password>
  • ORACLE_SID in the container manifest should match the oracleSID in the PVC manifest.

  • The values for runAsUser and fsGroup under spec.securityContext in the manifest YAML should match the ownershipSpec in the PVC manifest. This is used to mount the files using the default user ID for the Oracle container.

  • Set the mountPath under spec.containers.volumeMounts and the POD_MOUNT_PATH environment variable under spec.containers.env to /mnt/provision/<oracle_sid>_<namespace>. The mountPath is created by the Oracle connector and is hardcoded in the .ini files.

    Note: If the namespace contains hyphens (-), replace them with underscores (_). For instance, hubs-driver becomes hubs_driver.

    For example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oracle-rdbms-config
      labels:
        app: oracle-rdbms
    data:
      ORACLE_CHARACTERSET: "AL32UTF8"
      ORACLE_EDITION: "enterprise"
      ORACLE_SID: "mysid"
      CONTAINER_DB: "false"
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oracle-rdbms
      labels:
        app: oracle-rdbms
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: oracle-rdbms
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: oracle-rdbms
        spec:
          securityContext:
            runAsUser: 54321
            fsGroup: 54321
          containers:
            - name: oracle-container
              image: container-registry-mumbai.oracle.com/database/enterprise:19.3.0.0
              command:
                - "/bin/sh"
                - "-c"
                - "export MOUNT_PATH=${POD_MOUNT_PATH}; export DATAFILE_DESTINATION=$MOUNT_PATH; $MOUNT_PATH/script/${ORACLE_SID}/start-k8s-vdb.sh"
              livenessProbe:
                exec:
                  command:
                    - /bin/sh
                    - -c
                    - "MOUNT_PATH=${POD_MOUNT_PATH}; DATAFILE_DESTINATION=$MOUNT_PATH; $MOUNT_PATH/script/${ORACLE_SID}/check-k8s-vdb-status.sh"
                initialDelaySeconds: 120
                periodSeconds: 60
                timeoutSeconds: 10
                failureThreshold: 3
                successThreshold: 1
              readinessProbe:
                exec:
                  command:
                    - /bin/sh
                    - -c
                    - "MOUNT_PATH=${POD_MOUNT_PATH}; DATAFILE_DESTINATION=$MOUNT_PATH; $MOUNT_PATH/script/${ORACLE_SID}/check-k8s-vdb-status.sh"
                initialDelaySeconds: 120
                periodSeconds: 60
                timeoutSeconds: 10
                failureThreshold: 3
                successThreshold: 1
              env:
                - name: POD_MOUNT_PATH
                  value: "/mnt/provision/mysid_hubs_driver"
              envFrom:
                - configMapRef:
                    name: oracle-rdbms-config
                - secretRef:
                    name: oracle-rdbms-credentials
              ports:
                - containerPort: 1521
                  name: oracle-listener
              volumeMounts:
                - name: oradata
                  mountPath: "/mnt/provision/mysid_hubs_driver"
          imagePullSecrets:
            - name: ora-creds
          volumes:
            - name: oradata
              persistentVolumeClaim:
                claimName: orapvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: oracle-database
      labels:
        app: oracle-rdbms
    spec:
      type: NodePort
      ports:
        - name: listener
          port: 1521
      selector:
        app: oracle-rdbms

  3. Once the containerized database is running, you can attach it to your desired application. That application can live in the same Kubernetes node, a different Kubernetes node, or a separate IaaS location. A basic connectivity check is sketched below.
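
For example, a hypothetical connectivity check from outside the cluster, assuming the oracle-database NodePort service from the manifest above (the exact connect string depends on your image and listener configuration):

# Look up the NodePort assigned to the oracle-database service.
kubectl get service oracle-database -o jsonpath='{.spec.ports[0].nodePort}'

# Connect with SQL*Plus through a cluster node using that port.
sqlplus system/<db_password>@//<node-ip>:<node-port>/oracle-database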

Bookmark and snapshots

All Kubernetes volume snapshots are equivalent to Data Control Tower bookmarks. When you create or delete a volume snapshot, an equivalent Data Control Tower bookmark is created or deleted. Follow the steps below to create and manage a snapshot. All configuration properties are available on the Helm chart parameters documentation page.

  1. Create a Volume Snapshot Class

    kubectl apply -f volumesnapshotclass.yaml

    Sample volumesnapshotclass.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: snapshot-storage-class
    driver: defs.csi.delphix.com
    deletionPolicy: Delete
    parameters:
      tagkey1: "examplevalue1"
      tagkey2: "examplevalue2"

Key-value pairs are added as DCT bookmark tags instead of the Kubernetes snapshot tags. Every DCT bookmark created from the VolumeSnapshotClass will have these tags. If different tags are required, you must create a new VolumeSnapshotClass.
  2. Create a Volume Snapshot (DCT Bookmark)

    kubectl apply -f volumesnapshot.yaml

    Sample volumesnapshot.yaml

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: vdb-snapshot-example
    spec:
      volumeSnapshotClassName: snapshot-storage-class
      source:
        persistentVolumeClaimName: samplevdbname
Creating a volume snapshot is an asynchronous operation. Therefore, the volume snapshot (DCT bookmark) is only available for use once the READYTOUSE field is set to true.
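
For example, you can poll the status field that backs the READYTOUSE column:

    kubectl get volumesnapshot vdb-snapshot-example -o jsonpath='{.status.readyToUse}'
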
  3. Provision from a volume snapshot (DCT bookmark)

    kubectl apply -f pvc_usingsnapshot.yaml

    Sample pvc_usingsnapshot.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: samplevdbnameusingsnapshot
      annotations:
        sourceDBName: dsourcename
        envName: surrogate-host-env
        vdbGroupName: samplegroup
        vdbRepositoryName: Empty vFile Repository
        vdbStageMountpath: /mnt/provision/samplemount
    spec:
      storageClassName: de-fs-storage
      dataSource:
        name: vdb-snapshot-example
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOncePod
      resources:
        requests:
          storage: 5Gi
  4. Delete a volume snapshot (DCT bookmark)

     kubectl delete -f volumesnapshot.yaml

Lastly, if no volume snapshot or volume snapshot content objects remain in Kubernetes, the volume snapshot class can also be deleted:

kubectl delete -f volumesnapshotclass.yaml 
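
Before doing so, you can confirm that nothing remains by listing the snapshot objects (these resource types are available once the Kubernetes snapshot CRDs are installed):

kubectl get volumesnapshot --all-namespaces
kubectl get volumesnapshotcontent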

Volume cloning

The Kubernetes clone operation is equivalent to the provisioning of a child VDB. The Delphix K8s driver performs volume cloning by creating a new PVC from an existing PVC. Therefore, when the new PVC is claimed, a new VDB is also created which will be a child of the originally specified VDB. Follow the steps below to create and manage the volume clone. All configuration properties are available on the Helm chart parameters documentation page.

  1. Create a volume clone.

    kubectl apply -f vdbclone.yaml

    Sample vdbclone.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vdb-test-clone
      annotations:
        sourceDBName: aks-pg12
        envName: surrogate-host-env
        vdbGroupName: Untitled
        vdbRepositoryName: Empty vFile Repository
        vdbStageMountpath: /mnt/test-vdb
    spec:
      storageClassName: de-fs-storage
      accessModes:
        - ReadWriteOncePod
      resources:
        requests:
          storage: 5Gi
      dataSource:
        kind: PersistentVolumeClaim
        name: test-vdb

    Here test-vdb is a PVC that is already available and will be the clone's source volume.

  2. Use the cloned PVC in the container manifest. This step will enable the VDB and claim the persistent volume created in the prior step. For more information, refer to the Provision and enable sections above.

    kubectl apply -f <container-manifest>.yaml

Disable and delete

  • Disable VDB and release PVC: The disable VDB operation is handled alongside the release PVC command. This process is automatically triggered when the PVC is not claimed by a pod or container after creation, or after a pod/container that is bound to this PVC is destroyed.

    kubectl delete pod <pod-name>
  • Delete VDB and PVC: The delete VDB operation is triggered when the PVC is deleted using the kubectl delete command.

    kubectl delete pvc <pvc-name>
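
You can confirm either result with standard kubectl queries, for example:

    # The PVC remains after its pod is deleted (VDB disabled)...
    kubectl get pvc <pvc-name>
    # ...and is no longer listed once explicitly deleted (VDB deleted).
    kubectl get pvc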

Refresh the VDB

The Refresh/Rewind operation for a Kubernetes VDB combines kubectl commands with manual operations on the VDB resource in Data Control Tower.

  1. Release Persistent Volume Claim (PVC): On the Kubernetes node, run the following command:

    kubectl delete -f <container-manifest>.yaml
  2. Go to the DCT UI and manually Refresh/Rewind the VDB.

  3. Request Persistent Volume Claim (PVC) or VDB > Enable: Once the VDB Refresh/Rewind operation completes, go back to the Kubernetes node and run the following command:

    kubectl apply -f <container-manifest>.yaml

Note: Alternatively, step 1 and step 3 can be achieved by running the Enable and Disable operations directly on the VDB via the UI or API.

Operators

The Delphix K8s driver is generalizable for any containerized database. Therefore, it is compatible with many database operators, such as the ones listed on OperatorHub.io. To implement, you typically will need to follow the Delphix K8s driver’s prerequisites as defined above, specify your newly created Delphix storage class in the database operator’s Helm chart, and then deploy as documented by the operator. 
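
For instance, a hypothetical deployment might look like the following. Both the chart reference and the value key that selects a storage class (persistence.storageClass here) are placeholders that vary by operator, so consult your operator's documentation for the real names.

# Hypothetical example: point a database operator's chart at the Delphix
# storage class created during driver installation.
helm install my-database example-repo/database-operator \
  --set persistence.storageClass=de-fs-storage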

Delphix has not certified any specific operator. You are responsible for the support and integration of your chosen database operator. Delphix services are available if you need assistance.

Features

The Delphix K8s Driver enables users to provision and manage the state of their virtual datasets similarly to standard Delphix virtual databases. The following standard dataset operations are supported today:

Dataset operations   Kubernetes equivalent action             Kubernetes commands
VDB > Provision      Create Persistent Volume Claim (PVC)     kubectl apply -f <pvc-manifest>.yaml
VDB > Destroy        Delete Persistent Volume Claim (PVC)     kubectl delete -f <pvc-manifest>.yaml
VDB > Enable         Request Persistent Volume Claim (PVC)    kubectl apply -f <container-manifest>.yaml
VDB > Disable        Release Persistent Volume Claim (PVC)    kubectl delete -f <container-manifest>.yaml
VDB > Refresh        Release PVC, Refresh VDB, Request PVC    See VDB > Disable; run Refresh via UI/API; see VDB > Enable
Bookmark > Create    Create Volume Snapshot                   kubectl apply -f <volume-snapshot>.yaml
Bookmark > Delete    Delete Volume Snapshot                   kubectl delete -f <volume-snapshot>.yaml

Dataset operations not listed above are not supported through the Delphix K8s Driver.

Support

Delphix K8s driver and plugin support is included within your standard Delphix license agreement. Supported driver versions follow the support policy as outlined in KBA1003. For any questions, bugs, or feature requests, contact us via Delphix Support or the Delphix Community Portal.

The Delphix K8s driver only provides the data, not the database image or binaries. Therefore, you must connect the storage claim with the chosen Docker container. Delphix support cannot provide detailed guidance on how to do so. Follow the example documentation for directions on configuring the solution directly within a Helm chart and through an operator.