Containerized masking installation
This page describes how to use container images to deploy a containerized version of the Continuous Compliance Engine on Kubernetes. Since Continuous Compliance is a tool used for data masking, the term “masking” should be taken as a reference to the primary function of a Continuous Compliance Engine.
With a few small exceptions, containerized masking provides the same functionality and user experience as one would expect with a Continuous Compliance Engine deployed in the typical fashion, on a VM.
Containerized masking is designed to run on any Certified Kubernetes platform; the list of certified platforms can be found at https://www.cncf.io/certification/software-conformance. Containerized masking is also OCI-compliant and may use any container runtime within a Certified Kubernetes platform that implements the OCI Runtime Specification, including CRI-O, Docker, and Podman.
Delphix regularly tests against a range of popular Kubernetes platforms with the goal of covering a representative sample of implementations. The following Kubernetes platforms have been explicitly tested by Delphix and are recommended for use:
- Microk8s
- AWS EKS
Obtaining the images
Containerized masking utilizes three integrated containers to deliver, in essence, the same masking experience as provided by a Continuous Compliance Engine deployed on a VM. The containerized form allows for rapid spin-up and tear-down of ephemeral engines to handle automated workflow deployments. These three containers are delivered in a compressed archive (`.tar.gz`) for convenience.
Licensed versions of these bundles are available for download from the download.delphix.com site. The folder for each version contains two files. One file is HTML instructions similar to this page that can be downloaded for an offline copy of the installation instructions. The second file is the `masking_docker_images.tar.gz` bundle with the container images.
Docker is employed to build the container images, producing a set of Open Container Initiative (OCI) images for each container. The intention is to make the containers as vendor-independent as possible.
Setup
Containerized masking is intended to run as a pod on Kubernetes with three containers:
- `delphix-masking-app` – Serves the application UI and API, and executes masking jobs.
- `delphix-masking-database` – Stores various application configurations.
- `delphix-masking-proxy` – Serves as a reverse proxy, handling HTTP and HTTPS traffic for the UI and API.
The UI and API are served from internal ports 8080 and 8443. When deploying the application, the Kubernetes config must provide a Service that directs external HTTP traffic to port 8080 and HTTPS traffic to port 8443, as shown in the example `kubernetes-config.yaml` file.
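As a sketch of that routing requirement, a minimal NodePort Service might look like the following (the service name, selector label, and node ports here are illustrative assumptions; adjust them to your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: delphix-masking        # illustrative name
spec:
  type: NodePort               # assumption: NodePort exposure; a LoadBalancer works too
  selector:
    app: masking               # must match the pod template labels
  ports:
    - name: http
      port: 8080               # external HTTP traffic -> proxy container port 8080
      nodePort: 30080
    - name: https
      port: 8443               # external HTTPS traffic -> proxy container port 8443
      nodePort: 30443
```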
The pod also requires a single volume per instance. This storage should be attached to both the app container and the database container.
- This volume should be attached to the `delphix-masking-database` container at location `/var/delphix/postgresql` with a subpath of `postgresql`.
- This volume should be attached to the `delphix-masking-app` container twice: once at location `/var/delphix/masking/` with a subpath of `masking`, and once at location `/var/delphix/postgresql` with a subpath of `postgresql`.
This volume should have at least 2GB of space for each container, though certain configurations may require significantly more space.
This storage volume should be created as a persistent volume. If it is not, masking job configurations will have to be recreated each time the pod is restarted. Also, certain diagnostic information captured in the logs will be lost when the pod is restarted unless the volume is persistent.
Because this volume is persistent, the pod should be deployed as a StatefulSet.
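The volume wiring described above can be sketched as a StatefulSet fragment like the following (a hedged sketch, not a complete manifest; the claim name, container names, and storage size are illustrative assumptions):

```yaml
# Fragment of a StatefulSet spec showing the shared persistent volume
template:
  spec:
    containers:
      - name: mds              # delphix-masking-database container
        volumeMounts:
          - name: masking-persistent-storage
            mountPath: /var/delphix/postgresql
            subPath: postgresql
      - name: app              # delphix-masking-app container, mounted twice
        volumeMounts:
          - name: masking-persistent-storage
            mountPath: /var/delphix/masking
            subPath: masking
          - name: masking-persistent-storage
            mountPath: /var/delphix/postgresql
            subPath: postgresql
volumeClaimTemplates:
  - metadata:
      name: masking-persistent-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi         # at least 2GB per container; size to your workload
```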
Network management
The proxy container has built-in configurations to act as a reverse proxy. It is recommended that the main `nginx.conf` file remain unmodified; instead, modify the individual component configuration files that are incorporated into the main `nginx.conf` file through include statements (such as `proxy.conf` for the reverse proxy-related configs and `ssl.conf` for HTTPS-related configs).
To modify any nginx-related files, such as config files or certificates and keys, an external volume should be bind mounted to the proxy container at `/etc/config`. During container startup, if the proxy container detects bind-mounted files at the locations listed below, it will ignore the config files that are built into the proxy container's image and will instead use the mounted files.
HTTPS certificates
If the proxy container does not detect an external certificate in the expected location, it will generate and use a self-signed certificate.
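To supply your own certificate instead, one can generate a key pair and place it at the expected location. As a sketch (the file names and CN below are placeholder assumptions; in production you would typically use a CA-issued certificate):

```shell
# Generate a certificate/private key pair to be mounted into the proxy
# container (file names are illustrative; match them to the expected
# locations for your version).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ssl_private_key.pem \
  -out ssl_certificate.pem \
  -days 365 \
  -subj "/CN=masking.example.com"

# Optionally generate a DH parameters file as well (can take a while):
# openssl dhparam -out dhparams.pem 2048
```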
The expected locations of each file are shown below:
| File | Description |
| --- | --- |
| `nginx.conf` | main configs file |
| `proxy.conf` | reverse proxy configs |
| `ssl.conf` | ssl configs |
|  | ssl certificate |
|  | ssl private key |
|  | DH parameters file |
OWASP CSRFGuard
The OWASP CSRFGuard product is employed as part of the protections built into the masking product. The supplied nginx proxy container rewrites a packet's Host header with the contents of the X-Forwarded-Host header, if it exists, so that CSRFGuard will accept proxied packets.
This results in a requirement: if the pod is placed behind a proxy device that rewrites the Host header, that proxy must add an X-Forwarded-Host header containing the original host value.
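For example, with the ingress-nginx controller in front of the masking Service, the header could be added via a configuration snippet. This Ingress is a hypothetical sketch: the host name, class name, and annotation usage are assumptions to adapt to your environment, and many proxies (ingress-nginx included) already forward the original host by default:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: masking-ingress                    # illustrative name
  annotations:
    # Ensure the original host value reaches the pod's proxy container
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Host $host;
spec:
  ingressClassName: nginx
  rules:
    - host: masking.example.com            # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: delphix-masking      # the Service fronting the pod
                port:
                  number: 8080
```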
Sample configuration
The following configuration file shows an example of how Containerized masking might be deployed. Details will vary based on the use case, environment, and product version.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 500Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-storage2
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    server: {}
    path: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc2
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: nfs-storage2
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Service
metadata:
  name: delphix-masking
spec:
  type: NodePort
  selector:
    app: masking
  ports:
    - name: http
      port: 8080
      nodePort: 30080
    - name: https
      port: 8443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: delphix-masking-debugging
spec:
  type: NodePort
  selector:
    app: masking
  ports:
    - port: 15213
      nodePort: 32213
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: delphix-masking
spec:
  selector:
    matchLabels:
      app: masking
  serviceName: delphix-masking
  template:
    metadata:
      labels:
        app: masking
    spec:
      securityContext:
        runAsUser: 65436
        runAsGroup: 50
        fsGroup: 50
        #
        # This is false so that we can run the initContainer as root, for
        # reasons explained below, but customers should be able to set this to
        # true in production.
        #
        runAsNonRoot: false
      initContainers:
        #
        # Ideally, we would rely on fsGroup to set ownership/permissions in the
        # mounted volumes such that they can be accessed by the containers.
        # However, some volume provisioners don't support fsGroup, including the
        # hostPath provisioner (https://github.com/kubernetes/minikube/issues/1990),
        # which is the default provisioner for Minikube and Microk8s, which we
        # use for development and testing. So that we can continue to use these
        # flavors of k8s, we run an initContainer that sets the ownership and
        # permissions to be the same as if the volume were created with a
        # provisioner that honored the fsGroup setting.
        #
        - image: busybox
          name: initialize-volumes
          securityContext:
            runAsUser: 0  # Volumes are owned by root:root
          volumeMounts:
            - name: masking-persistent-storage
              mountPath: /var/delphix/postgresql
              subPath: postgresql
            - name: masking-persistent-storage
              mountPath: /var/delphix/masking
              subPath: masking
          command:
            - "/bin/sh"
            - "-c"
            - "chmod 2775 /var/delphix/masking && chgrp 50 /var/delphix/masking && chmod 2775 /var/delphix/postgresql && chgrp 50 /var/delphix/postgresql"
      volumes:
        - name: nfs-pv-storage2
          persistentVolumeClaim:
            claimName: nfs-pvc2
      containers:
        - image: delphix-masking-database:{}
          imagePullPolicy: Never  # This image has to be built locally
          name: mds
          ports:
            - containerPort: 5432
              name: mds
          volumeMounts:
            - name: masking-persistent-storage
              mountPath: /var/delphix/postgresql
              subPath: postgresql
        - image: delphix-masking-app:{}
          imagePullPolicy: Never  # This image has to be built locally
          name: app
          ports:
            - containerPort: 8284
              name: http
          volumeMounts:
            - name: masking-persistent-storage
              mountPath: /var/delphix/masking
              subPath: masking
            - name: masking-persistent-storage
              mountPath: /var/delphix/postgresql
              subPath: postgresql
            - name: nfs-pv-storage2
              mountPath: /var/delphix/masking/remote-mounts/nfs_2
          env:
            - name: MASK_DEBUG
              value: "true"
        - name: proxy
          image: delphix-masking-proxy:{}
          imagePullPolicy: Never  # This image has to be built locally
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 8443
              name: https
  volumeClaimTemplates:
    - metadata:
        name: masking-persistent-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 4Gi
```
Deployment
Load the container images obtained from the download site into a container registry available to your Kubernetes cluster, then deploy the masking pod using a config file similar to the example provided above.
```
kubectl apply -f <path-to-config-file>
```
Debugging
In a support case, a Delphix Support engineer may ask for a support bundle containing diagnostic information. The preferred method of generating a support bundle is to use the API endpoints as shown in the API Call for Generating a Support Bundle document. The API Client Documentation may assist with more information on various uses of the API Client.
Generating and retrieving a support bundle from CLI
If the API endpoints are not functioning properly or there are difficulties accessing them, a support bundle can be gathered by running the following commands from the Kubernetes layer of the node hosting the pod. Kubernetes admin permissions are required to perform these actions.

```
$ kubectl exec -it <pod name> -c app -- /bin/bash /opt/delphix/masking/bin/generate_container_support_bundle.sh
```

The exact name of the tarball file created by this command can then be found using `kubectl exec`. For example:

```
$ kubectl exec delphix-masking-0 -c app -- find /var/delphix/masking/ -name 'dlpx-support-*'
/var/delphix/masking/dlpx-support-4b3e2af2-1d00-43f5-b45b-c84dba62648a-20211201-18-21-53.tar.gz
```
The tarball can then be copied out of the pod using `kubectl cp`. For example:

```
$ kubectl cp delphix-masking-0:/var/delphix/masking/dlpx-support-4b3e2af2-1d00-43f5-b45b-c84dba62648a-20211201-18-21-53.tar.gz -c app dlpx-support-4b3e2af2-1d00-43f5-b45b-c84dba62648a-20211201-18-21-53.tar.gz
```
The tarball can then be provided to the Delphix Support engineer by uploading it to upload.delphix.com and adding the associated case number in the matching field.