Upgrading the Hyperscale Compliance Orchestrator (Podman Compose)

 

Beginning with version 2025.8.0, the Hyperscale database has migrated from SQLite to PostgreSQL, and all existing customer data is migrated as part of this change.

If you are upgrading to this version, monitor the Migration Status API to ensure the migration completes successfully. If the API reports an error, generate a support bundle and submit it to Perforce Customer Support for assistance.

 

Prerequisites

This release does not support using an external PostgreSQL instance as the Hyperscale database.

Before upgrading, ensure you have downloaded the Hyperscale Compliance x.0.0 tar bundle (where x.0.0 is the version of Hyperscale being installed) from the Delphix Download website.

How to upgrade the Hyperscale Compliance Orchestrator

Perform the following steps to upgrade the Hyperscale Compliance Orchestrator to the x.0.0 version:

  1. Run cd /<hyperscale_installation_path>/ and podman-compose down to stop and remove all the running containers.
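
    For example:

    cd /<hyperscale_installation_path>/
    podman-compose down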

  2. Run the below commands to delete all existing dangling images and Hyperscale images:

    podman rmi $(podman images -f "dangling=true" -q)
    podman rmi $(podman images "delphix-hyperscale-masking-proxy" -q)
    podman rmi $(podman images "delphix-controller-service-app" -q)
    podman rmi $(podman images "delphix-masking-service-app" -q)
    podman rmi $(podman images "delphix-*load-service-app" -q)
  3. Remove all files and folders from the existing installation directory, except the podman-compose.yaml and .env files. Keep a backup of these two files outside the installation directory.
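
    A minimal sketch of this step, assuming /tmp/hyperscale-backup as the backup location (any directory outside the installation path works) and that you are inside the installation directory:

    # back up the two files to keep outside the installation directory
    mkdir -p /tmp/hyperscale-backup
    cp podman-compose.yaml .env /tmp/hyperscale-backup/
    # remove everything else from the installation directory (double-check your current directory first)
    find . -mindepth 1 ! -name 'podman-compose.yaml' ! -name '.env' -delete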

  4. Extract the new tar bundle into your existing installation path. Replace x.0.0 with the version number of the Hyperscale release you are installing:

    tar -xzvf delphix-hyperscale-masking-x.0.0.tar.gz -C <existing_installation_path>

  5. Copy over your customizations (essentially the volume bindings and properties for each service) from the backed-up podman-compose.yaml file to the new podman-compose.yaml file supplied in the bundle.
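
    For example, if your old podman-compose.yaml mapped a staging area into the services, carry that volume binding over to the corresponding services in the new file (the path below is illustrative; use your own mount point):

    volumes:
        - /mnt/provision/staging_area:/etc/hyperscale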

  6. Similarly, either replace the new .env file with the .env file backed up in step 3 and set the VERSION property to x.0.0 (i.e., VERSION=x.0.0), or use the new .env file from the installation bundle and set its properties to match those in your old .env file.
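
    For example, a minimal view of the .env after this step, assuming you carried over the backed-up file and only updated the version:

    # all other properties remain as set in the old .env
    VERSION=x.0.0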

  7. Run the below commands to load the images (this will configure the Oracle-based unload/load setup):

    podman load --input controller-service.tar
    podman load --input unload-service.tar
    podman load --input masking-service.tar
    podman load --input load-service.tar
    podman load --input proxy.tar
    podman load --input database-service.tar
    • If upgrading from an MSSQL connector setup (supported starting with the 5.0.0.0 release), instead of running the above commands for the load/unload services setup (which are for Oracle), run the below commands (the rest remains the same for the controller, masking, and proxy services):

      podman load --input mssql-unload-service.tar 
      podman load --input mssql-load-service.tar
    • If upgrading from a Delimited/Parquet Files connector setup (supported starting with the 12.0.0 release), instead of running the above commands for the load/unload services setup (which are for Oracle), run the below commands (the rest remains the same for the controller, masking, and proxy services) after updating the new image names in podman-compose.yaml:

      podman load --input file-connector-unload-service.tar
      podman load --input file-connector-load-service.tar
    • If upgrading from a MongoDB connector setup (supported starting with the 13.0.0 release), instead of running the above commands for the load/unload services setup (which are for Oracle), run the below commands (the rest remains the same for the controller, masking, and proxy services):

      podman load --input mongo-unload-service.tar
      podman load --input mongo-load-service.tar
  8. Make sure the below ports are configured under the proxy service:

    ports:
        - "443:8443"
        - "80:8080"
  9. Ensure your mounts are configured and accessible before running a job.
    If upgrading to version 24.0 (or later), ensure that the location mounted on the Hyperscale host is the same as the one mapped to /etc/hyperscale in your podman-compose.yaml. If a previous mount exists at another location, unmount it and re-mount it to the correct directory. For example, if a mount point named staging_area exists at path /mnt/provision/staging_area, execute the following commands and restart the containers.

    1. If an NFS file server is used as the staging server, execute these commands:

      sudo umount /mnt/provision/staging_area
      sudo mount -t nfs4 <source_nfs_endpoint>:<source_nfs_location> /mnt/provision
    2. If the NFS Server installation is a Delphix Continuous Data Engine empty VDB, you can either:

      1. Append the staging_area path to the volume binding for all the services in podman-compose.yaml. For example:

        volumes:
              - /mnt/provision/staging_area:/etc/hyperscale
      2. Alternatively, you can update the mount path of the Environment on the Continuous Data Engine:

        1. Disable the empty VDB (Data Set).

        2. Update the path on the Environment → Databases page. For example, change the path from /mnt/provision/staging_area to /mnt/provision/.

        3. Enable the empty VDB (Data Set).

        4. Restart the Hyperscale containers.

After re-mounting, recheck the permissions of the staging area on the Hyperscale host. Refer to instruction number 3 on the Installation and Setup page for the required staging area permissions.
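
One quick way to recheck the ownership and permissions (the path below is the example staging area used above; substitute your own mount point):

    ls -ld /mnt/provision/staging_area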

Upon application startup, all existing mount-filesystems will be deleted. Please ensure you back up the mount setup details, if needed.
If using file connectors, any unload or load jobs in a running state at the time of a container restart are marked as failed.
  10. Run podman-compose up -d to create the containers.
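
    For example, to bring the containers up and optionally confirm that they are running (podman ps is a standard Podman command, not specific to Hyperscale):

    podman-compose up -d
    podman ps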