
pSeven Enterprise deployment

This guide covers the initial deployment and setup of pSeven Enterprise.


pSeven Enterprise deployment requires a Kubernetes cluster and depends on several infrastructure services, some of them optional.

flowchart TB
  classDef dashed stroke:#777,stroke-dasharray: 5 5;

  subgraph vendorinfra [pSeven SAS]
    vendorimgregistry[(Docker<br>Registry)]:::dashed
  end

  subgraph hostinfra2 [Infrastructure services]
    postgres[(PostgreSQL<br>database)]
    nfs[(Dedicated<br>NFS<br>file storage)]
    smtp[SMTP<br>server]:::dashed
    ldap[AD/LDAP<br>provider]:::dashed
    hostimgregistry[(Local<br>Docker<br>Registry)]:::dashed
  end

  subgraph kubernetes [Kubernetes cluster]
    pseven[[<b><br>pSeven Enterprise<br><br></b>]]
  end

  subgraph extnodes [Windows nodes]
    extnode[[pSeven Enterprise<br>Windows extension]]:::dashed
  end

  subgraph hostinfra1 [Infrastructure services]
    rproxy[Reverse proxy<br>Load balancer]:::dashed
  end

  rproxy -.-> pseven

  pseven ~~~ extnode
  extnode -.-> pseven

  pseven --> postgres
  pseven --> nfs
  pseven -.-> smtp
  pseven -.-> ldap
  pseven -.-> hostimgregistry

  pseven -.-> vendorimgregistry

Get help with planning your deployment

Fill in this survey to plan your deployment together with the pSeven SAS support team.

Hardware resources summary

Minimum deployment

Resource | Qty | Requirements
Kubernetes control plane node | 1 | 4 CPU, 8 GB RAM, 120 GB storage (1500 IOPS), 1 Gb/s network
Kubernetes worker node | 5 | 8 CPU, 16 GB RAM, 120 GB storage, 1 Gb/s network
PostgreSQL server | 1 | 2 CPU, 4 GB RAM, 50 GB storage, 1 Gb/s network
NFS server | 1 | 2 CPU, 4 GB RAM, 500 GB storage, 1 Gb/s network
(optional) Docker Registry | 1 | 4 CPU, 8 GB RAM, 200 GB storage, 1 Gb/s network
(optional) Reverse proxy | 1 | 2 CPU, 4 GB RAM, 10 GB storage, 1 Gb/s network
(optional) Windows node [1] | 1 | 4 CPU, 16 GB RAM, 500 GB storage, 1 Gb/s network
  1. Windows nodes are required only if users need to run tasks on Windows. Their hardware requirements are determined by the software they will run.

Fault-tolerant deployment

Resource | Qty | Requirements
Kubernetes control plane node | 3 | 4 CPU, 8 GB RAM, 120 GB storage (7000/4000 IOPS read/write), 1 Gb/s network
Kubernetes worker node [1] | 5+ | 8 CPU, 16 GB RAM, 120 GB storage, 1 Gb/s network
PostgreSQL server | 1 | 4 CPU, 8 GB RAM, 100 GB storage, 1 Gb/s network
NFS server | 1 | 4 CPU, 8 GB RAM, 1+ TB storage, 10 Gb/s network
Docker Registry | 1 | 4 CPU, 8 GB RAM, 200 GB storage (2400/800 IOPS read/write), 1 Gb/s network
Reverse proxy | 1 | 2 CPU, 4 GB RAM, 10 GB storage, 1 Gb/s network
(optional) Windows node [2] | 1+ | 8 CPU, 32 GB RAM, 1 TB storage, 1 Gb/s network
  1. The required number of Kubernetes worker nodes depends strongly on the number of users and expected load. Discuss this requirement with your corresponding pSeven SAS or VAR account manager.
  2. Windows nodes are required only if users need to run tasks on Windows. Their hardware requirements are determined by the software they will run.

Requirements

Helm 3.13.0 or newer.
Use compatible versions of Helm and Kubernetes; see the Helm Version Support Policy.

Kubernetes 1.28.0 or newer. Updates are tested on Kubernetes 1.28.x.
It is strongly recommended to use a dedicated cluster that is available exclusively to pSeven Enterprise. Using managed Kubernetes is recommended; if this is not an option, you can set up your own cluster.

Minimum cluster requirements

Minimum deployment:
  • 1 dedicated control plane node: 4 CPU, 8 GB RAM, 120 GB storage (1500 IOPS), 1 Gb/s network.
  • 5 worker nodes: 8 CPU, 16 GB RAM, 120 GB storage, 1 Gb/s network.
  • Clock synchronization (NTP) on all nodes required.

Warning

This deployment becomes unserviceable if the control plane node is down.

Fault-tolerant deployment:
  • 3 dedicated control plane nodes: 4 CPU, 8 GB RAM, 120 GB storage (7000/4000 IOPS read/write), 1 Gb/s network.
  • 5 worker nodes: 8 CPU, 16 GB RAM, 120 GB storage, 1 Gb/s network.
  • Clock synchronization (NTP) on all nodes required.

PostgreSQL 12 or newer.

Database requirements
  • Database server (minimum deployment): 2 CPU, 4 GB RAM, 50 GB storage, 1 Gb/s network.
  • Database server (fault-tolerant deployment): 4 CPU, 8 GB RAM, 100 GB storage, 1 Gb/s network.
  • Connection pooling - for example, PgBouncer.
  • Clock synchronization (NTP) required.

NFS storage.

NFS requirements

Minimum deployment:
  • NFS server: 2 CPU, 4 GB RAM, 500 GB storage, 1 Gb/s network (10 Gb/s recommended).
  • Storage features: ACL, file locks; disk quota support recommended; using NFSv3 is recommended over NFSv4.
  • Clock synchronization (NTP) required.

Warning

  • Overall performance of pSeven Enterprise depends strongly on the NFS server network connection and storage performance. Slow connections and storage result in a poor user experience.
  • Many user operations will be slow if the same server provides storage to other applications (non-dedicated storage).
  • You will not be able to set user and workspace storage quotas if the server does not support disk quotas.
  • NFSv4 support is experimental. Using NFSv4 might impact performance of pSeven Enterprise.

Fault-tolerant deployment:
  • Dedicated NFS server: 4 CPU, 8 GB RAM, 1+ TB storage (10000/5000 IOPS read/write), 10 Gb/s network.
  • Storage features: ACL, file locks, disk quota.
  • Using NFSv3 is recommended unless you are required to use NFSv4. NFSv4 support is experimental, and using it might impact performance of pSeven Enterprise.
  • Must serve a single pSeven Enterprise deployment exclusively (dedicated storage).
  • Clock synchronization (NTP) required.
  • Overall performance of pSeven Enterprise depends strongly on the NFS server network connection and storage performance. Faster networks and storage provide a better user experience.

(optional) Docker Registry 2.7.1 or newer.

Registry requirements

Warning

If you deploy without a local Registry and pull images from the pSeven SAS Registry over the Internet, user tasks will show delays on launch and will often stop on timeout because loading the required images is too slow.

  • Local Registry: 4 CPU, 8 GB RAM, 200 GB storage (2400/800 IOPS read/write), 1 Gb/s network.
  • Images copied from registry.pseven.io.
  • Clock synchronization (NTP) required.

(optional) SMTP server.

Deployment options
  • You can deploy without an SMTP server.

Warning

pSeven Enterprise will be unable to send password reset links and mail notifications to users.

  • The server must support basic password authentication.
  • Clock synchronization (NTP) required.

(optional) Reverse proxy (NGINX, Traefik).

Reverse proxy requirements
  • You can deploy without a reverse proxy.

Warning

  • User traffic will go unencrypted over HTTP.
  • Higher denial of service probability: connections to pSeven Enterprise may fail if any of the Kubernetes nodes is down.
  • 2 CPU, 4 GB RAM, 10 GB storage, 1 Gb/s network; actual requirements depend on the reverse proxy you run.
  • WebSocket support.
  • SSL/TLS termination set up.
  • Load balancing to Kubernetes worker nodes, with health check enabled.
  • Clock synchronization (NTP) required.

(optional) Windows extension nodes.

Deployment options
  • You can deploy without extension nodes.

Warning

Users will be unable to run Windows software in their tasks.

  • A physical Windows 10 workstation or a standalone server. The server must not be an Active Directory domain controller.
  • Windows 10 Pro 64-bit, version 1909 or newer; Windows Server 2019 or newer.
  • Hardware requirements are determined by the software that the nodes will run. Commonly: 8 CPU, 32 GB RAM, 1 TB storage.
  • Clock synchronization (NTP) required.

It is also possible to deploy without extension nodes and add them later on demand.

Installation package

To get the pSeven Enterprise package, send a request to your corresponding pSeven SAS or VAR account manager. You will receive the following:

  • The pSeven Enterprise Helm chart, which is a small file named pseven⁠-⁠{YYYY.MM.DD}.tgz, where {YYYY.MM.DD} is the release version. For example: pseven⁠-⁠2023.09.25.tgz.
  • Access credentials for the pSeven SAS Registry, which provides the pSeven Enterprise Docker images to deploy.

The installation package does not include a license key, because each pSeven Enterprise license is bound to a specific Installation ID, which you can only get after the installation.

Deployment overview

This section lists the general stages of a pSeven Enterprise deployment. Details and instructions are provided in the sections that follow.

  1. Prepare the Kubernetes cluster - see Kubernetes cluster.
    • It is strongly recommended to use a dedicated cluster. Also check Configuration requirements.
    • Set up clock synchronization (NTP) on all cluster nodes.
    • Create the pSeven Enterprise installation namespace (pseven⁠-⁠ns).
  2. Install a recent Helm version - see Helm.
  3. Set up the PostgreSQL database server - see PostgreSQL server.
    • Enable connection pooling (PgBouncer).
    • Create the pSeven Enterprise database and user. The user must have full access to that database, including permissions to add and remove tables and schemas.
    • Set up password authentication for the pSeven Enterprise user.
  4. Set up the NFS file storage server - see NFS server.
    • Create and export 3 data storage directories: user data, workspace data, and shared data. The directories must be exported with the no_root_squash flag, and the user with UID 11111 must be set as the owner with full access.
    • Enable disk quota support. If the storage directories are on different disk partitions, enable quotas on the user data and workspace data partitions.
  5. If you plan to set up secure LDAP integration (use LDAPS), add the LDAP server public key certificate to the shared data directory on the NFS server. See LDAPS setup.
  6. Set up the reverse proxy (load balancer, ingress controller) - see Reverse proxy.
    • Check that the proxy properly sets the X-Forwarded-* headers.
    • Enable WebSocket support.
    • Configure SSL/TLS termination.
    • Configure load balancing to Kubernetes worker nodes, enable health checks.
  7. (optional) Set up the SMTP server - see SMTP server.
    • Create the mail account for pSeven Enterprise, set up password authentication.
  8. Set up the local Docker Registry - see Registry setup.
    • Get the full version number from the pSeven Enterprise Helm chart.
    • Copy the pSeven Enterprise images from the pSeven SAS Registry.
    • Create the pSeven Enterprise user and set up password authentication for this user.
  9. Check that your network satisfies the pSeven Enterprise requirements and allows all connections established by pSeven Enterprise. See Network parameters.
  10. Generate the pSeven Enterprise secrets - see Secrets.
    • Get the full version number from the pSeven Enterprise Helm chart.
    • Run the pseven⁠-⁠secretgen image to get the secrets file pseven⁠-⁠secrets.yaml.
    • Backup the secrets file.
  11. Prepare your deployment configuration - see Deployment configuration.
    • Generate the deployment configuration file values.yaml from the pSeven Enterprise Helm chart.
    • Set parameters in values.yaml.
  12. If you are going to run installation as a user who lacks permissions to create cluster-wide objects, create the required Kubernetes objects before installation and specify them in values.yaml - see Installing with limited cluster permissions. Otherwise you can skip this step to let Helm create the required objects during installation.
  13. Install the pSeven Enterprise Helm chart - see Installation.
    • Use the prepared values.yaml and pseven⁠-⁠secrets.yaml files.
    • Install to the pSeven Enterprise namespace (pseven⁠-⁠ns) as a named release (pseven⁠-⁠rl), save the installation log.
    • Monitor the installation progress for errors and warnings.
    • Run the post-install commands that Helm writes to the log.
  14. Secure the built-in admin account - see Admin access.
  15. Set up the license - see License.
    • Get your Installation ID (generated during deployment).
    • Request a license for your Installation ID.
    • Add the license to your deployment.
  16. Check user sign-in - see User access.
  17. Check the storage quotas - see User storage quotas.
    • Check that the storage quota management is enabled.
    • Set the default user storage quota.
  18. Perform basic tests on your deployment - see Verify installation.
  19. Set up the Windows extension nodes (optional) - see Extension nodes and the Extension node deployment guide.
    • Get the extension node setup package from pSeven Enterprise and install it on each node.
    • Enable clock synchronization (Internet Time) on every extension node.
    • Test each node.
  20. If your deployment should support a public collection of apps, create a public workspace before you add users - see Workspaces.
  21. Continue with accounts setup.

Kubernetes cluster

A production deployment of pSeven Enterprise requires a dedicated, fault-tolerant, production-grade Kubernetes cluster with high availability. You can use one of the many managed Kubernetes solutions (see Turnkey Cloud Solutions and Kubernetes Partners), or build your own production Kubernetes cluster.

Compatible versions

pSeven Enterprise is compatible with Kubernetes 1.28.0 and newer; updates are tested on Kubernetes 1.28.x. Newer versions are expected to be compatible but are not tested. Older versions are not supported.

Configuration requirements

It is strongly recommended to deploy pSeven Enterprise on a dedicated Kubernetes cluster - that is, the cluster should not run any other application workloads.

If you decide to use a new managed Kubernetes cluster, opt for the regional master configuration with load balancer enabled for the control plane nodes.

If you decide to deploy your own cluster, on premises or in the cloud, observe the following recommendations:

  • Set up your cluster following the guidelines found in the Kubernetes documentation, sections Production environment and Best practices.
  • If you use Docker Engine as container runtime, make sure its version is 20.10.10 or newer. With older versions, pSeven Enterprise installation fails because they do not support the clone3 syscall (see changes in 20.10.10).
  • If you install Kubernetes version 1.26 or later, note that you must use a container runtime interface that supports CRI API v1 - see CRI version support.
  • Configure the node filesystem to keep the /var directory on the system root filesystem, as described in Local ephemeral storage.
  • Make sure that the node disk partition mounted to / (root) has at least 50 GB of free disk space.

Clock synchronization

pSeven Enterprise requires clock synchronization for all Kubernetes nodes. Whichever cluster solution you use, make sure that an NTP client is installed and enabled on every cluster node.
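For example, on nodes running a systemd-based Linux distribution you can check the synchronization status and, if needed, install and enable an NTP client such as chrony (package and service names vary by distribution; the commands below are an illustration for Ubuntu Server):

timedatectl status | grep "System clock synchronized"  # should report "yes"
sudo apt install chrony                                 # install an NTP client if none is present
sudo systemctl enable --now chrony                      # start the client now and on boot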

Installation namespace

pSeven Enterprise should be deployed in a dedicated namespace (see Namespaces). Provide a new namespace to use for the pSeven Enterprise deployment. This guide and other pSeven Enterprise administration guides assume that the pSeven Enterprise installation namespace is called pseven⁠-⁠ns. If you choose another namespace, replace pseven⁠-⁠ns in example commands in the guides with the namespace you actually use.
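For example, to create and verify the namespace used throughout this guide:

kubectl create namespace pseven-ns
kubectl get namespace pseven-ns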

Cluster-wide objects

pSeven Enterprise deployment requires creating several Kubernetes objects, some of them cluster-wide: ClusterRole and ClusterRoleBinding, PriorityClass, PersistentVolume.

If you install as a cluster admin who has cluster-wide permissions, the required objects are created automatically. Otherwise, if you are going to install as a user with limited permissions, you have to create the objects manually before installation, as detailed in Installing with limited cluster permissions.

Helm

Helm is the package manager for Kubernetes.

It is recommended to use a recent release of Helm - see Installing Helm. Select the Helm version compatible with your Kubernetes version - see Helm Version Support Policy.

The minimum version of Helm compatible with pSeven Enterprise is Helm 3.13.0. Earlier versions are not compatible and must be updated.

PostgreSQL server

pSeven Enterprise requires PostgreSQL version 12 or greater. Earlier versions are not compatible. You can use a managed PostgreSQL instance from your cloud service provider or a PostgreSQL server that exists in your organization.

  • Set up the database connection pooling (PgBouncer).
  • Create a new database and a new database user for pSeven Enterprise.
  • Set up password authentication for the pSeven Enterprise database user - see Client Authentication.
  • Verify that the server allows connecting to the database with the above username and password.
  • Verify that the user you have created for pSeven Enterprise has full access to the pSeven Enterprise database, including permissions to add and remove tables and schemas.
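As an illustration, on a self-managed PostgreSQL server the database and user could be created with psql as follows; the database name pseven, the user name pseven, and the password are placeholders for this sketch (on a managed instance, use your provider's tools instead):

sudo -u postgres psql
CREATE USER pseven WITH PASSWORD 'choose-a-strong-password';  -- pSeven Enterprise database user
CREATE DATABASE pseven OWNER pseven;                          -- the owner has full access, including tables and schemas
GRANT ALL PRIVILEGES ON DATABASE pseven TO pseven;
\q

You can then verify the connection, for example with psql -h {database host} -U pseven -d pseven.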

Connection pooling

pSeven Enterprise requires connection pooling on the PostgreSQL server in order to avoid issues caused by the PostgreSQL connection limits.

The recommended connection pooling tool for pSeven Enterprise is PgBouncer, which serves as a proxy between the PostgreSQL server and database clients. PgBouncer can be installed on the PostgreSQL server host or on a separate server.

  • In Ubuntu, Debian distributions: apt-get install pgbouncer.
  • In RedHat, CentOS distributions: yum install pgbouncer.
  • Also available from the Postgres package repositories (APT, YUM), which may provide a more recent version than your distribution repository.

For setup instructions, see Usage in the PgBouncer documentation.
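A minimal pgbouncer.ini sketch, assuming PgBouncer runs on the database host, the pSeven Enterprise database is named pseven, and clients connect to port 6432; all names, paths, and values are placeholders, and the authentication setup is described in the PgBouncer documentation:

[databases]
; pSeven Enterprise database on the local PostgreSQL server
pseven = host=127.0.0.1 port=5432 dbname=pseven

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256    ; match how passwords are stored on your PostgreSQL server
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session          ; PgBouncer's default pooling mode
max_client_conn = 1000
default_pool_size = 50

With such a setup, the pSeven Enterprise database connection parameters would point at the PgBouncer port (6432 here) rather than directly at PostgreSQL.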

NFS server

pSeven Enterprise requires an NFS server to store user files, workspace files and apps, and system data.

  • This server must be used exclusively by a single pSeven Enterprise deployment. It must not serve any other applications or serve multiple pSeven Enterprise instances.
  • The server storage must support Access Control Lists (ACL).
  • The server storage must support file locking (the flock() syscall).
  • Using NFSv3 is recommended, unless your organization policies require NFSv4. NFS servers normally enable both NFSv3 and NFSv4 by default. Both are compatible with pSeven Enterprise, however NFSv4 support is experimental, and using it might impact performance. Older NFS versions are not supported.
  • The server should designate 3 data storage directories to be used by pSeven Enterprise: user data, workspace data, and shared data. The data storage directories must be exported with the no_root_squash flag and must be owned by the user with UID 11111 who has full access. See Data storage directories.
  • Disk quota support is required to enable storage quotas in pSeven Enterprise. See Enabling storage quota for an example.
Data segregation

The data storage directories are independent. If required, you can create them on different servers and connect those servers to pSeven Enterprise.

The following sections use nfs-kernel-server on Ubuntu Server as an example of configuring an NFS server for pSeven Enterprise. You can enable nfs-kernel-server as follows:

sudo apt install nfs-kernel-server
sudo systemctl enable rpc-statd
sudo systemctl start rpc-statd
sudo systemctl enable --now nfs-server

Note that nfs-server requires rpc-statd but does not enable it automatically. Check that rpc-statd is running and check the NFS server status:

sudo systemctl status rpc-statd
ps -e | grep rpc.statd
sudo systemctl status nfs-kernel-server

Data storage directories

pSeven Enterprise uses 3 data storages:

  • User data storage ({user data}) - contains user home directories.
  • Workspace data storage ({app data}) - contains workspace files and apps published to AppsHub.
  • Shared data storage ({shared data}) - contains system data.

Each storage must be a separate NFS share. For simplicity, the examples below assume that you create all those shares on the same NFS server, although this is not required.

  1. Create directories and configure access. pSeven Enterprise connects to NFS shares as the system user with UID 11111. This user must be the shared directory owner with full access.

    sudo mkdir -p /var/pSeven  # Create the parent directory for file storages.
    cd /var/pSeven
    sudo mkdir UserData WorkspaceData SharedData  # Create file storage directories.
    sudo chown 11111 UserData WorkspaceData SharedData  # Set the pSeven Enterprise system user as the owner.
    sudo chmod u+rwx UserData WorkspaceData SharedData  # Set full access rights for that user.
    
  2. Export directories:

    echo -e "\n/var/pSeven/UserData *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)" | sudo tee -a /etc/exports > /dev/null
    echo -e "\n/var/pSeven/WorkspaceData *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)" | sudo tee -a /etc/exports > /dev/null
    echo -e "\n/var/pSeven/SharedData *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)" | sudo tee -a /etc/exports > /dev/null
    sudo exportfs -av
    

    The exportfs -av command should output a list of exported directories similar to the following:

    exporting *:/var/pSeven/SharedData
    exporting *:/var/pSeven/WorkspaceData
    exporting *:/var/pSeven/UserData
    
  3. Apply the NFS configuration:

    sudo systemctl restart nfs-kernel-server
    
  4. Check the NFS server status:

    sudo systemctl status nfs-kernel-server
    

Enabling storage quota

pSeven Enterprise supports storage quotas and enables them automatically if you enable disk quota support on the NFS server, as described in this section.

  1. Disk quotas are configured per partition. Identify the partition that hosts the user data and workspace data storages, and its mount point (further denoted {mnt}). If the storages are located on different partitions or NFS servers, configure each of them as described in this and further sections.
  2. Install the disk quota management tools package on the server - for example, in Ubuntu Server:

    sudo apt-get install quota
    
  3. Open /etc/fstab and add the usrquota,grpquota flags to the {mnt} mount point options (an example fstab line is shown after this list). Save your edits, then re-mount the filesystem:

    sudo mount -o remount {mnt}
    
  4. Enable disk quotas:

    sudo quotacheck -ugm {mnt}
    sudo quotaon -v {mnt}
    

    If the sudo quotaon command returns an error message indicating that the system kernel does not support storage quotas, you should check and install the required kernel modules if necessary (see Checking for the kernel modules).

  5. pSeven Enterprise requires a remote quota server (rpc-rquotad) running on the file storage host. The quota server must allow the remote setting of quotas (run with the --setquota or -S option).

    • Add rpc-rquotad to startup, allow the remote setting of quotas. Refer to your Linux distribution documentation for details on configuring system startup. You can find an example for Ubuntu Server in Enabling the remote quota service.
    • Reboot the file storage server and verify that the remote quota server has started automatically.

    For testing purposes, you can also start the remote quota server manually with sudo rpc.rquotad -S.
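For reference (step 3 above), a hypothetical /etc/fstab entry with the quota options added, assuming the storages live on /dev/sdb1 mounted at /var/pSeven with an ext4 filesystem:

/dev/sdb1  /var/pSeven  ext4  defaults,usrquota,grpquota  0  2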

Checking for the kernel modules

To enable disk quotas, the appropriate operating system kernel modules must be installed and loaded. You can check for them using the following command:

lsmod | grep -i quota

This command should output the following list, indicating that the required modules are in place:

quota_v1               16384  0
quota_v2               16384  2

Otherwise, you must install these modules using the following command:

apt install linux-image-extra-virtual

If you use Amazon Web Services (AWS), you must additionally install the linux-modules-extra-aws package using the following command:

apt install linux-modules-extra-aws

Use the following commands to load and check for the required modules:

modprobe quota_v1
modprobe quota_v2
lsmod | grep -i quota

If the quota_v1 and quota_v2 modules are still missing from the lsmod output, contact Technical support for further guidance. If these modules are in the list, run the following commands to ensure they are autoloaded:

echo quota_v1 >> /etc/modules
echo quota_v2 >> /etc/modules

Finally, enable disk quotas ({mnt} is the storage partition mount point):

sudo quotaon -v {mnt}

Enabling the remote quota service

The rpc-rquotad service must be configured so that it starts automatically and allows the setting of quotas. For example, on Ubuntu Server you can configure this service as follows:

  1. Create /etc/systemd/system/rpc-rquotad.service and add the following settings there:

    rpc-rquotad.service
    [Unit]
    Description=Quota remote RPC service
    After=network.target
    StartLimitIntervalSec=0
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/sbin/rpc.rquotad -S -F
    
    [Install]
    WantedBy=multi-user.target
    
  2. Reload the systemd configuration and start the rpc-rquotad service:

    systemctl daemon-reload
    systemctl start rpc-rquotad
    
  3. Enable automatic start of the rpc-rquotad service:

    systemctl enable rpc-rquotad
    
  4. Finally, reboot the file storage server and verify that the remote quota service has started automatically.

LDAPS setup

pSeven Enterprise supports AD DS and LDAP integration and the secure LDAPS protocol (LDAP over SSL/TLS). The use of LDAPS requires adding the LDAP server public key to the pSeven Enterprise shared data storage before installation.

  1. Get the LDAPS public key certificate that is used in your organization. The certificate must be exported to a DER or Base-64 encoded X.509 file. Request the certificate file from your LDAP services admin, or get it with a command like the following:

    openssl s_client -connect {LDAP server address}:{port number} < /dev/null > ldaps.cert
    
  2. On the NFS server, in the directory you have designated for the pSeven Enterprise shared data storage, create the subdirectory named certs.

  3. Copy the certificate file to certs. The file must be named ldaps.cert as in the example above. The resulting path to the certificate must be {shared data}/certs/ldaps.cert, where {shared data} is the pSeven Enterprise shared data storage directory path.

pSeven Enterprise will import the LDAP server public key from {shared data}/certs/ldaps.cert during installation. The rest of the LDAPS setup is done after the installation.
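Optionally, before copying the file you can check that it contains a readable certificate (openssl picks up the first PEM certificate block in the file):

openssl x509 -in ldaps.cert -noout -subject -issuer -enddate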

Reverse proxy

pSeven Enterprise should be placed behind a reverse proxy to secure the deployment and enable load balancing - for example, NGINX or Traefik.

Before you deploy pSeven Enterprise, ensure that:

  • Your reverse proxy properly sets the X-Forwarded-* headers for the forwarded requests.
  • The proxy supports the WebSocket protocol.
  • SSL/TLS termination is enabled (see examples: NGINX, Traefik).
  • The proxy is configured to load balance the user traffic to Kubernetes nodes.
  • The proxy sends health check requests to Kubernetes nodes (see examples: NGINX, Traefik).

See also Reverse proxy configuration notes.
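As an illustration only, a minimal NGINX configuration sketch covering these points, assuming two worker nodes and the default edgerouter user traffic port 30080; the hostnames and certificate paths are placeholders. Note that open-source NGINX provides passive health checks through the max_fails and fail_timeout parameters; active health checks against the health check port (30001 by default) require additional tooling.

upstream pseven_workers {
    # Kubernetes worker nodes; 30080 is the default edgerouter port for user traffic.
    server worker-1.example.com:30080 max_fails=3 fail_timeout=10s;
    server worker-2.example.com:30080 max_fails=3 fail_timeout=10s;
}

server {
    listen 443 ssl;
    server_name pseven.example.com;

    # SSL/TLS termination
    ssl_certificate     /etc/nginx/certs/pseven.crt;
    ssl_certificate_key /etc/nginx/certs/pseven.key;

    location / {
        proxy_pass http://pseven_workers;

        # X-Forwarded-* headers for the forwarded requests
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host  $host;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}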

SMTP server

pSeven Enterprise can use an SMTP server to send emails to users - for example, notifications and password reset links. You can also deploy without an SMTP server; those features will then be unavailable.

  • Create a new mail account for pSeven Enterprise.
  • Set up password authentication for that account.

Registry setup

Running pSeven Enterprise requires a Docker Registry, which stores the pSeven Enterprise Docker images. The Registry is also required during the installation.

It is recommended to set up a local Registry in the same network with pSeven Enterprise and copy images from the pSeven SAS Registry (registry.pseven.io) before the installation. You can use a managed Registry instance from your cloud service provider or a Registry that exists in your organization.

  • If you are deploying a new Registry, see Deploy a registry server for instructions.
  • pSeven Enterprise requires a username and a password to connect to the Registry. Create the pSeven Enterprise user and set up password authentication - see Restricting access.

To prepare for the pSeven Enterprise installation, copy its images from the pSeven SAS Registry to your local Registry:

  1. Log in to the pSeven SAS Registry and the local Registry.

    docker login registry.pseven.io  # Log in to pSeven SAS Registry
    docker login {registry address}  # Log in to your local Registry
    
  2. Get the list of image names from the pSeven Enterprise Helm chart:

    helm show values pseven-{YYYY.MM.DD}.tgz | grep -oE "registry\.pseven\.io\/[^:]+"
    
  3. Get the full version number from the pSeven Enterprise Helm chart:

    helm show chart pseven-{YYYY.MM.DD}.tgz | grep appVersion | cut -d" " -f2
    
  4. Copy each image to your local Registry, specifying the full version number. For example (replace {registry} with your Registry address, {version} with the full version number):

    docker pull registry.pseven.io/pseven-app:{version}
    docker tag registry.pseven.io/pseven-app:{version} {registry}/pseven/pseven-app:{version}
    docker push {registry}/pseven/pseven-app:{version}
    
  5. Remove images from the local cache, for example:

    docker image rm registry.pseven.io/pseven-app:{version}
    docker image rm {registry}/pseven/pseven-app:{version}
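
Steps 2-5 above can also be scripted. A sketch that copies every image listed in the chart, assuming a Bash shell and the same {registry} placeholder:

VERSION=$(helm show chart pseven-{YYYY.MM.DD}.tgz | grep appVersion | cut -d" " -f2)
for SRC in $(helm show values pseven-{YYYY.MM.DD}.tgz | grep -oE "registry\.pseven\.io\/[^:]+"); do
    DST="{registry}/pseven/${SRC#registry.pseven.io/}"  # e.g. registry.pseven.io/pseven-app -> {registry}/pseven/pseven-app
    docker pull "${SRC}:${VERSION}"
    docker tag  "${SRC}:${VERSION}" "${DST}:${VERSION}"
    docker push "${DST}:${VERSION}"
    docker image rm "${SRC}:${VERSION}" "${DST}:${VERSION}"  # clean up the local cache
done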
    

Network parameters

All nodes used by pSeven Enterprise and all infrastructure service hosts must support a connection speed of 1 Gb/s, and the NFS server specifically must support 10 Gb/s. Network connections with higher latency or lower bandwidth often cause performance loss and system failures.

Your network configuration must allow all connections established by pSeven Enterprise. See the diagram below for an overview and the full details in the table that follows.

flowchart TB
  classDef dashed stroke:#777,stroke-dasharray: 5 5;

  subgraph vendorinfra [pSeven SAS]
    vendorimgregistry[(Docker<br>Registry)]:::dashed
  end

  subgraph hostinfra2 [Infrastructure services]
    hostimgregistry[(Local<br>Docker<br>Registry)]:::dashed
    ldap[AD/LDAP<br>provider]:::dashed
    smtp[SMTP<br>server]:::dashed
    nfs[(Dedicated<br>NFS<br>file storage)]
    postgres[(PostgreSQL<br>database)]
  end

  subgraph extnodes [Windows nodes]
    extnode[[pSeven Enterprise<br>Windows extension]]:::dashed
  end

  subgraph kubernetes [Kubernetes cluster]
    kubenodes[[<br><b>pSeven Enterprise</b><br>worker nodes<br><br>]]
  end

  subgraph hostinfra1 [Infrastructure services]
    rproxy[Reverse proxy<br>Load balancer]
  end

  subgraph clients [Clients]
    webui(("&nbsp;&nbsp;Web UI&nbsp;&nbsp;<br>user"))
    restapi((REST API<br>client))
    admin((Deployment<br>admin))
  end

  kubenodes --"25*/tcp:SMTP<br>465*/tcp:SMTPS<br>587*/tcp:SMTPS"---> smtp
  kubenodes --"389/tcp:LDAP<br>636/tcp:LDAPS"---> ldap
  kubenodes --"5432*/tcp+udp:PostgreSQL"---> postgres
  kubenodes --"111/tcp+udp:NFS<br>1110/tcp+udp:NFS<br>2049/tcp+udp:NFS<br>4045/tcp+udp:NFS"---> nfs
  kubenodes --"443/tcp:HTTPS"---> hostimgregistry
  kubenodes -."443/tcp:HTTPS"..-> vendorimgregistry

  webui --"443/tcp:HTTPS"--> rproxy
  restapi --"443/tcp:HTTPS"--> rproxy
  rproxy --"30080*/tcp:HTTP<br>30001*/tcp:HTTP"--> kubenodes

  admin --"6443/tcp:Kubernetes API"--> kubernetes
  admin --"3389/tcp:RDP"--> extnode

  extnode --"31194*/tcp+udp:OpenVPN"--> kubenodes

  admin --"5432/tcp+udp:PostgreSQL"--> postgres
  admin --"111/tcp+udp:NFS<br>1110/tcp+udp:NFS<br>2049/tcp+udp:NFS<br>4045/tcp+udp:NFS"--> nfs
Notes
  1. The AD/LDAP provider is optional. pSeven Enterprise supports the AD DS and LDAP integration but does not require it.
  2. The SMTP server is optional. You can skip the related installation steps. If you deploy without an SMTP server, pSeven Enterprise will be unable to send emails to users - for example, notifications or password reset links.
  3. Windows extension nodes are optional. You can disable the support for Windows extension nodes before the pSeven Enterprise installation. You can also keep it enabled but do not deploy any Windows nodes, and add them later on demand.
  4. Also before the installation, you can change the following ports used by pSeven Enterprise (marked * in the diagram):
    • The ports for inbound connections (defaults are 30080 and 30001).
    • The OpenVPN connection port (default is 31194). This port is used only if there are Windows extension nodes connected to pSeven Enterprise.
    • The PostgreSQL database connection port. There is no default; 5432 is the commonly used port.
    • The SMTP server connection port. There is no default; the port number depends on your SMTP server type.
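To spot-check connectivity from a Kubernetes worker node to the infrastructure services, you can use netcat with the ports shown in the diagram; the hostnames below are placeholders, and the ports must be adjusted if you changed the defaults:

nc -zv {PostgreSQL host} 5432
nc -zv {NFS host} 2049
nc -zv {local Registry host} 443
nc -zv {AD/LDAP host} 636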

The table below contains full details about the pSeven Enterprise network parameters. Note that certain ports are configurable, and certain components run conditionally - see notes following the table.

Component Listen ports/protocols Outbound connections
appserver 10080/tcp:HTTP To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
keycloak: 10080/tcp:HTTP
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
Mail server: 25/tcp:SMTP
Mail server: 465,587/tcp:SMTPS
File storage: 111,1110,2049,4045/tcp+udp:NFS
appshubui 10080/tcp:HTTP none
appworkerappjobs none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
appworkerblockjobs 10000-65535/tcp:HTTP To pSeven Enterprise components:
appserver: 10080/tcp:HTTP
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
appworkerfilesystemjobs none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
appworkerworkflowjobs none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
blockfilesgate 139/tcp:NetBIOS
445/tcp:SMB
To infrastructure components:
File storage: 111,1110,2049,4045/tcp+udp:NFS
blocknetworkgate 31194/tcp+udp:OpenVPN To pSeven Enterprise components:
appserver: 10080/tcp:HTTP
blockfilesgate: 139/tcp:NetBIOS
blockfilesgate: 445/tcp:SMB
blocksshgate: 11122/tcp:SSH
blocksshgate: 10000-65535/tcp:HTTP
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
blockrouter 10080/tcp:HTTP To pSeven Enterprise components:
appworkerblockjobs: 10000-65535/tcp:HTTP
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
blocksshgate: 11122/tcp:SSH
blocksshgate: 10000-65535/tcp:HTTP
redis: 16379/tcp:Redis
blocksshgate 11122/tcp:SSH
10000-65535/tcp:HTTP
none
dataeditorui 10080/tcp:HTTP none
edgerouter 30080/tcp:HTTP
30001/tcp:HTTP
To pSeven Enterprise components:
appserver: 10080/tcp:HTTP
blockrouter: 10080/tcp:HTTP
keycloak: 10080/tcp:HTTP
files: 10080/tcp:HTTP
psevenui: 10080/tcp:HTTP
appshubui: 10080/tcp:HTTP
dataeditorui: 10080/tcp:HTTP
valueeditorui: 10080/tcp:HTTP
files 10080/tcp:HTTP To infrastructure components:
File storage: 111,1110,2049,4045/tcp+udp:NFS
htcondorexecute 19618/tcp:Condor
10000-65535/tcp:HTTP
To pSeven Enterprise components:
appserver: 10080/tcp:HTTP
blocksshgate: 10000-65535/tcp:HTTP
blocksshgate: 11122/tcp:SSH
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
redis: 16379/tcp:Redis

To infrastructure components:
File storage: 111,1110,2049,4045/tcp+udp:NFS
htcondormanager 19618/tcp:Condor To pSeven Enterprise components:
htcondorsubmit: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP
htcondorsubmit 19618/tcp:Condor To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorexecute: 19618/tcp:Condor
htcondorexecute: 10000-65535/tcp:HTTP

To infrastructure components:
File storage: 111,1110,2049,4045/tcp+udp:NFS
keycloak 10080/tcp:HTTP To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
AD/LDAP provider: 389/tcp:LDAP
AD/LDAP provider: 636/tcp:LDAPS
launcher 10080/tcp:HTTP To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
memcached 11211/tcp:Memcached none
post1_makeextnodepkgs none File storage: 111,1110,2049,4045/tcp+udp:NFS
post1_migratedb none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
post2_cleanupruns none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
post2_fixstorages none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
post2_scanapps none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
post3_moveuserstokeycloak none To pSeven Enterprise components:
keycloak: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
post3_tweakkeycloakconfig none To pSeven Enterprise components:
keycloak: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
post3_upgradeapps none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
psevenui 10080/tcp:HTTP none
redis 16379/tcp:Redis none
upkeepkblockeditsessions none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
upkeeplogs none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
upkeepquotas none To pSeven Enterprise components:
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
upkeepruns none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
upkeepwaitingruns none To pSeven Enterprise components:
htcondormanager: 19618/tcp:Condor
htcondorsubmit: 19618/tcp:Condor
launcher: 10080/tcp:HTTP
memcached: 11211/tcp:Memcached
redis: 16379/tcp:Redis

To infrastructure components:
PostgreSQL database: 5432/tcp+udp:PostgreSQL
File storage: 111,1110,2049,4045/tcp+udp:NFS
valueeditorui 10080/tcp:HTTP none
Notes
  1. The PostgreSQL database connection port can be configured while you prepare for the installation. 5432 is the commonly used port.
  2. The actually used SMTP server connection port depends on your SMTP server type and can be configured while you prepare for the installation.
  3. The blocknetworkgate listen port for OpenVPN connections can be configured while you prepare for the installation (31194 is the default). If you disable the support for Windows extension nodes, this port is not used.
  4. The edgerouter listen ports 30080 (user client connections) and 30001 (health check requests) can be configured while you prepare for the installation.
  5. blockfilesgate, blocknetworkgate, and blocksshgate do not run if you disable the support for Windows extension nodes.
  6. htcondorexecute runs only if you disable running the pSeven Enterprise user tasks (blocks) on Kubernetes natively.
  7. htcondormanager and htcondorsubmit do not run if you:
    • disable the support for Windows extension nodes, and
    • enable running the pSeven Enterprise user tasks (blocks) on Kubernetes natively.
  8. The post* components are Jobs, which run once during the installation.
  9. The post1_makeextnodepkgs Job does not run if you disable the support for Windows extension nodes.
  10. The post2_fixstorages Job runs only if you enable the file storage check when upgrading pSeven Enterprise (install with the -⁠-⁠set fixStorages=true option) - see Upgrading pSeven Enterprise.
  11. The upkeep* components are CronJobs, which run on schedule.

Secrets

The initial deployment of pSeven Enterprise requires a set of secrets - encryption keys and certificates. To generate these secrets:

  1. Log in to the Docker Registry.

    docker login {registry address}  # registry.pseven.io or your local Registry address
    
  2. Get the full version number from the pSeven Enterprise Helm chart:

    helm show chart pseven-{YYYY.MM.DD}.tgz | grep appVersion | cut -d" " -f2
    
  3. Generate the secrets file (replace {registry} with the Registry address, {version} with the full version number):

    docker run --rm {registry}/pseven-secretgen:{version} > pseven-secrets.yaml
    

The secrets file pseven-secrets.yaml will be required to start the installation. Also keep a backup copy of the secrets file after completing the deployment.

Deployment configuration

To install pSeven Enterprise, you have to prepare the deployment configuration file values.yaml. Generate this file from the pSeven Enterprise Helm chart:

helm show values pseven-{YYYY.MM.DD}.tgz > values.yaml

Open values.yaml and specify the parameters listed below. See the comments in values.yaml for detailed parameter descriptions and examples.

  • Replace "*" in entrypoint.allowedHosts with your reverse proxy address.
  • Set entrypoint.usingTrustedReverseProxy:
    • true to trust the X-Forwarded-* request headers. This setting requires that all connections to pSeven Enterprise come through a trusted reverse proxy.
    • false for all other cases.
  • Replace the pSeven SAS Registry address in registry.url with your local Registry address.
  • Set registry.username and registry.password.
  • Set all storage.* parameters according to your NFS file storage configuration. Prefer NFSv3 over NFSv4 when choosing the NFS protocol version to use (the storage.*.nfs.version parameters).
  • Set all postgres.* parameters according to your PostgreSQL database server configuration.
  • If you are providing an SMTP server, set all smtp.* parameters according to its configuration.
  • The sizing.htcondorexecuteReplicas and sizing.htcondorexecuteResources.requests.memory parameters should be set to provide maximum performance but avoid running two or more task executor containers on the same worker node. This is commonly achieved with the following:
    • Set the number of executors (sizing.htcondorexecuteReplicas) equal to the number of worker nodes in your cluster.
    • Set sizing.htcondorexecuteResources.requests.memory equal to half the amount of worker node RAM. For example, if the worker nodes have 16 GB RAM, set "8192Mi" (8 GB).
    • Set sizing.htcondorexecuteResources.requests.cpu equal to half the worker node CPU capacity. For example, if the worker nodes have 8 CPU cores, set "4000m" (4 cores; the value is in millicores).
  • In the image.* parameters, replace the pSeven SAS Registry address (registry.pseven.io) with your local Registry address.

Other parameters in values.yaml, which are not mentioned above, are set to sensible defaults. You can change them according to your requirements.
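For illustration, a hypothetical sizing fragment of values.yaml for a cluster of 5 worker nodes with 8 CPU cores and 16 GB RAM each (the nesting mirrors the dotted parameter names above; check the comments in your generated values.yaml for the exact structure):

sizing:
  htcondorexecuteReplicas: 5    # one executor per worker node
  htcondorexecuteResources:
    requests:
      memory: "8192Mi"          # half of the 16 GB node RAM
      cpu: "4000m"              # half of the 8 CPU cores, in millicores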

Installing with limited cluster permissions

If you have sufficient permissions, you can skip to the installation

If you run helm install as a user who has the permissions to create cluster-wide objects, Helm can automate the setup steps described in this section. In such case, if you accept the default settings, you can skip this section and continue to installation.

pSeven Enterprise deployment requires creating several Kubernetes objects, some of them cluster-wide: ClusterRole and ClusterRoleBinding, a few PriorityClass objects, a PersistentVolume for each of the data storages. (1)

  1. ClusterRole, ClusterRoleBinding, and priority classes are only required if you enable running blocks natively on Kubernetes (set sizing.runOnKubernetes to true in values.yaml).

The additional pre-install setup described below is required if you are going to install pSeven Enterprise as a user who has limited permissions on the deployment cluster - for example, a sub-admin of a shared cluster who cannot make cluster-wide changes.

All kubectl commands in this section require cluster-wide permissions. Contact the cluster administrator as needed.

  1. You have to set up a PersistentVolume and PersistentVolumeClaim for each of the pSeven Enterprise data storages. Copy the template below to a file named volumes.yaml and edit it according to the comments. (1)

    1. The PersistentVolume and PersistentVolumeClaim objects you are about to create are only valid for your current data storage configuration. You will have to redo the storage-related steps if you later apply changes to your NFS server configuration or any of the storage.* parameters in values.yaml.
    volumes.yaml template
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      # Uncomment if the user data storage server connection uses NFSv3.
      # name: pseven-ns-userdata-nfs3-pv
      # Uncomment if the user data storage server connection uses NFSv4.
      # name: pseven-ns-userdata-nfs4-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        # Replace {user data server hostname} with the value of the
        # `storage.userdata.nfs.server` parameter from your values.yaml.
        server: "{user data server hostname}"
        # Replace {user data storage path} with the value of the
        # `storage.userdata.nfs.path` parameter from your values.yaml.
        path: "{user data storage path}"
      mountOptions:
        # Set the same version here and in `storage.userdata.nfs.version`
        # in your values.yaml.
        - nfsvers=3
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      # Uncomment if the workspace data storage server connection uses NFSv3.
      # name: pseven-ns-workspacedata-nfs3-pv
      # Uncomment if the workspace data storage server connection uses NFSv4.
      # name: pseven-ns-workspacedata-nfs4-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        # Replace {workspace data server hostname} with the value of the
        # `storage.workspacedata.nfs.server` parameter from your values.yaml.
        server: "{workspace data server hostname}"
        # Replace {workspace data storage path} with the value of the
        # `storage.workspacedata.nfs.path` parameter from your values.yaml.
        path: "{workspace data storage path}"
      mountOptions:
        # Set the same version here and in `storage.workspacedata.nfs.version`
        # in your values.yaml.
        - nfsvers=3
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      # Uncomment if the shared data storage server connection uses NFSv3.
      # name: pseven-ns-shareddata-nfs3-pv
      # Uncomment if the shared data storage server connection uses NFSv4.
      # name: pseven-ns-shareddata-nfs4-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        # Replace {shared data server hostname} with the value of the
        # `storage.shareddata.nfs.server` parameter from your values.yaml.
        server: "{shared data server hostname}"
        # Replace {shared data storage path} with the value of the
        # `storage.shareddata.nfs.path` parameter from your values.yaml.
        path: "{shared data storage path}"
      mountOptions:
        # Set the same version here and in `storage.shareddata.nfs.version`
        # in your values.yaml.
        - nfsvers=3
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      # Uncomment if the user data storage server connection uses NFSv3.
      # name: pseven-ns-userdata-nfs3-pvc
      # Uncomment if the user data storage server connection uses NFSv4.
      # name: pseven-ns-userdata-nfs4-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Gi
      # Uncomment if the user data storage server connection uses NFSv3.
      # volumeName: pseven-ns-userdata-nfs3-pv
      # Uncomment if the user data storage server connection uses NFSv4.
      # volumeName: pseven-ns-userdata-nfs4-pv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      # Uncomment if the workspace data storage server connection uses NFSv3.
      # name: pseven-ns-workspacedata-nfs3-pvc
      # Uncomment if the workspace data storage server connection uses NFSv4.
      # name: pseven-ns-workspacedata-nfs4-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Gi
      # Uncomment if the workspace data storage server connection uses NFSv3.
      # volumeName: pseven-ns-workspacedata-nfs3-pv
      # Uncomment if the workspace data storage server connection uses NFSv4.
      # volumeName: pseven-ns-workspacedata-nfs4-pv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      # Uncomment if the shared data storage server connection uses NFSv3.
      # name: pseven-ns-shareddata-nfs3-pvc
      # Uncomment if the shared data storage server connection uses NFSv4.
      # name: pseven-ns-shareddata-nfs4-pvc
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Gi
      # Uncomment if the shared data storage server connection uses NFSv3.
      # volumeName: pseven-ns-shareddata-nfs3-pv
      # Uncomment if the shared data storage server connection uses NFSv4.
      # volumeName: pseven-ns-shareddata-nfs4-pv
    
  2. Apply the settings from your volumes.yaml, specifying the installation namespace (the PersistentVolumeClaim objects are namespaced):

    kubectl create -f volumes.yaml -n pseven-ns
    
  3. In values.yaml, edit the deployment's data storage settings:

    • Set the storage.*.persistentVolume.create and storage.*.persistentVolumeClaim.create parameters to false, for each of the data storages (denoted *).
    • Specify storage.*.persistentVolume.name and storage.*.persistentVolumeClaim.name, for each of the data storages (see the names in your volumes.yaml).
  4. If the sizing.runOnKubernetes parameter is false in your values.yaml (default), continue to installation now. Otherwise if you choose to run blocks natively on Kubernetes (sizing.runOnKubernetes is true), complete the setup steps below before you start installation.

Setting up the cluster to run blocks natively (experimental run mode)
  1. Create the ClusterRole for the pSeven Enterprise launcher:

    kubectl create clusterrole pseven-launcher --verb=list,get --resource=nodes
    kubectl patch clusterrole pseven-launcher --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{ "apiGroups": [""], "resources": ["pods"], "verbs": ["list"]}}]'
    
  2. In the installation namespace, create the ServiceAccount for the pSeven Enterprise launcher:

    kubectl create serviceaccount pseven-launcher-sa -n pseven-ns
    
  3. Create a ClusterRoleBinding, which binds the launcher ClusterRole to its ServiceAccount:

    kubectl create clusterrolebinding pseven-ns-launcher --clusterrole=pseven-launcher --serviceaccount=pseven-ns:pseven-launcher-sa
    
  4. In the installation namespace, create the pSeven Enterprise launcher Role and the corresponding RoleBinding:

    kubectl create role pseven-launcher --verb=list,get --resource=pods/log,resourcequotas -n pseven-ns
    kubectl patch role pseven-launcher --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{ "apiGroups": [""], "resources": ["pods"], "verbs": ["list", "get", "create", "delete"]}}]' -n pseven-ns
    kubectl create rolebinding pseven-launcher --role=pseven-launcher --serviceaccount=pseven-ns:pseven-launcher-sa -n pseven-ns
    
  5. Create the pSeven Enterprise priority classes:

    kubectl create priorityclass pseven-deployment-pc --value=1000000 --global-default=false
    kubectl create priorityclass pseven-cronjob-pc --value=500000 --global-default=false --preemption-policy=Never
    kubectl create priorityclass pseven-block-pc --value=500000 --global-default=false --preemption-policy=Never
    
  6. Edit your values.yaml so the deployment uses the objects you have created:

    • Set launcher.serviceAccount.create, launcher.clusterRole.create, launcher.clusterRoleBinding.create to false.
    • Specify the names of the ServiceAccount and ClusterRoleBinding you have created - the launcher.serviceAccount.name and launcher.clusterRoleBinding.name parameters.

Installation

Before you begin with the installation, check you have the following files in the current directory:

  • pseven⁠-⁠{YYYY.MM.DD}.tgz - the Helm chart.
  • values.yaml - the deployment configuration file with your settings (see Deployment configuration).
  • pseven⁠-⁠secrets.yaml - the secrets file for this deployment (see Secrets).

Install with the helm install command; see the example command below. In this command:

  • pseven-rl - release name. Specify a name to identify your deployment. The example commands in this documentation use the release name pseven-rl. You can specify a different name; if so, replace pseven-rl with that name in the example commands.
  • pseven-ns - installation namespace. The example commands in this documentation identify it as pseven-ns. You can specify a different namespace; if so, replace pseven-ns with that namespace in the example commands.

To install:

  1. Check that the dedicated namespace for pSeven Enterprise installation exists:

    kubectl get namespace
    
  2. Ensure a clean environment:

    helm uninstall pseven-rl -n pseven-ns
    

    For example, this command removes the leftovers of a previous installation if you are reinstalling pSeven Enterprise after an unsuccessful installation attempt.

  3. Install with values.yaml and pseven⁠-⁠secrets.yaml you have prepared:

    helm install pseven-rl pseven-{YYYY.MM.DD}.tgz -f values.yaml -f pseven-secrets.yaml -n pseven-ns --timeout 60m --wait --debug | tee pseven-{YYYY.MM.DD}.log
    

    Note the --timeout option in the example installation command. That option is required because Helm needs to load several GB of images from the Registry, so Helm's default operation timeout (5 minutes) is usually too low.

    Keep the installation log file (pseven⁠-⁠{YYYY.MM.DD}.log) - it is required for troubleshooting installation errors.

  4. While helm install runs, watch the console log for errors and monitor the state of Kubernetes resources - for example, using the kubectl get all command (example commands are shown after this list). If you see any errors or find that Kubernetes resource states do not change over an extended time period, refer to the Technical support section.

  5. When installation enters the final stage, Helm shows a finalization message with a few commands to get the network addresses of your deployment. Run those commands and save the following from their output (note that all URLs include a port number):

    • pSeven Enterprise sign-in URL ({sign-in URL})
    • pSeven Enterprise admin URL ({admin URL})
    • pSeven Enterprise health check endpoint
    • The list of worker node addresses

    Important

    Do not give the pSeven Enterprise sign-in URL to end users. Users should connect to a reverse proxy, which forwards their requests to this URL.

  6. Wait while Kubernetes launches the pSeven Enterprise services. After a few minutes, go to the pSeven Enterprise sign-in URL. The user sign-in page should load. Sign-in is blocked at this point because you have not added a license yet.

    If the sign-in page does not load, refer to the Technical support section.

  7. Before you proceed, back up the following files:

    • pseven⁠-⁠{YYYY.MM.DD}.log - the installation log
    • values.yaml with your settings
    • pseven⁠-⁠secrets.yaml
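
    In addition to these local copies, the user-supplied configuration can later be recovered from the cluster with the helm get values command - for example (assuming the pseven-rl release name and pseven-ns namespace used in this guide):

    helm get values pseven-rl -n pseven-ns > pseven-values-backup.yaml

    Note that this output also includes the values passed via pseven-secrets.yaml, so protect the exported file accordingly.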

Admin access

pSeven Enterprise provides a built-in admin account for performing the initial administrative tasks, with the following default credentials:

  • Username: admin
  • Password: admin
  • Email: admin@admin.admin

Secure the built-in admin account immediately after installation:

  1. Open the pSeven Enterprise admin URL with a web browser and sign in with the default admin credentials. You will be prompted to set a new password for the built-in admin account. After you set the password, the Site administration page will open.
  2. In the Site administration page header on the right, click Manage users. You will be redirected to the Users page of the pSeven Enterprise Admin Console.
  3. Select the Manage account command from the user menu in the upper right corner of the Users page. The Edit Account page will open.
  4. In the Email field on the Edit Account page, enter a valid email address, then save your changes on that page.

Important

Never disable the built-in admin account unless you have another admin account active, and never revoke admin rights from all accounts or disable all admin accounts. Otherwise, admin access to pSeven Enterprise will be entirely blocked.

License

Each pSeven Enterprise license is bound to a specific Installation ID. The Installation ID is unique for every deployment and is generated during installation.

  1. Open the pSeven Enterprise admin URL and sign in.
  2. Copy the ID string from the page header. Example header:

    pSeven Enterprise Admin Site (Installation ID: 20f0dc441f73404fb70ee963806c272e)

  3. Send the Installation ID to your corresponding pSeven SAS or VAR account manager. You will receive the pSeven Enterprise license file for your Installation ID.

  4. Having received your license file, sign in as admin and open the license settings page (URL: {admin URL}/app_auth/license/add/).
  5. Open the license file, copy its contents into the License source field, and click the Save button at the bottom of the page.

User access

If you have set up a reverse proxy, end users should connect only to the reverse proxy server and must not be able to access pSeven Enterprise directly (via the sign-in URL).

Test your reverse proxy configuration:

  1. Connect to the reverse proxy with a web browser. You should get redirected to the pSeven Enterprise user sign-in page.
  2. Sign in using the admin credentials. The pSeven Enterprise Studio user interface should open. This step requires a license - without it, the user sign-in is blocked by pSeven Enterprise.
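
You can also run a quick command-line check of the reverse proxy - for example, with curl (the host name below is a placeholder; substitute the address of your reverse proxy):

    curl -sS -o /dev/null -w '%{http_code}\n' https://pseven.example.com/

A success or redirect status code (such as 200 or 302) indicates that the proxy reaches pSeven Enterprise; connection errors or 502/504 responses usually point to a proxy or routing problem (see Reverse proxy configuration notes).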

User storage quotas

If you have enabled disk quota support on the user data storage NFS server, pSeven Enterprise should detect this and enable user storage quota management (see Enabling storage quota).

  1. Sign in as admin and go to pSeven Enterprise settings (URL: {admin URL}/app_pseven/settings/).
  2. On the Settings page, verify that the value in the "Quota management available" column reads "True".
  3. Set the default user storage quota: enter the quota (in megabytes) in the "Default user storage quota" field, click the Update users quota button, and then click Save.

After you create user accounts, you can also set individual quotas for specific users, overriding the default quota setting for those users. For details, see Applying user storage quotas.

Verify installation

To verify your installation, you need to sign in to the user interface.

  1. Connect to the reverse proxy with a web browser and sign in using the admin credentials.
  2. Perform the tests described in this section. All tests are required. Your deployment cannot be considered successful unless all these tests pass.
  3. If your deployment fails any of the tests, contact Technical support for advice.
  4. If all tests have passed, remove the end-user permissions from the default admin account - see Admin account configuration. This is required to exclude the admin from the licensed users count.

File upload test

To test general operation, try uploading a file to user Home. The file size should be a few MB.

  1. Drag a file to the Explorer pane on the left, or use the Upload files... command from the pane's menu.
  2. Wait for the upload to complete.
  3. Delete the uploaded file.

The test is considered passed if the file upload was successful.

  • If you get error 413 or another error while uploading a file, there may be an issue with the reverse proxy, load balancer, or ingress controller in your environment (see Reverse proxy configuration notes).
  • If the file upload never finishes, there may be an issue with your NFS server. Check that your NFS server meets the requirements and is configured as described in the NFS server section.
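
When investigating either failure, recent Kubernetes events often help to locate the failing component - for example (assuming the pseven-ns installation namespace used in this guide):

    kubectl get events -n pseven-ns --sort-by='.lastTimestamp'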

AppsHub test

To test user workflow execution, try running an example from AppsHub.

  1. Switch to AppsHub: click Studio in the top navigation bar, then select AppsHub.
  2. In AppsHub, open the Disk optimization example: hover over its thumbnail and click the icon.
  3. In the example, scroll down and click the Calculate button at the bottom. The button title should change to Deploying workflow..., then Setting parameters..., then Running workflow...
  4. Wait a few minutes for the workflow to complete.

    Warning

    Do not refresh or close the browser tab with the example as this cancels the calculation.

  5. After the calculation finishes, the example shows results at the bottom, and the button title changes back to Calculate. Scroll down to see the results.

The test is considered passed if you see the Results section with diagrams and result values.

  • If you get an error while running the example, or the calculation never finishes, your deployment does not function properly.

Extension nodes

pSeven Enterprise supports extension nodes: Windows hosts that are not Kubernetes nodes can be set up to connect to the cluster and receive pSeven tasks. These nodes can be used to run platform-dependent or node-locked software from pSeven workflows.

During deployment, pSeven Enterprise generates an extension node setup package and saves it to the pSeven Enterprise file storage. Setting up extension nodes is optional; see the Extension node deployment guide for the download and installation instructions.

Workspaces

To enable users to publish and run apps from AppsHub, you must create workspaces. If your deployment should provide a public collection of apps, you will need a public workspace. In that case, it is best to create one before you start adding user accounts: this way you can configure the workspace so that all users can access it by default.

If you add user accounts first, you will have to set up user access to the public workspace manually (set workspace permissions for each user). Apart from that, you can create and set up workspaces at any time later.

Creating user accounts

pSeven Enterprise supports integration with external authentication services to work with existing user accounts. Alternatively, you can set up basic authentication in pSeven Enterprise.

Notes

This section discusses some important considerations to be aware of when deploying pSeven Enterprise.

Reverse proxy configuration notes

The pSeven Enterprise entry point for incoming traffic is the edgerouter service (see Network parameters). The deployment publishes the edgerouter service with the NodePort service type (see Publishing Services (ServiceTypes) in the Kubernetes documentation), which exposes the service on each cluster node at a static port. By default, this is port 30080; a different port can be specified by changing the entrypoint.webHttp parameter value in values.yaml.
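
For example, to publish the service on a different node port, set the parameter in values.yaml (a minimal sketch; 30081 is an illustrative value):

    entrypoint:
      webHttp: 30081

After installation, you can check which node port is actually published by listing the Services in the installation namespace:

    kubectl get service -n pseven-ns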

This configuration lets you set up user access to pSeven Enterprise by routing all incoming user traffic directly to the node port on any one of the cluster nodes. However, the common practice is to use a reverse proxy (load balancer, ingress controller) to handle incoming traffic. The reverse proxy must be properly configured to handle all traffic between clients and pSeven Enterprise. For instance, an inappropriate configuration of an NGINX-based ingress controller can cause the following issues:

  • The WebSocket protocol is disabled. As a result, users will be unable to work with pSeven Enterprise because a persistent loading screen will appear after signing in, followed by a timeout error.

    To address this issue, enable WebSocket (see WebSocket support in the NGINX Ingress Controller documentation).

  • The default client request body size limit is too tight. As a result, pSeven Enterprise users may encounter a 413 Request Entity Too Large error when uploading files.

    To address this issue, increase the maximum size of the request body (see Custom max body size in the Ingress NGINX Controller documentation).

  • The default proxy buffer size is insufficient to handle responses from pSeven Enterprise. As a result, a 502 Bad Gateway error may occur when opening the Users page of the pSeven Enterprise authentication service interface.

    To address this issue, increase the buffer size (see Proxy buffer size in the Ingress NGINX Controller documentation). The recommended buffer size is 128 KB (128k).
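
For instance, if you use the Ingress NGINX controller, these settings can be adjusted with annotations on the Ingress resource that routes user traffic to pSeven Enterprise. The fragment below is a sketch only - the resource name and annotation values are illustrative, and the rest of the Ingress definition must match your environment:

    metadata:
      name: pseven-ingress  # hypothetical name
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "0"        # allow large uploads (0 disables the size check)
        nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"   # the buffer size recommended above
        nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"  # keep long-lived WebSocket connections alive
        nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"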

In addition to the settings described above, you may need to change other settings depending on the kind of reverse proxy you are using.