Planning Your Configuration

Applicable to Sisense on Linux

Sisense supports two deployment topologies on Linux: a single-node deployment, and a multi-node deployment. This page describes the two supported deployments and the differences between them.

In both deployments, the Sisense application is provided as Docker containers. The orchestration between these containers is managed through Kubernetes as displayed in the diagram below.

This diagram illustrates how Sisense utilizes Kubernetes for redundancy and load balancing regardless of the number of machines. When you initialize Sisense, it is installed as a Kubernetes namespace. The services and ElastiCubes that make up the Sisense application are pods spread across the node or nodes in your deployment.

In addition, Sisense supports many shared storage options for data distribution, consistency, and resiliency, including:

  1. Amazon Web Services (AWS): GlusterFS/NFS/FSx
  2. Microsoft Azure: GlusterFS
  3. Google Cloud Platform: GlusterFS/NFS
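As a sketch of how such shared storage is surfaced to the cluster, an NFS share can be registered as a Kubernetes PersistentVolume. The server address, path, capacity, and resource name below are illustrative placeholders, not values from a real Sisense installation:

```yaml
# Hypothetical PersistentVolume backed by shared NFS storage.
# Server, path, name, and size are placeholders only.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sisense-shared-storage
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany        # readable and writable from every node
  nfs:
    server: nfs.example.internal
    path: /sisense
```

The `ReadWriteMany` access mode is what makes the volume usable for data distribution and consistency across nodes: every pod in the deployment can mount the same storage.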


Single-Node Deployment

In its most basic configuration, Sisense can run on a single Linux machine. This deployment scenario is straightforward to set up, maintain, and upgrade.

A single-node deployment runs all Sisense services on one server using Kubernetes orchestration. It can serve as a sandbox environment for development and testing, but it is also a valid configuration for some production environments.

The deployment uses the storage available locally on the host server.


Multi-Node Deployment

Sisense also supports deployments across multiple machines, or nodes. In a multi-node deployment, you have one build node and two application/query nodes. You can add build and query nodes as needed. In addition, multi-node deployments support shared storage.

As in a single-node deployment, Kubernetes orchestrates the Docker containers within the namespace. Kubernetes also provides load balancing within the Sisense deployment.
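Load balancing inside a Kubernetes cluster is typically done through a Service, which distributes incoming traffic across all pods whose labels match its selector. The service name, label, and ports below are hypothetical, not identifiers from an actual Sisense installation:

```yaml
# Sketch of Kubernetes-native load balancing: traffic to this
# Service is spread across every pod matching the selector.
# Names, labels, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: sisense-app
spec:
  selector:
    app: sisense-app       # hypothetical pod label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the pods listen on
```

Because the selector matches pods wherever they run, requests are balanced across nodes without any per-node configuration.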

The minimum size for a Sisense multi-node deployment is three servers: two servers each run as a combined application and query node, and the third runs as the build node.

In a basic three-server deployment, one server is a dedicated build node, and the other two each provide the functionality of both an application node and a query node. The infrastructure services must be deployed on all three servers to provide high availability and redundancy of the infrastructure modules. Additional query and build nodes can be added to the deployment to support additional load.
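As a sketch of how these roles map onto the cluster, each Kubernetes node carries labels identifying the roles it serves. The node names and label keys below are hypothetical; the actual labels are applied by the Sisense installer:

```yaml
# Hypothetical node labels for a basic three-server deployment.
# node1 and node2 serve as combined application/query nodes;
# node3 is the dedicated build node. Label keys are placeholders.
apiVersion: v1
kind: Node
metadata:
  name: node1
  labels:
    node-sisense-application: "true"
    node-sisense-query: "true"
---
apiVersion: v1
kind: Node
metadata:
  name: node3
  labels:
    node-sisense-build: "true"
```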

Sisense uses Helm as the installer for the Sisense application. The Helm charts apply labels to the Kubernetes nodes that designate them as application or build nodes. Based on these labels, Kubernetes schedules the Sisense pods onto the correct nodes in the cluster. Kubernetes also places ElastiCubes on nodes according to the dynamic labels defined in each data group.
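Label-based scheduling works through a `nodeSelector` in the pod specification: Kubernetes only places the pod on nodes whose labels match. The pod name, image, and label key below are hypothetical stand-ins for the values the Helm charts actually set:

```yaml
# Sketch of label-based scheduling: this pod is only placed
# onto nodes labeled as build nodes. All names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: build-worker
spec:
  nodeSelector:
    node-sisense-build: "true"   # hypothetical label key
  containers:
    - name: build
      image: example/build:latest  # placeholder image
```

The same mechanism underlies ElastiCube placement: a data group's dynamic labels are matched against node labels, so each ElastiCube lands on the node designated for it.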

Next Steps