Sisense Cloud-Native Linux deployments support autoscaling on AWS deployments using AWS autoscale groups and on Google Cloud Platform deployments on a GKE cluster (see Deploying Sisense on Google GKE). Autoscaling allows you to grow or reduce the size of your deployment as your usage changes.
In Sisense, a minimum clustered deployment comprises three nodes. A typical deployment has two app/query nodes and one build node, with the infrastructure services replicated across all of them. You can configure as many additional nodes as you need to handle your typical system load.
When your usage peaks, you can use autoscaling to add or remove nodes from your deployment automatically.
For example, your system may have peak loads at the beginning of the month, when many reports are due. To support the need for many more queries, you may choose to add additional query nodes. Using the Sisense autoscale capabilities, query nodes are added when you need them and removed when they are no longer needed. Another example is peak shopping days during the holiday season, when you may need to add additional app nodes and/or query nodes to handle more users accessing Sisense.
The Sisense autoscale capabilities work together with an AWS Auto Scaling node group. Two mechanisms cooperate: the Cluster Autoscaler, which runs at the Kubernetes level, and the Auto Scaling group, which works at the AWS level and checks the available resources before increasing the number of node instances.
Using this mechanism, the Cluster Autoscaler can request additional nodes when needed.
For example, you can configure the node group to add additional nodes when the average CPU utilization exceeds a specified threshold. This can be relevant for any of the nodes, but especially the build and query nodes that are CPU heavy.
For the app nodes, you can configure the addition of more nodes when the application load balancer request count exceeds a specific threshold. This can be a relevant metric for websites that experience peak user loads. Both the CPU usage and the application load balancer values are defined in the node group's Auto Scaling settings.
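For example, the two thresholds described above could be expressed as target-tracking policies on the AWS Auto Scaling group. This is a sketch only: the group names, policy names, target values, and the ALB resource label are placeholders you would replace with your own values.

```shell
# Scale the query node group when average CPU utilization exceeds ~70%
# (group and policy names below are placeholders, not Sisense defaults)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sisense-query-nodes \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 70.0
  }'

# Scale the app node group on ALB request count per target
# (the ResourceLabel identifies your own load balancer and target group)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sisense-app-nodes \
  --policy-name alb-request-count \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ALBRequestCountPerTarget",
      "ResourceLabel": "app/my-alb/0123456789abcdef/targetgroup/my-tg/fedcba9876543210"
    },
    "TargetValue": 1000.0
  }'
```

With target tracking, AWS adds and removes instances automatically to keep the metric near the target, which matches the behavior described above.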
Additionally, at the Kubernetes level, the Cluster Autoscaler requests additional nodes from the node group when there are not enough resources to schedule additional pods.
Both of these mechanisms also remove nodes when they are no longer needed, based on the defined parameters.
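The Kubernetes side of this behavior is typically controlled through the Cluster Autoscaler's command-line flags. The values below are illustrative, not Sisense-specific settings, and the node group name is a placeholder:

```shell
# Illustrative Cluster Autoscaler flags:
#   --nodes: min:max:ASG-name bounds for the managed node group
#   --scale-down-unneeded-time: how long a node must be unneeded before removal
./cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=1:10:sisense-query-nodes \
  --scale-down-unneeded-time=10m
```

The `--scale-down-unneeded-time` flag is what drives the automatic removal of nodes once load drops again.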
The additional nodes are added to the default data group, or to the data group whose label they carry, and are automatically used by that data group.
Preparing your Environment for Autoscaling
If you are deploying Sisense across multiple Availability Zones (backed by Amazon EBS volumes) and using the Kubernetes Cluster Autoscaler, you must configure multiple node groups, each scoped to a single Availability Zone.
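One way to scope each node group to a single Availability Zone is through an eksctl cluster configuration. This is a sketch under assumed names: the cluster name, region, zones, and size bounds are placeholders for your own environment.

```shell
# Illustrative eksctl config: one autoscaling node group per AZ,
# so EBS-backed pods can always be rescheduled within the same zone.
cat > nodegroups.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sisense-cluster     # placeholder cluster name
  region: us-east-1
nodeGroups:
  - name: query-us-east-1a
    availabilityZones: ["us-east-1a"]
    minSize: 1
    maxSize: 4
  - name: query-us-east-1b
    availabilityZones: ["us-east-1b"]
    minSize: 1
    maxSize: 4
EOF
eksctl create nodegroup --config-file=nodegroups.yaml
```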
If you want to dynamically add worker nodes to your Sisense deployment, apply the node labeling strategy described below so that Sisense can add your nodes. Because these nodes are created dynamically by your cloud-based Kubernetes service, Sisense requires a specific label format to identify and use them.
Prior to Sisense V8.0 for Linux, the format of the labels was:
Node 1 - nodeApplication=true, nodeQuery=true
Node 2 - nodeApplication=true, nodeQuery=true
Node 3 - nodeBuild=true
From Sisense V8.0 for Linux onwards, the format for node labels is as follows:
Node 1 - node-[NAMESPACE]-Application, node-[NAMESPACE]-Query
Node 2 - node-[NAMESPACE]-Application, node-[NAMESPACE]-Query
Node 3 - node-[NAMESPACE]-Build
Where [NAMESPACE] is your internal Namespace name for Sisense. The commands below can be used to define how your nodes should be labeled for Sisense to recognize them.
kubectl label node NODE_NAME node-NAMESPACE-Application=true --overwrite
kubectl label node NODE_NAME node-NAMESPACE-Query=true --overwrite
kubectl label node NODE_NAME node-NAMESPACE-Build=true --overwrite
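When nodes are created dynamically, you may want to apply the labels to several nodes at once and then verify the result. The namespace value below is a placeholder for your own Sisense namespace:

```shell
NAMESPACE=sisense   # placeholder: your Sisense namespace

# Label every node in the cluster as a query node
# (in practice you would filter to the nodes from your query node group)
for node in $(kubectl get nodes -o name); do
  kubectl label "$node" "node-${NAMESPACE}-Query=true" --overwrite
done

# Verify which nodes Sisense will recognize as query nodes
kubectl get nodes -l "node-${NAMESPACE}-Query=true"
```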
When deploying Sisense in your Kubernetes environment, the cloud_auto_scaler parameter must be set to true in the configuration YAML file. Sisense then manages the scaling of nodes as needed, so if a node fails or your load is higher than normal, additional nodes can be added. You can configure when to scale your nodes in the AWS configuration script described in Deployment Script for Sisense on Amazon EKS and Deploying Sisense on Amazon EKS.
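For instance, the parameter can be toggled in place before running the installer. The configuration file name here is a placeholder; use the YAML file from your own deployment package:

```shell
# Enable cloud autoscaling in the Sisense configuration YAML
# (cloud_config.yaml is a placeholder file name)
sed -i 's/cloud_auto_scaler: false/cloud_auto_scaler: true/' cloud_config.yaml

# Confirm the change before installing
grep cloud_auto_scaler cloud_config.yaml
```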
To learn about autoscaling and to see it in action, see the webinar below.
In this webinar, Sisense provides a demonstration of the Sisense autoscale capabilities on a very large deployment including 1000 dashboards, 1000 ElastiCubes, shared with 10,000 users associated with 1000 user groups.
As part of this webinar, Sisense has provided scripts you can use and run to set up your own EKS with FSx deployment. The scripts are located in an archive that you can download here:
From this archive, you can run the following source script:
The demonstration starts with six nodes (two app nodes, two query nodes and two build nodes). Each of the nodes has eight cores and 32GB RAM.
The system was configured so that ElastiCubes had an idle timeout and would shut down when they were not in use; six nodes were sufficient during idle periods. Building the ElastiCubes consumed additional CPU and RAM, causing extra nodes to be provisioned: four additional query nodes and two additional build nodes, increasing the deployment to twelve nodes. The additional query and build nodes were terminated when they were no longer needed.
When scaling new nodes for builds, if there are not enough resources to support a new pod, the build pod may become stuck in a pending state and be killed before the new node is ready. You can increase the default timeout from 90 seconds to 360 seconds, allowing enough time for the new pod to start, with the following command:
si config set -key management.TimeToWaitBeforeCheckIfCubeStartedSeconds -new-value 360
To access the Sisense CLI, see Using Sisense CLI Commands.