Deployment Script for Sisense on Amazon EKS
  • 14 Jun 2022
  • 4 Minutes to read


This page describes how to configure Amazon AWS for Sisense and includes an example script to deploy Sisense on Amazon EKS. The example can be downloaded and customized for your specific needs.

The example script below installs the AWS CLI (awscli), labels your worker nodes, sets up an EKS cluster with the EKS command line tool (eksctl), and sets up FSx with the associated IAM role.

After you have configured Amazon AWS, you can then deploy Sisense as described in Deploying Sisense on Amazon EKS.

Prerequisites

  • Amazon Linux 2 OS
  • The bastion host should be an "Amazon Linux" instance (default Linux user: ec2-user)
  • The bastion's AWS profile must be configured as the default profile, and its region must match the region being provisioned. Verify with:
  • cat ~/.aws/credentials
  • cat ~/.aws/config

Auto Scaling

Sisense supports auto-scaling for your EKS nodes using AWS EKS auto-scaling capabilities. You can configure when to add or remove nodes in the following section of the AWS script:

eksctl create nodegroup \

In this section, you specify the node type, the number of nodes, the minimum and maximum number of nodes, and so on. You must also define the labels that new nodes receive when the autoscaler creates them. This is required because it enables Sisense to determine which type of node should be created: build, query, or application. To define the labels, set the value of --node-labels to the labels of the new nodes, for example:

--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
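Once the node groups are up, you can confirm which nodes carry which Sisense labels. The snippet below is a sketch: the node names and sample output are illustrative, and on a live cluster you would pipe `kubectl get nodes --show-labels` instead of the captured sample used here.

```shell
# Illustrative sample of `kubectl get nodes --show-labels` output
# (hypothetical node names; the last column is the label list)
sample='ip-10-0-1-10 Ready <none> 5d v1.17.12 node-sisense-Application=true,node-sisense-Query=true
ip-10-0-2-20 Ready <none> 5d v1.17.12 node-sisense-Build=true'

# Print the names of nodes labeled for query workloads
QUERY_NODES=$(echo "$sample" | awk '$NF ~ /node-sisense-Query=true/ {print $1}')
echo "$QUERY_NODES"
```

On a real cluster, replace `echo "$sample"` with `kubectl get nodes --show-labels --no-headers`.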

To configure Amazon AWS for Sisense:

  1. In Linux, download the sample Sisense AWS Configuration script prepared by Sisense.

curl -O https://data.sisense.com/linux/scripts/sisense_full_eks_fsx_v2.sh

OR

Download the script here.
2. Edit the script to match your use case. A copy of the full script is shown below.

Note:

Sisense labels are based on the default namespace sisense. For a different namespace name, change the labels as shown in the following example:

node-NAMESPACE_NAME-Application / node-NAMESPACE_NAME-Query / node-NAMESPACE_NAME-Build
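For instance, if Sisense were deployed into a hypothetical namespace named analytics instead of the default sisense, the label flag could be built like this (analytics is an assumed name; substitute your own):

```shell
# Hypothetical namespace name; replace with your own
NAMESPACE="analytics"

# Build the --node-labels value from the namespace
LABELS="node-${NAMESPACE}-Application=true,node-${NAMESPACE}-Query=true"
echo "--node-labels \"${LABELS}\""
```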

  3. Run the script using the following command:

sisense_full_eks_fsx_v2.sh <name>

where <name> is your cluster prefix. For example, to generate a cluster named "cluster1-EKS", the command is:

sisense_full_eks_fsx_v2.sh cluster1
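The prefix you pass becomes $1 (the CLUSTER variable) inside the script, and the script derives every resource name from it, as this sketch shows:

```shell
# The first argument to the script becomes the CLUSTER prefix
CLUSTER="cluster1"

EKS_NAME="${CLUSTER}-EKS"          # EKS cluster name
KEYPAIR_NAME="${CLUSTER}-KeyPair"  # EC2 key pair name
FSX_TAG="Lustre-${CLUSTER}"        # FSx file system Name tag
echo "$EKS_NAME $KEYPAIR_NAME $FSX_TAG"
```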

#!/usr/bin/env bash

CLUSTER=$1

# FSxType options: SCRATCH_1, SCRATCH_2, PERSISTENT_1
FSxType=PERSISTENT_1

## Installing pip
if [ -f /usr/bin/yum ]; then sudo yum -y -q install python-pip unzip jq && sudo pip install yq; fi
if [ -f /usr/bin/apt ]; then sudo apt update && sudo apt install --yes python-pip unzip jq && sudo pip install yq; fi

## Installing awscli
curl --silent --location "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip -q awscliv2.zip
sudo ./aws/install --update
rm -fr awscliv2* aws*/

## Installing eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
source <(eksctl completion bash) 2>/dev/null

## Installing aws-iam-authenticator
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.12/2020-11-02/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/.bin && mv ./aws-iam-authenticator $HOME/.bin/aws-iam-authenticator && export PATH=$HOME/.bin:$PATH
echo 'export PATH=$HOME/.bin:$PATH' >> ~/.bashrc

## Installing kubectl
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.12/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

## aws configure
aws configure
AWS_REGION=$(aws configure get region)

## Creating EC2 key pair
sudo rm -fr ./${CLUSTER}-KeyPair.pem
aws ec2 delete-key-pair --key-name "${CLUSTER}-KeyPair"
aws ec2 create-key-pair --key-name "${CLUSTER}-KeyPair" --query 'KeyMaterial' --output text > ./${CLUSTER}-KeyPair.pem

## Provisioning EKS
eksctl create cluster \
--name "${CLUSTER}-EKS" \
--version 1.17 \
--zones=${AWS_REGION}a,${AWS_REGION}b,${AWS_REGION}c \
--without-nodegroup

eksctl create nodegroup \
--name "${CLUSTER}-workers-APP-QRY1" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 3 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}a \
--ssh-public-key "${CLUSTER}-KeyPair"

eksctl create nodegroup \
--name "${CLUSTER}-workers-APP-QRY2" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Application=true,node-sisense-Query=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 3 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}b \
--ssh-public-key "${CLUSTER}-KeyPair"

eksctl create nodegroup \
--name "${CLUSTER}-workers-BLD" \
--cluster "${CLUSTER}-EKS" \
--asg-access \
--managed \
--node-labels "node-sisense-Build=true" \
--node-type m5a.2xlarge \
--nodes 1 \
--nodes-min 1 \
--nodes-max 2 \
--node-volume-size 150 \
--ssh-access \
--node-private-networking \
--node-zones=${AWS_REGION}c \
--ssh-public-key "${CLUSTER}-KeyPair"

## Getting security group, VPC, and subnet IDs
SG=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.securityGroupIds[0]" | sed 's/\"//g')
CSG=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" | sed 's/\"//g')
VPC=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.vpcId" | sed 's/\"//g')
SUBNET=$(aws eks describe-cluster --name "${CLUSTER}-EKS" --query "cluster.resourcesVpcConfig.subnetIds[1]"| sed 's/\"//g')


## Configuring SG
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 988 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id $CSG --protocol tcp --port 988 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 988 --cidr 192.168.0.0/16
aws ec2 authorize-security-group-ingress --group-id $CSG --protocol tcp --port 988 --cidr 192.168.0.0/16

## Create FSx
aws fsx create-file-system \
--client-request-token "$CLUSTER" \
--file-system-type LUSTRE \
--storage-capacity 1200 \
--tags Key="Name",Value="Lustre-${CLUSTER}" \
--lustre-configuration "DeploymentType=${FSxType},PerUnitStorageThroughput=200" \
--subnet-ids "$SUBNET" \
--security-group-ids $CSG


## Getting FSx
FSX_DNS_NAME=$(aws fsx describe-file-systems --query 'FileSystems[*].{DNSName:DNSName,Tags:Tags[0].Value==`'Lustre-${CLUSTER}'`}' --output text  | grep True| awk '{print $1}')
FSX_MOUNT_NAME=$(aws fsx describe-file-systems --query 'FileSystems[*].{MountName:LustreConfiguration.MountName,Tags:Tags[0].Value==`'Lustre-${CLUSTER}'`}' --output text  | grep True| awk '{print $1}')

## Updating EKS kubeconfig
aws eks update-kubeconfig --region "${AWS_REGION}" --name "${CLUSTER}-EKS"

## Output
echo -e "ssh_key path is: ./${CLUSTER}-KeyPair.pem"
echo -e "kubernetes_cluster_name: ${CLUSTER}-EKS"
echo -e "kubernetes_cluster_location: ${AWS_REGION}"
echo -e "kubernetes_cloud_provider: aws"
echo -e "fsx_dns_name is: ${FSX_DNS_NAME}"
echo -e "fsx_mount_name is: ${FSX_MOUNT_NAME}"
