Recovering Sisense in Linux

Applicable to Sisense on Linux

Accessing the Sisense CLI

To create a backup file, and later to restore it, you need to access the Sisense CLI that was installed with Sisense.

To access the Sisense CLI:

  1. Copy the required file from the management pod to your home directory with the following command:
    kubectl cp -n sisense $(kubectl -n sisense get pods -l app="management" -o custom-columns=""):/etc/ ~/
  2. Execute the following:
    source ~/
  3. Log in to Sisense with admin credentials, using the IP address and port of the application:
    login_sisense <Server_URL>:<port> <UserNameEmail> <Password>
    login_sisense <Server_URL>:<port> <UserNameEmail>
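The pod lookup embedded in the commands above can be isolated into a small helper. This is a sketch, not part of Sisense: it assumes the app=management label shown above, and uses kubectl's jsonpath output format as an equivalent to the custom-columns form:

```shell
#!/usr/bin/env bash
set -eu

# Hypothetical helper: resolve the name of the Sisense management pod.
# Assumes the app=management label used in the kubectl cp command above.
management_pod() {
  kubectl -n sisense get pods -l app=management \
    -o 'jsonpath={.items[0].metadata.name}'
}

# Only query the cluster when kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  management_pod
fi
```

The same helper can stand in for the inner $(kubectl ...) substitution in the copy and restore commands later in this article.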

Backing Up Sisense in Linux

As part of regular maintenance or before making significant changes to your deployment, you should back up Sisense. To back up your Sisense deployment, you create a tarball, an archive of your Sisense configuration stored in a .tar.gz file. After this file is created, you can restore it when needed.

Note: The following are not backed up when performing the procedure below and should be backed up manually: static data source files (such as .xls and .csv files), connector manifest files, SSL certificates and private keys, and scheduled ElastiCube builds (see step 3 below).

To create a backup archive of Sisense:

  1. Connect to your master instance of Sisense with the SSH protocol using the user that you used to install Sisense. Make sure you are in the home folder where Sisense was installed.
  2. Create your Sisense backup tarball with the following Sisense Linux command:
    si system backup [-include-farm no]
    A backup archive is created in /opt/sisense/storage/system_backups/ on the shared storage.
  3. Any scheduled builds you have must be backed up manually with the following command:
    kubectl -n sisense get cronjobs.batch -o json > cronjobs.json
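The two backup commands above can be wrapped in one script suitable for scheduled runs. This is a sketch under stated assumptions: the backup directory and the file-naming helper are hypothetical, while `si system backup` and the cronjobs export are the commands from the procedure:

```shell
#!/usr/bin/env bash
set -eu

# Hypothetical wrapper around the two backup commands above.
backup_dir="${BACKUP_DIR:-$HOME/backup}"

# Build a timestamped file name for the scheduled-builds export
# (the naming scheme is an assumption, not a Sisense convention).
cronjobs_export_path() {
  printf '%s/cronjobs-%s.json\n' "$backup_dir" "$1"
}

run_backup() {
  mkdir -p "$backup_dir"
  si system backup                                   # Sisense backup tarball
  kubectl -n sisense get cronjobs.batch -o json \
    > "$(cronjobs_export_path "$(date +%Y%m%d-%H%M)")"
}

# Only run for real when the Sisense CLI and kubectl are available.
if command -v si >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  run_backup
fi
```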

Retrieving a Backup from Shared Storage

Sisense stores backups on your shared storage by default, but to ensure that you can use your backups in case your shared storage becomes inaccessible, you can pull backups from your shared storage. Sisense provides a script below that you can download and modify to create backups on external resources.

To retrieve a backup archive from shared storage:

  1. Create a cron job on the primary server of Sisense to pull your backup tarball out of the shared storage.
  2. Download the script and make it executable:
    wget <script_URL>
    chmod a+x <script_file_name>
  3. Open your crontab with the following command:
    crontab -e
  4. Add a line to schedule the cron job. In the example below, this line schedules a daily pull at 2 AM:
    0 2 * * * /bin/bash ~/ <namespace> <user> <password>
    By default, the script saves backups to the directory set by targetDir=~/backup/. You can change this value to store additional copies in locations external to your shared storage.
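The crontab line above can also be generated programmatically, which helps avoid mistakes in the five-field schedule. In this sketch, the script name pull_backup.sh is a hypothetical placeholder for the script downloaded in step 2:

```shell
#!/usr/bin/env bash
set -eu

# Build a crontab line for a daily pull at a given hour (0-23).
# Field order: minute hour day-of-month month day-of-week command.
daily_pull_cron_line() {
  # $1=hour $2=script $3=namespace $4=user $5=password
  printf '0 %s * * * /bin/bash %s %s %s %s\n' "$1" "$2" "$3" "$4" "$5"
}

# Example matching the 2 AM schedule above; pull_backup.sh is a placeholder name.
daily_pull_cron_line 2 '~/pull_backup.sh' sisense admin '<password>'
```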

Restoring a Sisense Backup

To restore a Sisense backup:

  1. Log in to the Sisense CLI.
  2. Copy the backup file to the shared storage:
    kubectl -n sisense cp sisense_assets_collector_some_date.tar.gz $(kubectl -n sisense get pods -l app="management" -o custom-columns=""):/opt/sisense/storage/system_backups/
  3. Restore the backup tarball of Sisense:
    si system restore -file /opt/sisense/storage/system_backups/sisense_assets_collector_some_date.tar.gz
  4. Restore your scheduled build:
    kubectl create -f cronjobs.json
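When several backups accumulate in the shared-storage directory, the file name in step 2 must be chosen by hand. A small sketch for picking the newest tarball; it relies only on the date-stamped names sorting lexicographically, and the directory check is a guard, not part of Sisense:

```shell
#!/usr/bin/env bash
set -eu

# Return the newest *.tar.gz in a directory; date-stamped names sort
# lexicographically, so a plain sort is enough.
latest_backup() {
  ls "$1"/*.tar.gz 2>/dev/null | sort | tail -n 1
}

# Usage against the shared-storage path used in steps 2-3:
if [ -d /opt/sisense/storage/system_backups ]; then
  latest_backup /opt/sisense/storage/system_backups
fi
```

The returned path can then be passed to `si system restore -file`.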

Recovering the Sisense Environment after a General Failure

If you experience problems in your Sisense Linux deployment, for example, your data becomes corrupted or you experience storage failures, you can recover Sisense and your data if you have created a backup file.

The procedure below describes how to remove your current instance of Sisense and restore a previous version with the config.yaml file you created when Sisense was installed.

This process describes how to recover your Sisense environment; however, the following are not restored and must be restored manually: static data source files, connector manifest files, SSL certificates and private keys, and scheduled ElastiCube builds.

To recover an instance of Sisense:

  1. In Linux, remove the corrupted Sisense installation and user data.
    1. Connect to your primary instance of Sisense with the SSH protocol using the user that you used to install Sisense.
    2. Make sure you are in the home directory where Sisense was installed. If you are not, open the directory where Sisense was installed.
    3. Open the config.yaml file and edit it with the command:
      vi config.yaml
    4. Set the following values to true to remove the cluster and your Sisense user data:

    5. Run the installation script with the updated configuration file:
      bash ./ config.yaml
    6. Reboot your Kubernetes Servers with the following command:
      ssh <instance_ip_address> sudo reboot
  2. Reinstall Sisense.
    1. Open and edit the config.yaml file.
    2. Change the following values to false:

  3. Restore your Sisense data with the following commands:
    1. Load the completion script and log in to the Sisense CLI so that you can run si commands.
    2. Validate that the completion script is in your current working directory:
      ls
      Look for the file in the output.
    3. Type the following:
  4. Restore your Sisense backup tarball with the following Sisense Linux command:
    si system restore -path sisense_assets_collector_some_date [-include-farm no]
  5. Restore the following manually:
    1. Static datasource files, such as .xls and .csv files
    2. Connector manifest files
    3. SSL certificates and private keys
    4. Reschedule any scheduled ElastiCube builds

Your Sisense deployment is now restored.

Expanding PersistentVolumes (PV)

If you need to expand a PersistentVolume (PV), for example, if you need more disk space for Sisense or the Sisense application database, you can modify the config.yaml file created when Sisense was installed and update the system.

To extend persistent volumes:

  1. Connect to your primary Sisense instance over SSH, using the same user that was used to install Sisense.
  2. Go to the Sisense installation folder where the install script is located.
  3. Edit the config.yaml.
    1. Update the parameter sisense_disk_size as required.
      Note: sisense_disk_size sets the size of the Sisense storage PVC.
    2. Save the file, and set the update parameter in config.yaml to true.
  4. Run the installation.
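As a sketch, the edited portion of config.yaml might look like the following; the value 150 is a placeholder, and all other keys in your file stay as they were:

```yaml
# Fragment of config.yaml; all other keys unchanged.
sisense_disk_size: 150   # size of the Sisense storage PVC (placeholder value)
update: true             # run the installer in update mode against the existing deployment
```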