Setting Up a Cube to Build to Destination

How do I configure a cube to be built to a given destination database?

Follow the instructions provided.

Where do I specify the warehouse?

The warehouse is specified system-wide; it cannot be set on a cube-by-cube basis.

Can I specify different databases for different cubes?

Yes, as long as you specify them in the cube setup. If a database is not specified for a cube, the default that was set up via the CLI command is used.
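As a minimal sketch of this fallback behavior (the function and names below are hypothetical, not Sisense's API):

```python
def resolve_destination_database(cube_database: str | None, cli_default: str) -> str:
    # A database set in the cube's B2D configuration wins; otherwise the
    # system-wide default configured via the CLI command is used.
    return cube_database if cube_database else cli_default

# Example with hypothetical names:
print(resolve_destination_database(None, "B2D_DEFAULT_DB"))             # -> B2D_DEFAULT_DB
print(resolve_destination_database("SALES_CUBE_DB", "B2D_DEFAULT_DB"))  # -> SALES_CUBE_DB
```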

What happens if a different database is set up for the viewer vs. the writer in the CLI connection command?

The system does not currently prevent you from configuring this setup; however, it is not a valid configuration.

Are you required to set the schema name in the cube B2D configuration?

This is optional. If you configure a schema name, the cube build will create a schema with that name. If you do not, a name is generated automatically, consisting of the cube name and the Sisense Deployment ID of the instance running the build.

For example, "MySQLtoSnowFlake_a7ff63e7c_ad54_4754_8c9a_1ab5fab34d04".

If you configure the schema name yourself, it must be unique to avoid conflicts between different build processes.

In addition, if the name contains characters that are not supported by the destination database, Sisense will replace them with supported characters.
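As an illustration only, here is a minimal Python sketch of how such an automatic name could be derived. The function, the sanitization rule, and the assumption that hyphens in the Deployment ID are replaced with underscores are all assumptions, not Sisense's actual implementation:

```python
import re

def derive_schema_name(cube_title: str, deployment_id: str) -> str:
    # Join the cube name and the deployment ID, then replace any character
    # assumed to be unsupported by the destination database with "_".
    raw = f"{cube_title}_{deployment_id}"
    return re.sub(r"[^A-Za-z0-9_]", "_", raw)

# Reproduces the pattern of the example above (the Deployment ID is hypothetical):
print(derive_schema_name("MySQLtoSnowFlake", "a7ff63e7c-ad54-4754-8c9a-1ab5fab34d04"))
# -> MySQLtoSnowFlake_a7ff63e7c_ad54_4754_8c9a_1ab5fab34d04
```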

What happens if the schema already exists?

If the schema already exists in the database but the cube has never been built (that is, the schema was not created by a previous build of this cube), the build will fail and will NOT drop the existing schema. This ensures that a schema that was not created by a cube build is never accidentally dropped.

Is there a risk when building a cube with a schema name that already exists in the destination database and contains the original source data? Can the process drop an existing schema by mistake?

No. The build process drops only a schema that was created by a previous build of the cube; any pre-existing schema cannot be dropped.
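A minimal sketch of the protection logic described in the two answers above; the flags and return values are hypothetical, and only the outcomes reflect this article:

```python
def plan_schema_action(schema_exists: bool, created_by_this_cube: bool) -> str:
    if not schema_exists:
        return "create the schema and build"
    if created_by_this_cube:
        # A schema created by a previous build of this cube is safe to replace.
        return "drop and recreate the schema, then build"
    # A pre-existing schema that was not created by a cube build is never dropped.
    return "fail the build and keep the existing schema"
```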

What is the S3 bucket used for?

Each build creates a folder within the configured bucket and uploads the data files to it. Once the build completes, the folder is deleted. The folder is named after the cube name (title).

Given there is only one bucket, how are the data files separated between cubes?

Each cube has its own folder, so the data for different cubes is kept in separate folders.
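For illustration, a short boto3 sketch of the staging layout and cleanup these answers describe. The folder-per-cube prefix convention and the cleanup function are assumptions made for the sketch, not Sisense's code:

```python
import boto3

def cleanup_build_folder(bucket: str, cube_title: str) -> None:
    # Data files for a build are staged under a folder (prefix) named after the
    # cube; once the build completes, the folder is removed.
    s3 = boto3.client("s3")
    prefix = f"{cube_title}/"
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if keys:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})

# Hypothetical usage after a successful build:
# cleanup_build_folder("my-b2d-staging-bucket", "MySQLtoSnowFlake")
```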

Will switching an existing cube to use the B2D feature retain all of the cube settings? For example, if Incremental was set, will the setting be applied after the first build?

Yes, all cube settings are retained and are not affected by the switchover. However, after switching to B2D, the first build of the cube is forced to run as a full build.

Is the built-in Snowflake/Redshift connector used for B2D?

Yes. The same connector that is used to source data for a build is also used to write to the destination database (e.g., Snowflake/Redshift). However, the configuration values for sourcing and for the B2D feature are separate.
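A hypothetical sketch of what "same connector, separate configuration values" means in practice; the keys and values below are assumptions for illustration, not Sisense's actual configuration schema:

```python
# The same built-in Snowflake connector is referenced twice with different
# settings: once for sourcing data, once as the B2D destination.
source_connection = {
    "connector": "Snowflake",        # built-in connector used to source data
    "warehouse": "SOURCE_WH",
    "database": "SALES_RAW",
}

b2d_destination = {
    "connector": "Snowflake",        # same built-in connector, different values
    "warehouse": "B2D_WH",           # destination warehouse is set system-wide
    "database": "ELASTICUBE_DEST",
    "schema": "MY_CUBE_SCHEMA",      # optional; auto-generated if omitted
}
```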

What if for some reason I am using a generic JDBC connector to connect to Snowflake/Redshift?

A generic JDBC connector cannot be used for Build to Destination, as B2D uses only the built-in Sisense Snowflake/Redshift connector.

If a generic connector is set up and used in the build to access Snowflake/Redshift, it will be used only for sourcing data from those databases.

What if the Snowflake/Redshift connector is set up with the old connector framework?

The toggle between the old and new connector frameworks is not taken into account when connecting to the destination database; the new framework is always used, regardless of the setting.

However, to use B2D, you should not toggle the Snowflake/Redshift connector to the old framework. Doing so will break B2D functionality.