
Quick Start

Vault Configuration#

SOLE supports reading values and credentials stored in the vault and using them to perform actions in Snowflake.

By default, SOLE reads values from the following location/map in the vault:

```yaml
SNOWFLAKE:
  SOLE:
    ACCOUNT: <account>    # Value set in variable DATAOPS_SOLE_ACCOUNT
    USERNAME: <username>  # Value set in variable DATAOPS_SOLE_USERNAME
    PASSWORD: <password>  # Value set in variable DATAOPS_SOLE_PASSWORD
    ROLE: <role>          # Value set in variable DATAOPS_SOLE_ROLE
```

These default locations can be overridden if the credentials are stored under a different vault path.
In that case, the DATAOPS_VAULT functionality of the runners can be used to fetch the values.

Example: the account value is stored in the vault key SNOWFLAKE.PRODUCTION.ACCOUNT and the role in SNOWFLAKE.INGESTION.ROLE.
In this case, set the variable DATAOPS_SOLE_ACCOUNT to DATAOPS_VAULT(SNOWFLAKE.PRODUCTION.ACCOUNT) and DATAOPS_SOLE_ROLE to DATAOPS_VAULT(SNOWFLAKE.INGESTION.ROLE) in the variables section of the job or in config.yaml:

```yaml
variables:
  DATAOPS_SOLE_ACCOUNT: DATAOPS_VAULT(SNOWFLAKE.PRODUCTION.ACCOUNT)
  DATAOPS_SOLE_ROLE: DATAOPS_VAULT(SNOWFLAKE.INGESTION.ROLE)
```
info

As SOLE utilizes Terraform, the credential variables are converted to Terraform variables (TF_VAR_ is prepended to each variable name).
The variable DATAOPS_SOLE_ACCOUNT is duplicated as TF_VAR_DATAOPS_SOLE_ACCOUNT, and likewise for the other credentials.
If such a variable already has a value, it is overridden.
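
A minimal sketch of this duplication, using a hypothetical role value:

```yaml
variables:
  DATAOPS_SOLE_ROLE: INGESTION_ROLE
# At runtime, a Terraform copy of the variable is also set,
# overriding any existing value:
#   TF_VAR_DATAOPS_SOLE_ROLE: INGESTION_ROLE
```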

Pipeline/Project Variables#

The following variables should be set to valid values in the pipelines/includes/config/variables.yml file for SOLE to execute successfully.

| Variable | Required | Description | Value Example |
| --- | --- | --- | --- |
| DATAOPS_PREFIX | REQUIRED | Prefix added before account-level objects and non-default databases | DATAOPS_DEMO |
| DATAOPS_DATABASE_MASTER | REQUIRED | Name of the default database in the production environment | DATAOPS_DEMO_PROD |
| DATAOPS_ENV_NAME | REQUIRED (should be initialized by the before script) | Environment-specific suffix added to all account-level objects and non-default databases | - |
| CONFIGURATION_DIR | REQUIRED | Path of the directory containing the SOLE configuration | $CI_PROJECT_DIR/dataops/snowflake |
| ARTIFACT_DIRECTORY | Optional | Path of the directory to which SOLE artifacts are uploaded | $CI_PROJECT_DIR/snowflake-artifacts |
| SET_TERRAFORM_KEYS_TO_ENV | Optional | If set, exports credentials from the vault to the environment | 1 |
| LIFECYCLE_STATE_RESET | Optional | If set, deletes the existing state and re-initializes it. Useful if the state is corrupted (e.g. an object deleted externally but still present in the state) | 1 |
| DATAOPS_DEBUG | Optional | Enables debug logging | 1 |
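
For reference, a minimal pipelines/includes/config/variables.yml might look like the following. This is a sketch using the example values from the table above; DATAOPS_ENV_NAME is omitted, as the before script initializes it.

```yaml
variables:
  DATAOPS_PREFIX: DATAOPS_DEMO
  DATAOPS_DATABASE_MASTER: DATAOPS_DEMO_PROD
  CONFIGURATION_DIR: $CI_PROJECT_DIR/dataops/snowflake
  ARTIFACT_DIRECTORY: $CI_PROJECT_DIR/snowflake-artifacts
  # Optional flags; uncomment as needed
  # SET_TERRAFORM_KEYS_TO_ENV: 1
  # DATAOPS_DEBUG: 1
```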

Jobs Setup#

Below are example job definitions for performing different types of operations with SOLE.

Individual Jobs#

Each of these jobs performs a single action.

For each action, there is a job definition with its supported variables. The order of the jobs is important, as the output of one job is required by subsequent jobs.

Compile Job#

This job compiles the user configuration and generates a Terraform-supported configuration with complete namespace, dependency, and reference resolution.

```yaml
Compile Configuration:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: COMPILE
  stage: "Compile Configuration"
  script:
    - export LIFECYCLE_OBJECT_SUFFIX=$SNOWFLAKE_SUFFIX
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
      - $CI_PROJECT_DIR/dataops/snowflake
```

Validate Job#

These jobs validate the generated configuration for each resource group.

```yaml
Validate <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: VALIDATE
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Validate Configurations"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```

The value of RESOURCE_GROUP must be one of the following:

  • ACCOUNT_LEVEL
  • DATABASE
  • DATABASE_LEVEL
  • GRANT

A Validate job should be set up for each resource group so that all of the generated configuration is validated, as in the example below.
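
For instance, a Validate job for the ACCOUNT_LEVEL resource group, instantiated from the template above (a sketch; the job name is arbitrary):

```yaml
Validate ACCOUNT_LEVEL:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: VALIDATE
    LIFECYCLE_MANAGE_OBJECT: ACCOUNT_LEVEL
  stage: "Validate Configurations"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```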

Plan Jobs#

These jobs import existing, non-managed objects into the local state and generate a plan for the Apply Jobs to execute.

See the Jobs Sequence for the order in which the Plan Jobs must be set up.

```yaml
Plan <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: PLAN
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Plan <Resource-Group> Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```

The value of RESOURCE_GROUP must be one of the following:

  • ACCOUNT_LEVEL
  • DATABASE
  • DATABASE_LEVEL
  • GRANT

Apply Jobs#

These jobs execute the plan generated by the Plan Jobs.

See the Jobs Sequence for the order in which the Apply Jobs must be set up.

```yaml
Apply <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: APPLY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Apply <Resource-Group> Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```

The value of RESOURCE_GROUP must be one of the following:

  • ACCOUNT_LEVEL
  • DATABASE
  • DATABASE_LEVEL
  • GRANT
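
For instance, an Apply job instantiated for the GRANT resource group might look like this (a sketch; the stage name here matches the "Apply Objects Grants" entry from the stage list in the Stages section below):

```yaml
Apply Grants:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: APPLY
    LIFECYCLE_MANAGE_OBJECT: GRANT
  stage: "Apply Objects Grants"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```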

Destroy-Plan Jobs#

These jobs log the objects that would be destroyed by SOLE in the Destroy Jobs.

See the Jobs Sequence for the order in which the Destroy-Plan Jobs must be set up.

```yaml
Plan-Destroy <Resource-Group>:
  extends:
    - .not_running_on_master_or_qa
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: PLAN-DESTROY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Clean Up Plan <Resource-Group>"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```

The value of RESOURCE_GROUP must be one of the following:

  • ACCOUNT_LEVEL
  • DATABASE
  • DATABASE_LEVEL
  • GRANT

Destroy Jobs#

These jobs destroy all managed objects in the specified resource group, as per the output logged by the Destroy-Plan Jobs.

See the Jobs Sequence for the order in which the Destroy Jobs must be set up.

```yaml
Destroy <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: DESTROY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Clean Up <Resource-Group>"
  script:
    - /dataops
  rules:
    # If merging to master, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: never
    # If running in master, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: never
    # If merging to qa, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"'
      when: never
    # If running in qa, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "qa"'
      when: never
    # For other runs, this step is manual
    - when: manual
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
  allow_failure: false
```

The value of RESOURCE_GROUP must be one of the following:

  • ACCOUNT_LEVEL
  • DATABASE
  • DATABASE_LEVEL
  • GRANT

Jobs Sequence#

For successful execution, the jobs should be sequenced in the following order:

  1. Compile
  2. Validate All Resource Groups
  3. Plan Account-Level and Database
  4. Apply Account-Level and Database
  5. Plan Database-Level
  6. Apply Database-Level
  7. Plan Grants
  8. Apply Grants
  9. Destroy-Plan Database-Level
  10. Destroy Database-Level
  11. Destroy-Plan Account Level and Database
  12. Destroy Account Level and Database

Stages#

The following stage setup can be used as a reference for a quick setup:

```yaml
stages:
  - "Compile Configuration"
  - "Validate Configurations"
  - "Plan Account-Level Objects"
  - "Apply Account-Level Objects"
  - "Plan Database-Level Objects"
  - "Apply Database-Level Objects"
  - "Plan Objects Grants"
  - "Apply Objects Grants"
  - "Clean Up Plan Database-Level"
  - "Clean Up Database-Level"
  - "Clean Up Plan Account-Level"
  - "Clean Up Account-Level"
```
info

The stage section above focuses only on SOLE; stages required by other runners/jobs are omitted.

Aggregate Jobs#

As an alternative to executing each action individually and managing the execution order of the resource groups, Aggregate Jobs can be used.

Aggregate Jobs combine all setup and tear-down actions into a single job each.

This reduces the management overhead of SOLE and provides a simple workflow for lifecycle management.

Setup Aggregate Job#

This job handles all actions related to the creation and update of managed objects, in a sequence that ensures successful creation.

Compilation, import, plan generation, and plan apply are all executed in a single job.

```yaml
Aggregated Action:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: AGGREGATE
  stage: "Apply Account Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```

Destroy Aggregate Job#

This job handles all actions related to the deletion of managed objects, in a sequence that ensures successful deletion.

```yaml
Aggregated-Destroy Action:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: AGGREGATE-DESTROY
  stage: "Clean Up Account-Level"
  script:
    - export LIFECYCLE_OBJECT_SUFFIX=$SNOWFLAKE_SUFFIX
    - /dataops
  rules:
    # If merging to master, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: never
    # If running in master, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: never
    # If merging to qa, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"'
      when: never
    # If running in qa, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "qa"'
      when: never
    # For other runs, this step is manual
    - when: manual
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```