# Quick Start

## Vault Configuration

SOLE supports reading values and using credentials stored in the vault to perform actions in Snowflake.
By default, SOLE reads values from the following location/map in the vault:
```yaml
SNOWFLAKE:
  SOLE:
    ACCOUNT: <account>   # Value set in variable DATAOPS_SOLE_ACCOUNT
    USERNAME: <username> # Value set in variable DATAOPS_SOLE_USERNAME
    PASSWORD: <password> # Value set in variable DATAOPS_SOLE_PASSWORD
    ROLE: <role>         # Value set in variable DATAOPS_SOLE_ROLE
```
These defaults can be overridden if the credentials are present under a different vault path. In that case, the `DATAOPS_VAULT()` functionality of the runners can be used to fetch the values.
Example: the value of the account is present in the vault key `SNOWFLAKE.PRODUCTION.ACCOUNT` and the role in `SNOWFLAKE.INGESTION.ROLE`. In this case, the variable `DATAOPS_SOLE_ACCOUNT` can be set to `DATAOPS_VAULT(SNOWFLAKE.PRODUCTION.ACCOUNT)` and `DATAOPS_SOLE_ROLE` to `DATAOPS_VAULT(SNOWFLAKE.INGESTION.ROLE)` in the variables section of the job or `config.yaml`:
```yaml
variables:
  DATAOPS_SOLE_ACCOUNT: DATAOPS_VAULT(SNOWFLAKE.PRODUCTION.ACCOUNT)
  DATAOPS_SOLE_ROLE: DATAOPS_VAULT(SNOWFLAKE.INGESTION.ROLE)
```
:::info

As SOLE uses Terraform, the credentials/variables are converted into Terraform variables: `TF_VAR_` is prepended to each variable name. For example, the variable `DATAOPS_SOLE_ACCOUNT` is duplicated as `TF_VAR_DATAOPS_SOLE_ACCOUNT`, and similarly for the other credentials. If a value already exists for such a `TF_VAR_` variable, it is overridden.

:::
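To illustrate the duplication, assuming a hypothetical account value of `my_account`, the job environment ends up with both of these variables set:

```yaml
# Illustrative sketch only - the account value is hypothetical.
# SOLE duplicates each credential variable with a TF_VAR_ prefix so that
# Terraform can read it as an input variable; any pre-existing TF_VAR_ value is overridden.
DATAOPS_SOLE_ACCOUNT: my_account
TF_VAR_DATAOPS_SOLE_ACCOUNT: my_account
```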
## Pipeline/Project Variables

The following variables should be set to valid values in the `pipelines/includes/config/variables.yml` file for successful execution of SOLE. An example snippet is shown after the table.
| Variable | Required | Description | Value Example |
| --- | --- | --- | --- |
| `DATAOPS_PREFIX` | REQUIRED | Prefix added before account-level objects and non-default databases | `DATAOPS_DEMO` |
| `DATAOPS_DATABASE_MASTER` | REQUIRED | Name of the default database in the production environment | `DATAOPS_DEMO_PROD` |
| `DATAOPS_ENV_NAME` | REQUIRED - should be initialized by the before script | Environment-specific suffix added to all account-level objects and non-default databases | - |
| `CONFIGURATION_DIR` | REQUIRED | Path of the directory where the configuration for SOLE is present | `$CI_PROJECT_DIR/dataops/snowflake` |
| `ARTIFACT_DIRECTORY` | Optional | Path of the directory to which artifacts from SOLE are uploaded | `$CI_PROJECT_DIR/snowflake-artifacts` |
| `SET_TERRAFORM_KEYS_TO_ENV` | Optional | If set, exports the credentials from the vault to the environment | `1` |
| `LIFECYCLE_STATE_RESET` | Optional | If set, deletes the existing state and re-initializes it. Useful if the state is corrupted (e.g. an object was deleted externally but still exists in the state) | `1` |
| `DATAOPS_DEBUG` | Optional | Enables debug logging | `1` |
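For instance, a minimal `pipelines/includes/config/variables.yml` using the example values from the table could look as follows (`DATAOPS_ENV_NAME` is not set here, as it is initialized by the before script):

```yaml
variables:
  ## Required
  DATAOPS_PREFIX: DATAOPS_DEMO
  DATAOPS_DATABASE_MASTER: DATAOPS_DEMO_PROD
  CONFIGURATION_DIR: $CI_PROJECT_DIR/dataops/snowflake

  ## Optional
  ARTIFACT_DIRECTORY: $CI_PROJECT_DIR/snowflake-artifacts
  SET_TERRAFORM_KEYS_TO_ENV: 1
  # DATAOPS_DEBUG: 1           # uncomment to enable debug logging
  # LIFECYCLE_STATE_RESET: 1   # uncomment only if the state needs to be reset
```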
## Jobs Setup

Below are example job definitions for performing the different types of operations with SOLE.
### Individual Jobs

These jobs each perform a single action. For each action there is a job definition with its supported variables. The order of the jobs is important, as the output of one job is required by subsequent jobs.
#### Compile Job

This job compiles the user configuration and generates Terraform-supported configuration with complete namespace, dependency, and reference resolution.
```yaml
Compile Configuration:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: COMPILE
  stage: "Compile Configuration"
  script:
    - export LIFECYCLE_OBJECT_SUFFIX=$SNOWFLAKE_SUFFIX
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
      - $CI_PROJECT_DIR/dataops/snowflake
```
#### Validate Job

These jobs validate the generated configuration for each resource group.
```yaml
Validate <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: VALIDATE
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Validate Configurations"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```
The value of `RESOURCE_GROUP` must be one of the following:

- `ACCOUNT_LEVEL`
- `DATABASE`
- `DATABASE_LEVEL`
- `GRANT`
A Validate job should be set up for each resource group so that all of the generated configuration is validated, as in the example below.
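For instance, the template above filled in for the `ACCOUNT_LEVEL` resource group looks like this (the job name is only an example; the same substitution applies to the other resource groups):

```yaml
Validate Account Level:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: VALIDATE
    LIFECYCLE_MANAGE_OBJECT: ACCOUNT_LEVEL
  stage: "Validate Configurations"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```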
#### Plan Jobs

These jobs import existing, non-managed objects into the local state and generate a plan for the Apply Jobs to execute.

See the Jobs Sequence for the order in which the Plan Jobs must be set up.
```yaml
Plan <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: PLAN
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Plan <Resource-Group> Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```
The value of `RESOURCE_GROUP` must be one of the following (a filled-in example is shown after the list):

- `ACCOUNT_LEVEL`
- `DATABASE`
- `DATABASE_LEVEL`
- `GRANT`
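As an illustration, the Plan template filled in for the `DATABASE_LEVEL` resource group might look like this, with the stage name matching the Stages section below (the job name is only an example; the Apply, Plan-Destroy, and Destroy templates are filled in the same way):

```yaml
Plan Database Level:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: PLAN
    LIFECYCLE_MANAGE_OBJECT: DATABASE_LEVEL
  stage: "Plan Database-Level Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```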
#### Apply Jobs

These jobs execute the plan generated by the Plan Jobs.

See the Jobs Sequence for the order in which the Apply Jobs must be set up.
```yaml
Apply <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: APPLY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Apply <Resource-Group> Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```
The value of `RESOURCE_GROUP` must be one of the following:

- `ACCOUNT_LEVEL`
- `DATABASE`
- `DATABASE_LEVEL`
- `GRANT`
#### Destroy-Plan Jobs

These jobs log the objects that would be destroyed by SOLE in the Destroy Jobs.

See the Jobs Sequence for the order in which the Destroy-Plan Jobs must be set up.
```yaml
Plan-Destroy <Resource-Group>:
  extends:
    - .not_running_on_master_or_qa
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: PLAN-DESTROY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Clean Up Plan <Resource-Group>"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```
The value of `RESOURCE_GROUP` must be one of the following:

- `ACCOUNT_LEVEL`
- `DATABASE`
- `DATABASE_LEVEL`
- `GRANT`
#### Destroy Jobs

These jobs destroy all managed objects of the specified resource group, as per the output logged by the Destroy-Plan Jobs.

See the Jobs Sequence for the order in which the Destroy Jobs must be set up.
```yaml
Destroy <Resource-Group>:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: DESTROY
    LIFECYCLE_MANAGE_OBJECT: <RESOURCE_GROUP>
  stage: "Clean Up <Resource-Group>"
  script:
    - /dataops
  rules:
    # If merging to master, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: never
    # If running in master, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: never
    # If merging to qa, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"'
      when: never
    # If running in qa, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "qa"'
      when: never
    # For other runs, this step is manual
    - when: manual
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
  allow_failure: false
```
The value of `RESOURCE_GROUP` must be one of the following:

- `ACCOUNT_LEVEL`
- `DATABASE`
- `DATABASE_LEVEL`
- `GRANT`
### Jobs Sequence

For a successful execution, the jobs should be set up in the following order:
- Compile
- Validate All Resource Groups
- Plan Account-Level and Database
- Apply Account-Level and Database
- Plan Database-Level
- Apply Database-Level
- Plan Grants
- Apply Grants
- Destroy-Plan Database-Level
- Destroy Database-Level
- Destroy-Plan Account-Level and Database
- Destroy Account-Level and Database
### Stages

The following stage setup can be used as a reference for a quick setup:
```yaml
stages:
  - "Compile Configuration"
  - "Validate Configurations"
  - "Plan Account-Level Objects"
  - "Apply Account-Level Objects"
  - "Plan Database-Level Objects"
  - "Apply Database-Level Objects"
  - "Plan Objects Grants"
  - "Apply Objects Grants"
  - "Clean Up Plan Database-Level"
  - "Clean Up Database-Level"
  - "Clean Up Plan Account-Level"
  - "Clean Up Account-Level"
```
:::info

The stages section above focuses only on SOLE. Any other stages required by other runners/jobs are omitted.

:::
### Aggregate Jobs

As an alternative to executing each action individually and managing the order of execution for the resource groups, Aggregate Jobs can be used. Aggregate Jobs combine all setup actions and all tear-down actions into a single job each. This reduces the level of management required for SOLE and provides an easy workflow for lifecycle management.
#### Setup Aggregate Job

This job handles all actions related to the creation and update of managed objects in the sequence required for successful creation. Compilation, import, plan generation, and plan apply are all executed in a single job.
```yaml
Aggregated Action:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: AGGREGATE
  stage: "Apply Account Objects"
  script:
    - /dataops
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```
#### Destroy Aggregate Job

This job handles all actions related to the deletion of managed objects in the sequence required for successful deletion.
```yaml
Aggregated-Destroy Action:
  extends:
    - .agent_tag
  image: $DATAOPS_SNOWFLAKELIFECYCLE_RUNNER_IMAGE
  variables:
    LIFECYCLE_ACTION: AGGREGATE-DESTROY
  stage: "Clean Up Account-Level"
  script:
    - export LIFECYCLE_OBJECT_SUFFIX=$SNOWFLAKE_SUFFIX
    - /dataops
  rules:
    # If merging to master, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: never
    # If running in master, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "master"'
      when: never
    # If merging to qa, never allow the destroy to be run
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"'
      when: never
    # If running in qa, never allow the destroy to be run
    - if: '$CI_COMMIT_REF_NAME == "qa"'
      when: never
    # For other runs, this step is manual
    - when: manual
  artifacts:
    when: always
    paths:
      - $ARTIFACT_DIRECTORY
```