
Quick Start Guide for Config0

Summary by Example

  1. Fork the repository
    Fork the config0 contribution repository and name it “digital-ocean”

  2. Set environment variable

    export GITHUB_NICKNAME=<github-username>
    
  3. Clone, prepare, and commit code

    git clone git@github.com:$GITHUB_NICKNAME/digital-ocean.git
    cd digital-ocean
    
    # Remove template files
    rm -rf execgroups/_config0_configs/ec2_server
    rm -rf execgroups/_config0_configs/template/_chrootfiles/var/tmp/terraform
    rm -rf stacks/_config0_configs/aws_ec2_server
    rm -rf stacks/_config0_configs/template
    
    # Prepare DOKS files
    mv sample/doks/OpenTofu execgroups/_config0_configs/template/_chrootfiles/var/tmp/terraform
    mv execgroups/_config0_configs/template execgroups/_config0_configs/doks
    mv sample/doks/Config0/stack stacks/_config0_configs/doks
    
    # Update configuration
    sed -i "s/config0-publish:::do::doks/$GITHUB_NICKNAME:::digital-ocean::doks/g" stacks/_config0_configs/doks/_main/run.py
    
    # Commit changes
    git add .
    git commit -a -m "testing and adding doks terraform sample workflow on config0"
    git push origin main
    
  4. Register with Config0
    On Config0 SaaS UI:
    → Add Stack
    → Register Repo
    → Update Stack

  5. Access your resources

    # Terraform/OpenTofu execgroup that contains the "imported" or "glued" code
    https://api-app.config0.com/web_api/v1.0/exec/groups/$GITHUB_NICKNAME/digital-ocean/doks
    
    # Terraform/OpenTofu immutable workflow/stack
    https://api-app.config0.com/web_api/v1.0/stacks/$GITHUB_NICKNAME/doks
    
    # Example:
    https://api-app.config0.com/web_api/v1.0/exec/groups/williaumwu/digital-ocean/doks
    https://api-app.config0.com/web_api/v1.0/stacks/williaumwu/doks
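
As a small sketch, these endpoints can be composed from your GitHub nickname; the base URL and path layout are taken directly from the examples above:

```python
BASE = "https://api-app.config0.com/web_api/v1.0"

def execgroup_url(nickname: str, repo: str, execgroup: str) -> str:
    """URL of the execgroup holding the imported/glued OpenTofu code."""
    return f"{BASE}/exec/groups/{nickname}/{repo}/{execgroup}"

def stack_url(nickname: str, stack: str) -> str:
    """URL of the immutable workflow/stack."""
    return f"{BASE}/stacks/{nickname}/{stack}"

print(execgroup_url("williaumwu", "digital-ocean", "doks"))
print(stack_url("williaumwu", "doks"))
```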
    

Create config0/config0.yml File to Launch

Example for username “williaumwu”:

 ```yaml
 global:
    arguments: 
      do_region: lon1
    metadata:   
      labels:
         general: 
           environment: dev
           purpose: testing
           provider: doks
         doks: 
           platform: kubernetes
            component: managed_k8
 infrastructure:
    doks:
      stack_name: williaumwu:::doks
      arguments:
         doks_cluster_name: config0-authoring-walkthru
         doks_cluster_version: 1.29.1-do.0
         doks_cluster_pool_size: s-1vcpu-2gb-amd
         doks_cluster_autoscale_max: 4
         doks_cluster_autoscale_min: 1
      metadata:
         labels:
           - general
           - doks
         credentials:
           - reference: do-token
             orchestration: true
 ```
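
A quick sanity check before launching can catch structural mistakes in the file. The sketch below is not a Config0 tool — just an illustrative validator over the dict a YAML loader would produce from the example above:

```python
def validate_config0(cfg):
    """Lightweight sanity checks for a parsed config0.yml dict."""
    errors = []
    infra = cfg.get("infrastructure") or {}
    if not infra:
        errors.append("missing 'infrastructure' section")
    for name, spec in infra.items():
        # stack_name must be fully qualified: <repo_owner>:::<stack_name>
        stack_name = spec.get("stack_name", "")
        if ":::" not in stack_name:
            errors.append(f"{name}: stack_name '{stack_name}' is not <owner>:::<stack>")
        # every label referenced by a stack must be defined under global.metadata.labels
        labels = (spec.get("metadata") or {}).get("labels", [])
        defined = ((cfg.get("global") or {}).get("metadata") or {}).get("labels", {})
        for label in labels:
            if label not in defined:
                errors.append(f"{name}: label '{label}' not defined under global.metadata.labels")
    return errors

# Abbreviated dict mirroring the YAML example above
EXAMPLE = {
    "global": {
        "arguments": {"do_region": "lon1"},
        "metadata": {"labels": {"general": {"environment": "dev"},
                                "doks": {"platform": "kubernetes"}}},
    },
    "infrastructure": {
        "doks": {
            "stack_name": "williaumwu:::doks",
            "arguments": {"doks_cluster_name": "config0-authoring-walkthru"},
            "metadata": {"labels": ["general", "doks"]},
        }
    },
}

assert validate_config0(EXAMPLE) == []
```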

OpenTofu Integration Details By Example

By leveraging Config0 helpers, you connect your existing OpenTofu code and create an immutable OpenTofu-based workflow with a single entry point. This entry point can be launched directly or called by other Config0 automation stacks.

The process consists of:

  1. Copying your existing OpenTofu-based code to a repository registered with Config0
  2. Creating a Config0 stack file that references the OpenTofu-based code
  3. Triggering Config0 to pick up the code changes

The following example demonstrates converting existing OpenTofu code for Digital Ocean Kubernetes Service (DOKS) into an OpenTofu-based workflow.

Prerequisites - Fork Starter Repository

  1. Fork the config0 contribution repository and name it “digital-ocean”
  2. Register the repository with Config0 platform
  3. Clone your newly forked repository:
git clone https://github.com/<your username>/digital-ocean.git

Step 1: Copy OpenTofu Code

  1. Rename execgroup “template” to “doks”:
mv digital-ocean/execgroups/_config0_configs/template digital-ocean/execgroups/_config0_configs/doks
  2. Copy the sample DOKS code¹:
rm -rf digital-ocean/execgroups/_config0_configs/doks/_chrootfiles/var/tmp/terraform
cp -rp digital-ocean/OpenTofu digital-ocean/execgroups/_config0_configs/doks/_chrootfiles/var/tmp/terraform
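
As noted in the footnote¹, any existing backend configuration must be removed because the helpers generate backend.tf automatically. A small illustrative check (not part of Config0) that scans the copied code for leftover backend blocks:

```python
import os

def files_with_backend(tf_dir):
    """Return .tf files that still declare a terraform backend block.

    The Config0 helpers generate backend.tf automatically, so any
    pre-existing backend configuration should be removed first.
    """
    hits = []
    for root, _dirs, names in os.walk(tf_dir):
        for name in names:
            if not name.endswith(".tf"):
                continue
            path = os.path.join(root, name)
            with open(path) as fh:
                if 'backend "' in fh.read():
                    hits.append(path)
    return hits

# Example, using the directory from Step 1:
# files_with_backend("digital-ocean/execgroups/_config0_configs/doks/_chrootfiles/var/tmp/terraform")
```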

Step 2: Create Config0 Stack

Copy Config0 Files

cp -rp digital-ocean/Config0/stack/_documentation/README.md digital-ocean/stacks/_config0_configs/template/_documentation/README.md
cp -rp digital-ocean/Config0/stack/_documentation/metadata.yml digital-ocean/stacks/_config0_configs/template/_documentation/metadata.yml
cp -rp digital-ocean/Config0/stack/_main/run.py digital-ocean/stacks/_config0_configs/template/_main/run.py

The run.py file serves as the stack’s entry point and workflow file. The file structure includes:

  • Section 1: Declares variables for the stack (mostly corresponding to OpenTofu variables)
  • Section 2: Specifies the execgroup and changes it to <repo_owner>:::digital-ocean::doks
  • Section 3: References the stack responsible for creating inputs/outputs and executing OpenTofu code
  • Section 4: Initializes all stack attributes
  • Section 5: Specifies values to upload as secrets to AWS SSM Parameter Store
  • Section 6: Sets the timeout for OpenTofu execution
  • Section 7: Initializes the OpenTofu helper with specific parameters
  • Section 8: Maps and adds additional keys
  • Section 9: Specifies keys to display on the SaaS UI
  • Section 10: Finalizes the tf_executor
  • Section 11: Returns the stack results

Full Example Code

from config0_publisher.terraform import TFConstructor

def run(stackargs):

    # instantiate authoring stack
    stack = newStack(stackargs)

    # Section 1:
    # Add variables for the stack (many fetched from OpenTofu variables)
    stack.parse.add_required(key="doks_cluster_name",
                             tags="tfvar,db",
                             types="str")

    stack.parse.add_required(key="do_region",
                             tags="tfvar,db",
                             types="str",
                             default="lon1")

    stack.parse.add_optional(key="doks_cluster_version",
                             tags="tfvar,db",
                             types="str",
                             default="1.29.1-do.0")

    stack.parse.add_optional(key="doks_cluster_pool_size",
                             tags="tfvar",
                             types="str",
                             default="s-1vcpu-2gb-amd")

    stack.parse.add_optional(key="doks_cluster_pool_node_count",
                             tags="tfvar",
                             types="int",
                             default="1")

    stack.parse.add_optional(key="doks_cluster_autoscale_min",
                             tags="tfvar",
                             types="int",
                             default="1")

    stack.parse.add_optional(key="doks_cluster_autoscale_max",
                             tags="tfvar",
                             types="int",
                             default="3")

    # Section 2:
    # Declare execution groups - for simplicity we alias it as "tf_execgroup";
    # the execgroup must be fully qualified: <repo_owner>:::<repo_name>::<execgroup_name>
    stack.add_execgroup("config0-publish:::do::doks",
                        "tf_execgroup")

    # Section 3:
    # Add substack - for OpenTofu it will almost always be config0-publish:::tf_executor
    stack.add_substack("config0-publish:::tf_executor")

    # Section 4:
    # Initialize Variables in stack
    stack.init_variables()
    stack.init_execgroups()
    stack.init_substacks()

    # Section 5:
    # Upload sensitive values to the SSM Parameter Store; the stored objects
    # will automatically expire and be removed
    ssm_obj = {
        "DIGITALOCEAN_TOKEN":stack.inputvars["DO_TOKEN"],
        "DIGITALOCEAN_ACCESS_TOKEN":stack.inputvars["DO_TOKEN"]
    }

    # Section 6:
    # if the timeout exceeds 600 seconds, CodeBuild executes the tf run;
    # otherwise a Lambda function is used, which is faster since Lambda
    # cold starts are shorter than CodeBuild startup
    stack.set_variable("timeout",600)

    # Section 7:
    # use the terraform constructor (helper)
    # but this is optional
    tf = TFConstructor(stack=stack,
                       execgroup_name=stack.tf_execgroup.name,
                       provider="do",
                       ssm_obj=ssm_obj,
                       resource_name=stack.doks_cluster_name,
                       resource_type="doks")

    # Section 8:
    # keys to map and include in db fields
    tf.include(maps={"cluster_id": "id",
                     "doks_version": "version"})

    # Section 9:
    # keys to publish and display in SaaS UI
    tf.output(keys=["doks_version",
                    "do_region",
                    "service_subnet",
                    "urn",
                    "vpc_uuid",
                    "endpoint"])

    # Section 10:
    # Finalize the tf_executor
    stack.tf_executor.insert(display=True,
                             **tf.get())

    # Section 11:
    # return results
    return stack.get_results()
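
Conceptually, the arguments tagged `tfvar` in Section 1 are passed through to the OpenTofu run as variables. The sketch below is not Config0's actual implementation — only an illustration of that filtering, using hypothetical variable specs:

```python
import json

# Hypothetical specs mirroring Section 1; Config0's real internals differ.
SPECS = [
    {"key": "doks_cluster_name", "tags": "tfvar,db", "value": "config0-authoring-walkthru"},
    {"key": "do_region", "tags": "tfvar,db", "value": "lon1"},
    {"key": "doks_cluster_autoscale_max", "tags": "tfvar", "value": 4},
    {"key": "internal_only", "tags": "db", "value": "not-a-tfvar"},
]

def tfvars_json(specs):
    """Collect values tagged 'tfvar' into a terraform.tfvars.json-style dict."""
    return {s["key"]: s["value"]
            for s in specs
            if "tfvar" in s["tags"].split(",")}

print(json.dumps(tfvars_json(SPECS), indent=2))
```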

Create Config0 Documentation

  • Create a README file for the stack with a detailed description and input variables
  • Generate metadata for the stack, specifying a release version to track changes and adding relevant tags

Step 3: Trigger an Update on Config0

  • Check in your infrastructure-as-code and the corresponding stack (workflow) to the repository
  • Trigger Config0 to upload the repository to the Config0 platform

¹ Please ensure you remove any existing backend configuration, as the helpers will automatically generate the backend.tf file.