3. Infra-as-code (Terraform)

There are several ways to deploy the solution to the cloud. In this guide, we will use Terraform as the infra-as-code tool.

Terraform is an open-source infrastructure-as-code (IaC) software tool that allows DevOps engineers to programmatically provision the cloud and on-premises resources an application requires to run.

Infrastructure as code is an IT practice that manages an application's underlying infrastructure through code. This approach to resource allocation lets developers logically manage, monitor and provision resources, rather than requiring an operations team to manually configure each required resource.

Terraform users define and enforce infrastructure configurations using HCL (HashiCorp Configuration Language), a human-readable configuration language that also supports a JSON variant. HCL's simple syntax makes it easy for DevOps teams to provision and re-provision infrastructure across multiple cloud and on-premises data centers.
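For illustration, a minimal HCL resource definition looks like this (the names below are purely illustrative and not part of the DIGIT scripts):

resource "azurerm_resource_group" "example" {
  name     = "digit-demo-rg" # illustrative name
  location = "South India"
}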

Cloud resources required for DIGIT

Before we provision the cloud resources, we need to understand and be sure about which resources terraform must provision to deploy DIGIT. The key components are: AKS (the managed Kubernetes cluster), node pools, a PostgreSQL database, persistent volumes and a load balancer.

Understand the Terraform Script

  • Ideally, one would write the terraform script from scratch using this doc.

  • Here, we have already written terraform scripts that provision production-grade DIGIT infra; you can reuse and customize them with your user-specific configuration.

  • Clone the DIGIT-DevOps repository below, which contains all the sample terraform scripts for you to leverage.

git clone --branch release https://github.com/egovernments/DIGIT-DevOps.git

cd DIGIT-DevOps/infra-as-code/terraform

### You'll see the following file structure 

├── sample-azure
│   ├── main.tf
│   ├── outputs.tf
│   ├── providers.tf
│   ├── remote-state
│   │   └── main.tf
│   └── variables.tf
└── modules
    ├── db
    │   ├── aws
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── azure
    │       ├── main.tf
    │       └── variables.tf
    ├── kubernetes
    │   └── azure
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── storage
        └── azure
            ├── main.tf
            ├── outputs.tf
            └── variables.tf

2. Change the main.tf according to your requirements. At a minimum, replace the subscription and tenant IDs in the provider block with your own.

provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=3.10.0"
  subscription_id  = "b4e1aa53-c521-44e6-8a4d-5ae107916b5b" # replace with your subscription ID
  tenant_id        = "593ce202-d1a9-4760-ba26-ae35417c00cb" # replace with your tenant ID
  client_id        = "${var.client_id}"
  client_secret    = "${var.client_secret}"
  features {}
}
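# NOTE (an aside, not from the original script): on Terraform 0.13 and newer,
# pinning the provider version inside the provider block is deprecated; the
# recommended equivalent is a required_providers block, e.g.:
#
# terraform {
#   required_providers {
#     azurerm = {
#       source  = "hashicorp/azurerm"
#       version = "=3.10.0"
#     }
#   }
# }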

resource "azurerm_resource_group" "resource_group" {
  name     = "${var.resource_group}"
  location = "${var.location}"
  tags = {
     environment = "${var.environment}"
  }
}

module "kubernetes" {
  source = "../modules/kubernetes/azure"
  environment = "${var.environment}"
  name = "${var.environment}"
  location = "${azurerm_resource_group.resource_group.location}"
  resource_group = "${azurerm_resource_group.resource_group.name}"
  client_id =  "${var.client_id}"
  client_secret = "${var.client_secret}"
  nodes = "${var.nodes}"
  vm_size = "Standard_A8_v2"
  ssh_public_key = "${var.environment}"
}

module "zookeeper" {
  source = "../modules/storage/azure"
  environment = "${var.environment}"
  itemCount = "3"
  disk_prefix = "zookeeper"
  location = "${azurerm_resource_group.resource_group.location}"
  resource_group = "${module.kubernetes.node_resource_group}"
  storage_sku = "Premium_LRS"
  disk_size_gb = "5"
  
}

module "kafka" {
  source = "../modules/storage/azure"
  environment = "${var.environment}"
  itemCount = "3"
  disk_prefix = "kafka"
  location = "${azurerm_resource_group.resource_group.location}"
  resource_group = "${module.kubernetes.node_resource_group}"
  storage_sku = "Standard_LRS"
  disk_size_gb = "50"
  
}
module "es-master" {
  source = "../modules/storage/azure"
  environment = "${var.environment}"
  itemCount = "3"
  disk_prefix = "es-master"
  location = "${azurerm_resource_group.resource_group.location}"
  resource_group = "${module.kubernetes.node_resource_group}"
  storage_sku = "Premium_LRS"
  disk_size_gb = "2"
  
}
module "es-data-v1" {
  source = "../modules/storage/azure"
  environment = "${var.environment}"
  itemCount = "2"
  disk_prefix = "es-data-v1"
  location = "${azurerm_resource_group.resource_group.location}"
  resource_group = "${module.kubernetes.node_resource_group}"
  storage_sku = "Premium_LRS"
  disk_size_gb = "50"
  
}

module "postgres-db" {
  source = "../modules/db/azure"
  server_name = "${var.environment}"
  resource_group = "${module.kubernetes.node_resource_group}"  
  sku_cores = "2"
  location = "${azurerm_resource_group.resource_group.location}"
  sku_tier = "B_Gen5_1"
  storage_mb = "51200"
  backup_retention_days = "7"
  administrator_login = "${var.db_user}"
  administrator_login_password = "${var.db_password}"
  ssl_enforce = false
  db_name = "${var.environment}"
  environment= "${var.environment}"
  db_version = "${var.db_version}"
  
}
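Before moving on, you can optionally sanity-check the edited file: terraform fmt rewrites the .tf files into canonical formatting, and terraform validate checks the configuration for errors (validate needs terraform init, covered in step 5, to have run first).

##### optional: sanity-check the edited configuration #####

terraform fmt

terraform validate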

3. Declare the variables in variables.tf

variable "environment" {

    default = "Environment_Name"

}

variable "resource_group" {

    default = "Resource_Group_Name”

}

variable "location" {

    default = "SouthIndia"

}

variable "nodes" {
    default = "4"
}

variable "db_version" {
    default = "11"
}

variable "db_user" {
    default = "demo"
}

variable "db_password" {

}

variable "client_id" {

}

variable "client_secret" {

}

Save the file and exit the editor.
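Note that db_password, client_id and client_secret deliberately have no defaults, so Terraform will prompt for them interactively during plan/apply. To avoid the prompts and keep secrets out of the .tf files, you can supply them as TF_VAR_* environment variables (the placeholder values below are yours to fill in):

##### supply sensitive variables without hardcoding them #####

export TF_VAR_db_password="<your-db-password>"

export TF_VAR_client_id="<your-service-principal-client-id>"

export TF_VAR_client_secret="<your-service-principal-secret>"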

4. Create a Terraform output file (outputs.tf) and paste the following code into the file.

output "zookeeper_storage_ids" {

  value = "${module.zookeeper.storage_ids}"

}

output "kafka_storage_ids" {

  value = "${module.kafka.storage_ids}"

}

output "es_master_storage_ids" {

  value = "${module.es-master.storage_ids}"

}

output "es_data_v1_storage_ids" {

  value = "${module.es-data-v1.storage_ids}"

}
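Once the stack has been applied (step 5), these values can be read back from the Terraform state at any time, which is handy when later attaching the disks to the Kubernetes workloads:

##### read the provisioned disk IDs back from the state #####

terraform output

terraform output zookeeper_storage_ids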

5. Terraform Execution: Infrastructure Resources Provisioning

Once you have finished declaring the resources, you can deploy them all.

  1. terraform init: initializes a working directory containing Terraform configuration files.

  2. terraform plan: creates an execution plan, letting you preview the changes Terraform will make to your infrastructure.

  3. terraform apply: executes the actions proposed in a Terraform plan to create or update infrastructure.

After the creation completes, you can see the resources in your Azure account.
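For example, with the Azure CLI you can list everything Terraform created in your resource group (substitute the name you set in var.resource_group):

##### list the provisioned resources with the Azure CLI #####

az resource list --resource-group <your-resource-group> --output table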

Now that we know what the terraform script does, the resource graph it provisions, and which custom values to supply for your environment, let's run the terraform scripts to provision the infra required to deploy DIGIT on Azure.

  1. First, cd into the directory below and run the following commands one by one, watching the output closely.

##### create the DIGIT Infra #####

cd DIGIT-DevOps/infra-as-code/terraform/sample-azure

terraform init

terraform plan -out=out.plan

terraform apply out.plan

6. Test the Kubernetes cluster

The Kubernetes tools can be used to verify the newly created cluster.

1. Once terraform apply has finished, it generates the Kubernetes configuration file (kubeconfig); alternatively, you can extract it from the Terraform state, as shown below.
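For instance, if the kubernetes module exposes the kubeconfig as a Terraform output (the output name kube_config below is an assumption; check outputs.tf in your checkout for the actual name), you can write it to a file like this:

##### write the kubeconfig to a file (assumes an output named kube_config) #####

terraform output -raw kube_config > kube_config_file_name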

2. Set an environment variable so that kubectl picks up the correct config.

export KUBECONFIG=./kube_config_file_name 

3. Verify the health of the cluster.

kubectl get nodes

You should see the details of your worker nodes, and they should all have the status Ready.
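Illustrative output (node names, count and versions will differ in your cluster):

NAME                              STATUS   ROLES   AGE   VERSION
aks-default-12345678-vmss000000   Ready    agent   10m   v1.23.5
aks-default-12345678-vmss000001   Ready    agent   10m   v1.23.5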


All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.