Since there are many DIGIT services and the development code is spread across various git repos, you need to understand the concept of CI/CD-as-a-service, which is open sourced. This page also guides you through the process of creating a CI/CD pipeline.
To integrate any new service/app into the CI/CD, the following is the starting point:
Once the desired service is ready for integration, decide the service name, the type of service, and whether DB migration is required or not. While you commit the source code of the service to the git repository, the following file should be added with the relevant details mentioned below:
build-config.yml – present under the build directory in the repository
This file contains the details used to create the automated Jenkins pipeline job for your newly created service.
While integrating a new service/app, the above content needs to be added to the build-config.yml file of that app's repository. For example, if we are on-boarding a new service called egov-test, then the build-config.yml should be updated as mentioned below.
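A minimal sketch of the expected entry, assuming the build-config.yml convention used in existing eGov repositories (keys such as config, name, build, work-dir, image-name and dockerfile); verify the exact field names and paths against an existing build-config.yml before committing:

```yaml
config:
  - name: builds/DIGIT-OSS/core-services/egov-test   # Jenkins job path; folder structure mirrors the repo
    build:
      - work-dir: core-services/egov-test             # directory of the service source (assumed path)
        image-name: egov-test                          # docker image to be built and pushed
        dockerfile: build/maven/Dockerfile             # dockerfile used for the build (assumed)
```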
If a job requires multiple images to be created (e.g. DB migration), then it should be added as below:
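For instance, a hedged sketch where the same job also builds a separate DB migration image from the flyway scripts directory (paths and image names here are illustrative):

```yaml
config:
  - name: builds/DIGIT-OSS/core-services/egov-test
    build:
      - work-dir: core-services/egov-test
        image-name: egov-test
        dockerfile: build/maven/Dockerfile
      - work-dir: core-services/egov-test/src/main/resources/db   # flyway DB migration scripts (assumed path)
        image-name: egov-test-db                                    # separate DB migration image
```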
Note - If a new repository is created, then the build-config.yml should be created under the build folder and the config values added to it.
The git repository URL is then added to the Job Builder parameters.
When the Jenkins job builder is executed, the CI pipeline gets created automatically based on the above details in build-config.yml. For example, the egov-test job will be created under the builds/DIGIT-OSS/core-services folder in Jenkins because the build-config was edited under core-services, and this should be done on the "master" branch only. Once the pipeline job is created, it can be executed for any feature branch with build parameters (specifying which branch is to be built – master or any feature branch).
As a result of the pipeline execution, the respective app/service docker image will be built and pushed to the Docker repository.
On the repo, provide read-only access to the GitHub user (created during CI/CD deployment).
The Jenkins CI pipeline is configured and managed 'as code'.
Job Builder – Job Builder is a generic Jenkins job which creates the Jenkins pipelines automatically; these are then used to build the application, create its docker image and push the image to the docker repository. The Job Builder job requires the git repository URL as a parameter. It clones the respective git repository, reads the build/build-config.yml file of that repository and uses it to create the service build job.
Check and add your repo's SSH URL in ci.yaml.
If the git repository SSH URL is available, build the Job Builder job.
If the git repository URL is not available, please check with the team and add it.
The services are deployed and managed on a Kubernetes cluster in cloud platforms like AWS, Azure, GCP, OpenStack, etc. Here, we use helm charts to manage and generate the Kubernetes manifest files, which are then used for deployment to the respective Kubernetes cluster. Each service is created as a chart which will have the below-mentioned files in it.
To deploy a new service, we need to create a helm chart for it. The chart should be created under the charts/helm directory in the DIGIT-DevOps repository.
Name of the service? test-service
Application Type? NA
Kubernetes health checks to be enabled? Yes
Flyway DB migration container necessary? No
Expose service to the internet? Yes
Route through API gateway [zuul]? No
Context path? hello
The generated chart will have the following files.
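The generated chart typically includes a Chart.yaml, a values.yaml and the templates it needs. As a rough, illustrative sketch only (the exact keys depend on the chart templates in DIGIT-DevOps), a values.yaml reflecting the answers above might look like:

```yaml
# values.yaml — illustrative sketch; key names are assumptions
labels:
  app: "test-service"
  group: "core"

image:
  repository: "test-service"

replicas: "1"

healthChecks:
  enabled: true                      # Kubernetes health checks enabled
  livenessProbePath: "/hello/health"
  readinessProbePath: "/hello/health"

ingress:
  enabled: true                      # exposed to the internet
  zuul: false                        # not routed through the API gateway
  context: "hello"                   # context path

initContainers:
  dbMigration:
    enabled: false                   # no flyway DB migration container
```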
This chart can also be modified further based on user requirements.
The deployment of manifests to the Kubernetes cluster is made very simple and easy. We have state- and environment-specific Jenkins jobs. We need to provide the image name or the service name in the respective Jenkins deployment job.
The deployment Jenkins job internally performs the following operations:
Reads the image name or the service name given and finds the chart that is specific to it.
Generates the Kubernetes manifest files from the chart using the helm template engine.
Executes the deployment manifests with the specified docker image(s) against the Kubernetes cluster.
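For a rough idea of what the job automates, the equivalent manual steps are sketched below (the chart path, environment file name and the image.tag value are assumptions):

```sh
# Render the manifests from the chart and apply them to the cluster.
cd DIGIT-DevOps/deploy-as-code/helm
helm template egov-test ./charts/core-services/egov-test \
  -f environments/<your-env>.yaml \
  --set image.tag=<docker-image-tag> \
  | kubectl apply -f -
```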
CI/CD setup
GitHub Organization account
Fork the below repos to your GitHub organization account: 1. https://github.com/egovernments/DIGIT-DevOps and 2. https://github.com/egovernments/CIOps
Go lang (version 1.13.X)
AWS account with admin access to provision the EKS service. You can always subscribe to a free AWS account to learn the basics and try, but there is a limit to what is offered for free; for this demo you need to have a commercial subscription to the EKS service. If you want to try it out for a day or two, it might cost you about Rs 500 - 1000. (Note: post the demo, for the internal folks, eGov will provide 2-3 hrs of time-bound access to eGov's AWS account based on the request and the available number of slots per day.)
Install kubectl on your local machine, which helps you interact with the Kubernetes cluster.
Install Helm, which helps you package the services along with the configurations, envs, secrets, etc. into Kubernetes manifests.
Install terraform (version 0.14.10) for the Infra-as-Code (IaC) to provision cloud resources as code with the desired resource graph; it also helps destroy the cluster in one go.
Install AWS CLI on your local machine so that you can use AWS CLI commands to provision and manage the cloud resources on your account.
Install AWS IAM Authenticator, which helps you authenticate your connection from your local machine so that you are able to deploy DIGIT services.
Use the AWS IAM User credentials provided for the Terraform (Infra-as-code) to connect with your AWS account and provision the cloud resources.
You'll get a Secret Access Key and Access Key ID. Save them safely.
Open the terminal and run the following command, assuming you have already installed the AWS CLI and saved the credentials. (Provide the credentials; you can leave the region and output format blank.)
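A quick sketch of the interaction (the key values shown are placeholders for the credentials you saved):

```sh
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]:          <leave blank>
# Default output format [None]:        <leave blank>
```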
The above will create the following file on your machine at ~/.aws/credentials (i.e. /Users/<username>/.aws/credentials on macOS).
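You can confirm it; the content will look roughly like this (values are placeholders):

```sh
cat ~/.aws/credentials
# [default]
# aws_access_key_id = <your-access-key-id>
# aws_secret_access_key = <your-secret-access-key>
```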
Terraform helps you build a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by terraform to deploy CI/CD.
The following is the resource graph that we are going to provision using terraform in a standard way so that every time and for every env, it'll have the same infra.
EKS Control Plane (Kubernetes Master)
Worker node group (VMs with the estimated number of vCPUs, memory)
EBS Volumes (Persistent Volumes)
VPCs (Private network)
Users to access, deploy and read-only
Ideally, one would write the terraform script from scratch using this doc.
Here we have already written the terraform script that provisions the production-grade DIGIT Infra and can be customized with the specified configuration.
Let's clone the DIGIT-DevOps GitHub repo, where the terraform script to provision the EKS cluster is available; below is the structure of the files.
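For example (the clone URL is the repo referenced above, and the terraform directory is the one referred to later on this page):

```sh
git clone https://github.com/egovernments/DIGIT-DevOps.git
cd DIGIT-DevOps/Infra-as-code/terraform
ls    # lists the terraform modules and env-specific folders such as egov-cicd
```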
Here you will find a main.tf under each of the modules that has the provisioning definition for resources like the EKS cluster, storage, etc. All these are modularized and react as per the customized options provided.
Example:
VPC Resources:
VPC
Subnets
Internet Gateway
Route Table
EKS Cluster Resources:
IAM Role to allow EKS service to manage other AWS services
EC2 Security Group to allow networking traffic with EKS cluster
EKS Cluster
EKS Worker Nodes Resources:
IAM role allowing Kubernetes actions to access other AWS services
EC2 Security Group to allow networking traffic
Data source to fetch latest EKS worker AMI
AutoScaling Launch Configuration to configure worker instances
AutoScaling Group to launch worker instances
Storage Module
Configuration in this directory creates EBS volumes and attaches them.
The following main.tf will create an S3 bucket to store the terraform state of each execution, to keep track of it.
The following main.tf contains the detailed resource definitions that need to be provisioned; please have a look at it.
Dir: DIGIT-DevOps/Infra-as-code/terraform/egov-cicd
You can define your configurations in variables.tf and provide the env-specific cloud requirements, so that the same terraform template can be reused with customized configurations.
Following are the values that you need to mention in the following files; the blank ones will be prompted for input during execution.
variables.tf
Use this URL https://keybase.io/ to create your own PGP key. This will create both a public and a private key on your machine; upload the public key into the keybase account that you have just created, give it a name, and ensure that you mention it in your terraform. This allows all the sensitive information to be encrypted.
For example, the keybase user in eGov's case is "egovterraform"; it needs to be created and its public key uploaded here - https://keybase.io/egovterraform/pgp_keys.asc
You can use this portal to decrypt your secret key. To decrypt the PGP message, upload the PGP message, the PGP private key and the passphrase.
Now that we know what the terraform script does, the resource graph it provisions and what custom values should be given with respect to your env, let's begin running the terraform scripts to provision the infra required to deploy DIGIT on AWS.
First, cd into the following directory and run the following commands one by one, watching the output closely.
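The commands, assuming the standard terraform workflow against the directory mentioned above:

```sh
cd DIGIT-DevOps/Infra-as-code/terraform/egov-cicd

terraform init    # initializes providers, modules and the remote state backend
terraform plan    # shows the resource graph that will be created; review it
terraform apply   # provisions the resources; type "yes" when prompted to confirm
```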
Upon successful execution, the following resources get created, which can be verified by the command "terraform output".
s3 bucket: to store terraform state.
Network: VPC, security groups.
IAM users auth: using keybase to create admin, deployer and user keys (see the keybase instructions above).
EKS cluster: with master(s) & worker node(s).
Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
Use this link to get the kubeconfig from EKS, so that you can connect to the cluster from your local machine and deploy DIGIT services to the cluster.
Finally, verify that you are able to connect to the cluster by running the following command.
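A hedged example of fetching the kubeconfig and verifying connectivity (the cluster name and region are placeholders; use the values from your terraform configuration/output):

```sh
# Write/update the kubeconfig entry for the new EKS cluster
aws eks update-kubeconfig --region <region> --name <cluster-name>

kubectl config current-context   # should point to the new EKS cluster
kubectl get nodes                # worker nodes should be listed in Ready state
```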
Voila! All set, and now you can go deploy Jenkins...
Post infra setup (Kubernetes cluster), we start with deploying Jenkins and the kaniko-cache-warmer.
Sub Domain to expose CI/CD URL
GitHub Oauth App
Docker hub account details (username and password)
SSL Certificate for the sub-domain
Prepare a ci.yaml master config file and a ci-secrets.yaml (you can name these files as you wish), which will have the following configurations.
Credentials, secrets (you need to encrypt them using sops and create the ci-secrets.yaml separately)
Check and update the ci-secrets.yaml details (like the GitHub OAuth app clientId and clientSecret, GitHub user details gitReadSshPrivateKey and gitReadAccessToken, etc.)
To create the Jenkins namespace, mark this flag as true.
Add your env's kubeconfigs under kubConfigs, like https://github.com/egovernments/DIGIT-DevOps/blob/release/deploy-as-code/helm/environments/ci-demo-secrets.yaml#L12
The kubeconfig env name and the deploymentJobs name in ci.yaml should be the same.
Update the CIOps and DIGIT-DevOps repo names with your forked repo names, and provide read-only access to the GitHub user for those repos.
You have launched Jenkins. You can access it through the sub-domain you configured in ci.yaml.