Before you proceed with the DIGIT Deployment on AWS
Install AWS CLI on your local machine so that you can use AWS CLI commands to provision and manage the cloud resources on your account.
Install AWS IAM Authenticator, which helps you authenticate your connection from your local machine so that you can deploy DIGIT services.
Once you have command-line access configured, everything is set for you to proceed with Terraform to provision the DIGIT infra-as-code.
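Before moving on, it is worth confirming that the CLI and authenticator are wired up correctly. A minimal sanity check might look like the following sketch (the credentials you enter at the prompt are your own account's):

```shell
# Configure the AWS CLI with an admin user's access key, secret, and default region
aws configure

# Verify the credentials work: prints the AWS account and IAM identity in use
aws sts get-caller-identity

# Verify the IAM authenticator binary is installed and on the PATH
aws-iam-authenticator version
```

If `aws sts get-caller-identity` returns your account ID without errors, Terraform will be able to use the same credentials.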
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The Amazon Elastic Kubernetes Service (EKS) is an AWS service for deploying, managing, and scaling distributed and containerized workloads. Here we provision the EKS cluster on AWS from the ground up using Terraform (infra-as-code) and then deploy the DIGIT platform services as config-as-code using Helm.
Know about EKS: https://www.youtube.com/watch?v=SsUnPWp5ilc
Know what is terraform: https://youtu.be/h970ZBgKINg
An AWS account with admin access to provision the EKS service. You can always subscribe to a free AWS account to learn the basics and try things out, but there is a limit to what is offered for free; this demo requires a commercial subscription to the EKS service. If you want to try it out for a day or two, it might cost you about Rs 500-1000.
Note: Post the demo, eGov internal folks can request 4-hour time-bound access to eGov's training AWS account, subject to the available number of slots per day.
Install kubectl on your local machine, which helps you interact with the Kubernetes cluster.
Install Helm, which helps you package the services along with the configurations, envs, secrets, etc. into Kubernetes manifests.
Install Terraform (version 0.14.10) for the infra-as-code (IaC) to provision cloud resources as code with the desired resource graph; it also helps you destroy the cluster in one go.
If you already have a different version of Terraform running, install tfswitch, which allows you to keep multiple Terraform versions on the same machine and toggle between the desired versions.
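With tfswitch installed, pinning the version the scripts expect is a one-liner; a quick sketch:

```shell
# Install (if needed) and switch to the Terraform version these scripts were written for
tfswitch 0.14.10

# Confirm the active version before running any terraform commands
terraform version   # should report Terraform v0.14.10
```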
Setup AWS Account
Provision infra for DIGIT on AWS using Terraform
The Amazon Elastic Kubernetes Service (EKS) is an AWS service for deploying, managing, and scaling distributed and containerized workloads. Here we provision the EKS cluster on AWS from the ground up in an automated way (infra-as-code) using Terraform, and then deploy the DIGIT services config-as-code using Helm.
Know about EKS: https://www.youtube.com/watch?v=SsUnPWp5ilc
Know what is terraform: https://youtu.be/h970ZBgKINg
Complete DIGIT Installation step-by-step Instructions across various Infra types like Public & Private Clouds
The quickstart would have helped you get your hands dirty and build the Kubernetes cluster on a local/single VM instance, which you can consider either for local development or to understand the details involved in infra and deployment.
DIGIT is an open-source eGov stack. Depending on the scale and performance required, running DIGIT in production needs advanced capabilities like HA, DRS, autoscaling, and resiliency. Most of these capabilities are provided out of the box by commercial clouds like AWS, Google, Azure, VMware, and OpenStack, and also by private clouds like NIC and a few SDC-implemented clouds. All these cloud providers offer Kubernetes as a managed service, which makes the entire infra setup and management seamless and automated, through infra-as-code and config-as-code.
Before we jump into the supported cloud providers, it is important to know that DIGIT is completely cloud agnostic, be it a commercial cloud or on-premise; the differentiator is just whether the cloud provider offers Kubernetes as a managed service or not. In the case of managed services like EKS, AKS, GKE, etc., we do not need to provision and manage the Kubernetes cluster components from the ground up; working knowledge of Kubernetes is enough. In the absence of a managed Kubernetes service, we need to first create the Kubernetes cluster itself out of the available/required number of VMs and then manage the cluster apart from running the actual workloads. To get a better understanding of Kubernetes and managed Kubernetes services, please go through the following pre-reads.
Know the basics of Kubernetes:
Know the kubectl commands
Know Kubernetes manifests:
Know how to manage env values and secrets of any service deployed in Kubernetes
Know how to port-forward to a pod running inside a k8s cluster and work locally
Know sops to secure your keys/creds:
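The pre-reads above boil down to a handful of day-to-day commands. A minimal cheat sheet follows; the namespace and service names (`egov`, `egov-user`, etc.) are placeholders to adjust to your deployment:

```shell
# List pods in a namespace
kubectl get pods -n egov

# Inspect a deployment's manifest as stored in the cluster
kubectl get deployment egov-user -n egov -o yaml

# Read env values and secrets of a deployed service
kubectl describe configmap egov-config -n egov
kubectl get secret egov-secrets -n egov -o yaml

# Port-forward a service to localhost and work against it locally
kubectl port-forward svc/egov-user 8080:8080 -n egov

# Encrypt a secrets file with sops before committing it anywhere
sops --encrypt --pgp <PGP_FINGERPRINT> secrets.yaml > secrets.enc.yaml
```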
Unlike the quickstart, a full installation requires state/user-specific configurations to be ready before proceeding with the deployment.
You need to have a fully qualified DNS name (URL); it should not be a dummy one.
Persistent storage, depending on the cloud you are using, for Kafka, ES, etc.
Either a standalone or a hosted PostgreSQL DB above v11.x.
MDMS with master data like roles, access, actions, tenants, etc. A sample is provided.
Gov services-specific configs like persister and searcher configs. A sample is provided.
GeoLocation provider configs (Google Location API), SMS Gateway, Payment Gateway, etc.
Choose your cloud and follow the Instruction to set up a Kubernetes cluster before moving on to the Deployment.
Post infra setup (Kubernetes Cluster), the deployment involves 2 stages and 2 modes. Check out the stages first and then the modes. As part of a sample exercise, we will deploy the PGR module. However, deployment steps are similar. The prerequisites have to be configured accordingly.
Each service's global and local env variables
Number of replicas/scale of individual services (depending on whether dev or prod)
MDMS and config repos (master data, ULB, tenant details, users, etc.)
SMS gateway, email gateway, payment gateway
GMap key (In case you are using Google Map services in your PGR, PT, TL, etc)
S3 Bucket for Filestore
URL/DNS on which the DIGIT will be exposed
SSL Certificate for the above URL
End-points configs (Internal/external)
Stage 2: Run the digit_setup deployment script and simply answer the questions that it asks.
All done! Wait and watch for about 10 minutes; you'll have the DIGIT setup completed and the application running on the given URL.
Essentially, DIGIT deployment means generating Kubernetes manifests for each individual service. We use a tool called Helm, which is an easy, effective, and customizable packaging and deployment solution. Depending on where and in which env you initiate the deployment, there are 2 modes in which you can deploy.
From local machine - whatever we are trying in this sample exercise so far.
Post-deployment - the application is now accessible from the configured domain.
To try out PGR employee login: create a sample tenant, city, and user, and assign the LME employee role using the seed script.
By now, we have successfully completed the DIGIT setup on the cloud. Use the URL that you mentioned in your env.yaml (e.g. https://mysetup.digit.org) and create a grievance to ensure the deployed PGR module is working fine. Refer to the product documentation below for the steps.
Credentials:
Citizen: You can use your default mobile number (9999999999) to sign in using the default Mobile OTP 123456.
Employee: Username: GRO and password: eGov@4321
Post grievance creation and its assignment to the LME, capture a screenshot and share it to ensure your setup is working fine.
After validating the PGR functionality, share the API response of the following request to assess the correctness of the DIGIT PGR deployment.
Finally, clean up the DIGIT Setup if you wish, using the following command. This will delete the entire cluster and other cloud resources that were provisioned for the DIGIT Setup.
All Done, we have successfully created infra on the cloud, deployed DIGIT, bootstrapped DIGIT, performed a transaction on PGR and finally destroyed the cluster.
Annexures:
Stage 1: Prepare an <env>.yaml master config file; you can give any name to this file. The file has the following configurations, and this env file needs to be in line with your cluster name.
credentials, secrets (you need to encrypt these using sops and create a <env>-secret.yaml separately)
Advanced: From a CI/CD system like Jenkins. Depending on how you want to set up your CI/CD and your expertise, the steps will vary; however, you can find how we have set up CI/CD on Jenkins, where the pipelines are created automatically without any manual intervention.
Run the kubectl port-forward of the egov-user service from the Kubernetes cluster to your localhost. This gives you direct access to the egov-user service, and you can now interact with its API directly.
Ensure you have Postman installed to run the following seed data API. If not, install Postman on your local machine.
If you have any questions, please write to us.
Kindly use the appropriate discussion category and labels so that issues are addressed better.
There are several ways to deploy the solution to the cloud. In this case, we will use Terraform as infra-as-code.
Terraform is an open source infrastructure as code (IaC) software tool that allows DevOps engineers to programmatically provision the physical resources an application requires to run.
Infrastructure as code is an IT practice that manages an application's underlying IT infrastructure through programming. This approach to resource allocation allows developers to logically manage, monitor and provision resources -- as opposed to requiring that an operations team manually configure each required resource.
Terraform users define and enforce infrastructure configurations by using a JSON-like configuration language called HCL (HashiCorp Configuration Language). HCL's simple syntax makes it easy for DevOps teams to provision and re-provision infrastructure across multiple cloud and on-premises data centers.
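To make HCL's declarative style concrete, a resource block pairs a resource type with a local name and a set of arguments. The values below are placeholders, not part of the DIGIT scripts:

```hcl
# A hypothetical EC2 instance declared in HCL: "aws_instance" is the
# resource type, "worker" is the local name used to reference it elsewhere.
resource "aws_instance" "worker" {
  ami           = "ami-0abcdef1234567890"  # placeholder AMI id
  instance_type = "t3.medium"

  tags = {
    Name = "digit-worker"
  }
}
```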
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by terraform to deploy DIGIT. The following picture shows the various key components. (EKS, Worker Nodes, PostGres DB, EBS Volumes, Load Balancer)
Considering the above deployment architecture, the following is the resource graph that we are going to provision using terraform in a standard way so that every time and for every env, it'll have the same infra.
EKS Control Plane (Kubernetes Master)
Worker node group (VMs with the estimated number of vCPUs and memory)
EBS Volumes (Persistent Volumes)
RDS (PostGres)
VPCs (Private network)
IAM users for access (deploy and read-only)
(Optional) Create your own keybase key before you run the terraform
Use https://keybase.io/ to create your own PGP key. This creates both a public and a private key on your machine. Upload the public key into the keybase account that you have just created, give it a name, and ensure that you mention it in your Terraform config. This allows all the sensitive information to be encrypted.
For example, in eGov's case, a keybase user "egovterraform" needs to be created and its public key uploaded here: https://keybase.io/egovterraform/pgp_keys.asc
You can use this portal to decrypt your secret key. To decrypt a PGP message, upload the PGP message, the PGP private key, and the passphrase.
Clone the DIGIT-DevOps GitHub repo, where the sample Terraform DIGIT template scripts to provision the EKS cluster are available; the structure of the files is shown below.
Ideally, one would write the Terraform script from scratch using this doc.
Here we have already written the Terraform script that one can reuse/leverage; it provisions production-grade DIGIT infra and can be customized with user-specific configuration.
Clone the following DIGIT-DevOps repo, where we have all the sample Terraform scripts available for you to leverage.
Here you will find a main.tf under each of the modules that has the provisioning definition for DIGIT resources like the EKS cluster, network, RDS, storage, etc. All of these are modularized and react as per the customized options provided. Follow the steps below to configure your Terraform and run it.
Create Terraform backend to specify the location of the backend Terraform state file on S3 and the DynamoDB table used for the state file locking. This step is optional.
Remote state simply means storing the state file remotely rather than on your local filesystem. In an enterprise project, and/or if Terraform is used by a team, it is recommended to set up and use remote state.
The following main.tf will create an S3 bucket to store the state of each execution, to keep track of it.
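Once the bucket and lock table exist, the other modules point at them through a backend block. A sketch of that block follows; the bucket, key, region, and table names are assumptions to replace with your own:

```hcl
terraform {
  backend "s3" {
    bucket         = "digit-terraform-state"              # S3 bucket holding the state file
    key            = "terraform/digit/terraform.tfstate"  # path of the state object in the bucket
    region         = "ap-south-1"
    dynamodb_table = "digit-terraform-lock"               # DynamoDB table used for state locking
    encrypt        = true
  }
}
```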
Setting up the VPC, Subnets, Security Groups, etc.
Amazon EKS requires subnets in at least two different availability zones.
Create AWS VPC (Virtual Private Cloud).
Create two public and two private Subnets in different availability zones.
Create Internet Gateway to provide internet access for services within VPC.
Create NAT Gateway in public subnets. It is used in private subnets to allow services to connect to the internet.
Create Routing Tables and associate subnets with them. Add required routing rules.
Create Security Groups and add the required inbound/outbound rules.
Create the EKS cluster. Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service. For example, EKS will create an Auto Scaling Group for each instance group if you use managed nodes.
Setting up the IAM Roles and Policies for EKS: EKS requires a few IAM Roles with relevant Policies to be pre-defined to operate correctly.
IAM Role: Create Role with the needed permissions that Amazon EKS will use to create AWS resources for Kubernetes clusters and interact with AWS APIs.
IAM Policy: Attach the trusted policy (AmazonEKSClusterPolicy), which will allow Amazon EKS to assume and use this role.
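In Terraform, that role-plus-policy pair is typically expressed as two resources; the role name below is a placeholder:

```hcl
# Role that EKS assumes to manage AWS resources on the cluster's behalf
resource "aws_iam_role" "eks_cluster" {
  name = "digit-eks-cluster-role"   # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed AmazonEKSClusterPolicy to the role
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
```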
The following main.tf contains the detailed resource definitions that need to be provisioned, please have a look at it.
Navigate to the directory: DIGIT-DevOps/Infra-as-code/terraform/sample-aws
You can define your configurations in variables.tf and provide the env-specific cloud requirements, so that using the same Terraform template you can customize the configurations.
The following are the values that you need to replace in the following files; the blank ones will be prompted for as inputs during execution.
Once you have finished declaring the resources, you can deploy all resources.
terraform init: initializes a working directory containing Terraform configuration files.
terraform plan: creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
terraform apply: executes the actions proposed in a Terraform plan to create or update infrastructure.
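In practice, the three commands are run in order from the directory that holds the configuration; a typical session (the directory path follows the repo layout mentioned in this doc) looks like:

```shell
cd DIGIT-DevOps/Infra-as-code/terraform/sample-aws

terraform init    # download providers and configure the remote backend
terraform plan    # review what will be created before touching the cloud
terraform apply   # provision; type "yes" at the confirmation prompt
```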
After the complete creation, you can see resources in your AWS account.
Now that we know what the Terraform script does, the resource graph it provisions, and what custom values should be given for your env, let's begin running the Terraform scripts to provision the infra required to deploy DIGIT on AWS.
First, cd into the following directory, run the following commands one by one, and watch the output closely.
Upon successful execution, the following resources get created, which can be verified by the command "terraform output":
s3 bucket: to store terraform state.
Network: VPC, security groups.
2. Get the kubeconfig from EKS so that you can connect to the cluster from your local machine and deploy DIGIT services to it.
3. Finally, verify that you are able to connect to the cluster by running the following command.
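With the AWS CLI this is usually a one-liner; the region and cluster name below are placeholders for the values in your terraform output:

```shell
# Merge the new cluster's credentials into ~/.kube/config
aws eks update-kubeconfig --region ap-south-1 --name digit-eks-cluster

# Verify connectivity: the worker nodes should show up with STATUS "Ready"
kubectl get nodes
```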
To destroy previously created infrastructure with Terraform, run the command below. You can do this step after you have deployed and finished with DIGIT.
terraform destroy: destroys all remote objects managed by a particular Terraform configuration.
Now that we have all the deployment configs ready, run the following command and provide the necessary details when asked; the interactive installer will take care of the rest.
What you can try deploying:
DIGIT's core platform services
PGR
TL (Trade License)
PT (Property Tax)
WS (Water & Sewerage)
etc
All DIGIT services are packaged using Helm charts (see Installing Helm).
kubectl is a CLI to connect to the Kubernetes cluster from your machine.
Install curl for making API calls.
Install the Visual Studio Code IDE for better code/configuration editing capabilities.
Install Postman to run digit bootstrap scripts
Run the egov-deployer Golang script from the DIGIT-DevOps repo.
All done! Wait and watch for about 10 minutes; you'll have the DIGIT setup completed and the application running on the given URL.
Note:
If you do not have your domain yet, you can edit the hosts file entries and map the nginx-ingress-service load balancer ID as shown below.
Add the following line to the hosts file, then save and close the file:
aws-load-balancer-id digit.try.com
If you have a GoDaddy-like account with DNS record edit access, you can map the load balancer ID to the desired DNS: create a CNAME record with the load balancer ID and the domain.
You can now check the DIGIT application status from the command prompt/terminal by using the command below.
Note: Initially, pgr-services will be in a CrashLoopBackOff state; after performing the post-deployment steps below, pgr-services will start running.
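Assuming the services were deployed into an `egov` namespace (adjust if yours differs; the `app=pgr-services` label is also an assumption), the status check would look like:

```shell
# All DIGIT pods and their current state
kubectl get pods -n egov

# Watch pgr-services specifically until it leaves CrashLoopBackOff
kubectl get pods -n egov -l app=pgr-services -w
```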
Post-deployment, now the application is accessible from the configured domain.
To try out employee login, let's create a sample tenant, city, and user, and assign the LME employee role through the seed script.
We have to do kubectl port-forwarding of the egov-user service running in the Kubernetes cluster to your localhost. This gives you access to the egov-user service directly so you can interact with its API.
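The port-forward itself is a single kubectl command; the service name, namespace, and port here are assumptions to match to your deployment:

```shell
# Forward local port 8080 to the egov-user service inside the cluster
kubectl port-forward svc/egov-user 8080:8080 -n egov

# In another terminal, the service API is now reachable on localhost
curl http://localhost:8080/   # base URL only; actual endpoints come from the seed collection
```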
2. Seed the sample data
Ensure you have Postman to run the following seed data API; if not, install Postman on your local machine.
Import the following Postman collection into Postman and run it; it contains the seed data that enables sample test users and localization data.
To test the Kubernetes operations through kubectl from your local machine, please execute the below commands.
You have successfully completed the DIGIT Infra, Deployment setup and Installed a DIGIT - PGR module.
Use the below credentials to login into the complaint section
Username: GRO
Password: eGov@4321
City: CITYA
By now, we have successfully completed the DIGIT setup on the cloud. Use the URL that you mentioned in your env.yaml (e.g. https://mysetup.digit.org) and create a grievance to ensure the deployed PGR module is working fine. Refer to the product documentation below for the steps.
Credentials:
Citizen: You can use your default mobile number (9999999999) to sign in using the default Mobile OTP 123456.
Employee: Username: GRO and password: eGov@4321
Post grievance creation and its assignment to the LME, capture a screenshot and share it to ensure your setup is working fine.
After validating the PGR functionality, share the API response of the following request to assess the correctness of the DIGIT PGR deployment.
Finally, clean up the DIGIT setup if you wish, using the following command. This will delete the entire cluster and the other cloud resources that were provisioned for the DIGIT setup.
All done; we have successfully created infra on the cloud, deployed DIGIT, bootstrapped DIGIT, performed a transaction on PGR, and finally destroyed the cluster.
The Azure Kubernetes Service (AKS) is one of the Azure services for deploying, managing, and scaling distributed and containerized workloads. Here we can provision the AKS cluster on Azure from the ground up using Terraform (infra-as-code) and then deploy the DIGIT platform services as config-as-code using Helm.
Know about AKS:
Know what is terraform:
Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
Install the Azure CLI.
Configure Terraform: Follow the directions in the referenced article.
Azure service principal: Follow the directions in the "Create the service principal" section of the referenced article. Take note of the values for appId, displayName, password, and tenant.
Install kubectl on your local machine, which helps you interact with the Kubernetes cluster.
Install Helm, which helps you package the services along with the configurations, envs, secrets, etc. into Kubernetes manifests.
Know the basics of Kubernetes:
Know the kubectl commands
Know Kubernetes manifests:
Know how to manage env values and secrets of any service deployed in Kubernetes
Know how to port-forward to a pod running inside a k8s cluster and work locally
Know sops to secure your keys/creds:
Post Kubernetes cluster setup, the deployment has 2 stages. As part of this sample exercise, we can deploy PGR and show the various configurations required; the deployment steps are similar for all other modules too, just that the prerequisites differ depending on features like the SMS gateway, payment gateway, etc.
It is important to prepare your global deployment configuration yaml file, which will contain all the necessary user-specific custom values like URL, gateways, persistent storage IDs, DB details, etc.
Navigate to the following file on your local machine from the previously cloned DevOps git repository.
After cloning the repo, cd into the DIGIT-DevOps folder and type the "code ." command; this opens the visual editor with all the files from the DIGIT-DevOps repo.
Here you need to replace the following as per your values
URL that you want to access DIGIT
SMS gateway to receive OTPs, transaction mobile notifications, etc.
MDMS and config repo URLs; this is where you provide master data, tenants, and various user/role access details.
GMap key for the location service
Payment gateway, in case you use PT, TL, etc
Update your credentials and sensitive data in the secret file as per your details.
SOPS expects an encryption key to encrypt/decrypt the specified plain text and keep the details secured. There are a couple of options you can use to generate the encryption key:
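For example, with GnuPG you can generate a PGP key pair locally and point sops at its fingerprint. Both commands below are standard GnuPG/sops usage; the file names are placeholders:

```shell
# Generate a PGP key pair (follow the interactive prompts)
gpg --full-generate-key

# List secret keys and note the key fingerprint
gpg --list-secret-keys --keyid-format long

# Encrypt the secrets file with that fingerprint
sops --encrypt --pgp <PGP_FINGERPRINT> egov-secret.yaml > egov-secret.enc.yaml
```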
You have to fork the following repos, which contain the master data and default configs (master data, ULB, tenant details, users, etc.) that you will customize for your specific implementation at a later point in time, into your respective GitHub account.
The new GitHub user should be enabled to access the earlier forked repos.
Update the deployment configs for the below as per your specification
Number of replicas/scale of each individual service (depending on whether dev or prod load)
You must update the SMS gateway, email gateway, and payment gateway details for the notification and payment gateway services, etc.
Update the config and MDMS GitHub repos wherever marked.
Update GMap key (In case you are using Google Map services in your PGR, PT, TL, etc)
URL/DNS on which the DIGIT will be exposed
SSL Certificate for the above URL
Any specific endpoints configs (Internal/external)
Annexures:
Provision infra for DIGIT on Azure using Terraform
Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment. AKS allows you to deploy and manage containerized applications without container orchestration expertise. AKS also enables you to do many common maintenance operations without taking your app offline. These operations include provisioning, upgrading, and scaling resources on demand.
There are several ways to deploy the solution to the cloud. In this case, we will use Terraform as infra-as-code.
Terraform is an open source infrastructure as code () software tool that allows DevOps engineers to programmatically provision the physical resources an application requires to run.
Infrastructure as code is an IT practice that manages an application's underlying IT infrastructure through programming. This approach to resource allocation allows developers to logically manage, monitor and provision resources -- as opposed to requiring that an operations team manually configure each required resource.
Terraform users define and enforce infrastructure configurations by using a JSON-like configuration language called HCL (HashiCorp Configuration Language). HCL's simple syntax makes it easy for DevOps teams to provision and re-provision infrastructure across multiple cloud and on-premises data centers.
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by terraform to deploy DIGIT. The following picture shows the various key components. (AKS, Node Pools, Postgres DB, Volumes, Load Balancer)
Here we have already written the Terraform script that one can reuse/leverage; it provisions production-grade DIGIT infra and can be customized with user-specific configuration.
Save the file and exit the editor
Once you have finished declaring the resources, you can deploy all resources.
terraform init: initializes a working directory containing Terraform configuration files.
terraform plan: creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
terraform apply: executes the actions proposed in a Terraform plan to create or update infrastructure.
After the complete creation, you can see resources in your Azure account.
Now that we know what the Terraform script does, the resource graph it provisions, and what custom values should be given for your env, let's begin running the Terraform scripts to provision the infra required to deploy DIGIT on Azure.
First, cd into the following directory, run the following commands one by one, and watch the output closely.
6. Test the Kubernetes cluster
The Kubernetes tools can be used to verify the newly created cluster.
1. Once the terraform apply execution is done, it will generate the Kubernetes configuration file, or you can get it from the Terraform state.
2. Set an environment variable so that kubectl picks up the correct config.
3. Verify the health of the cluster.
You should see the details of your worker nodes, and they should all have a status Ready, as shown in the following image:
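The two verification steps above can be sketched as shell commands; the kubeconfig path is a placeholder for whatever your terraform output wrote:

```shell
# Point kubectl at the kubeconfig generated by terraform
export KUBECONFIG="$PWD/kubeconfig"   # replace with your generated file's path

# Cluster health: all worker nodes should report STATUS "Ready"
kubectl get nodes
```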
Clone the following repo to your machine (if not already done as part of the infra setup).
Update the deployment config file with your details; you can use the following template.
Important: You have to add the AWS volume IDs and the zone that you got as Terraform output (Kafka, ZK, Elasticsearch, etc.).
credentials, secrets (you need to encrypt these using sops and create a <env>-secret.yaml separately)
Option 1: Generate PGP keys
Option 2: Use AWS KMS when you want to use the AWS cloud provider.
Once you generate your encryption key, create a .sops.yaml configuration file under the helm directory of the cloned repo to define which keys are used for which specific files. Refer to the sops documentation for more info.
Note: For demo purposes, you can use the config as is without the sops configuration, but make sure you update your specific details like Git SSH, URL, etc. When you decide to push these configurations into any git or public space, please make sure you follow the sops configuration mentioned in this article to encrypt your secrets.
Fork both the MDMS and config repos into your GitHub account.
Once you fork the repos into your GitHub account, create a GitHub user and generate an SSH authentication key pair (public and private).
Add the SSH private key that you generated in the previous step under the git-sync section.
Modify the services' git-sync repo and branch with your forked repo and branch.
You must create one private S3 bucket for the filestore and one public bucket for logos, add the bucket details respectively, and create an IAM user with access to the S3 buckets. Add the IAM user details to the config.
Ideally, one would write the Terraform script from scratch using this doc.
Clone the following repo, where we have all the sample Terraform scripts available for you to leverage.
2. Change the configuration according to your requirements.
3. Declare the variables in variables.tf.
4. Create a Terraform output file (output.tf) and paste the following code into the file.
Coming soon ...
On National Informatics Cloud (NIC)
Coming soon .....
Running Kubernetes on-premise gives a cloud-native experience on SDC when it comes to Deploying DIGIT.
Whether States have their own on-premise data centre or have decided to forego the various managed cloud solutions, there are a few things one should know when getting started with on-premise K8s.
One should be familiar with Kubernetes: the control plane consists of the kube-apiserver, kube-scheduler, kube-controller-manager, and an etcd datastore. For managed cloud solutions like Google's Kubernetes Engine (GKE) or Azure's Kubernetes Service (AKS), it also includes the cloud-controller-manager. This is the component that connects the cluster to the external cloud services to provide networking, storage, authentication, and other feature support.
To successfully deploy a bespoke Kubernetes cluster and achieve a cloud-like experience on SDC, one needs to replicate all the features you get with a managed solution. At a high level this means that we probably want to:
Automate the deployment process
Choose a networking solution
Choose the right storage solution
Handle security and authentication
Let us look at each of these challenges individually, and we’ll try to provide enough of an overview to aid you in getting started.
Using a tool like Ansible can make deploying Kubernetes clusters on-premise trivial.
When deciding to manage your own Kubernetes clusters, we need to set up a few proof-of-concept (PoC) clusters to learn how everything works, perform performance and conformance tests, and try out different configuration options.
After this phase, automating the deployment process is an important if not necessary step to ensure consistency across any clusters you build. For this, you have a few options, but the most popular are:
kubeadm: a low-level tool that helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices
kubespray: an ansible playbook that helps deploy production-ready clusters
If you are already using Ansible, kubespray is a great option; otherwise, we recommend writing automation around kubeadm with your preferred playbook tool after using it a few times. This will also increase your confidence in and knowledge of the tooling surrounding Kubernetes.
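As a sketch of what such automation wraps, a minimal single-control-plane kubeadm bootstrap looks roughly like this; the pod CIDR is an assumption that must match your chosen CNI plugin:

```shell
# On the control-plane node: bootstrap the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current (non-root) user
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# On each worker node: join using the token printed by kubeadm init, e.g.
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```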