This document aims to put together all the items which will enable us to come up with a proper training plan for a partner team that will be working on the eDCR service used for plan scrutiny.
Listed below are the technical skill sets required to work on the eDCR service. The team planning to attend the training is expected to be well versed with the mentioned technologies before attending the eGov training sessions.
Java and REST APIS
Postgres
Maven
Spring framework
Basics of 2D CAD Drawings
Git
Postman
YAML/JSON
Strong working knowledge of Linux, commands, VM instances, networking, storage
Session, cache and token handling (Redis server)
Understanding of VM types, Linux OS types, LoadBalancer, VPC, Subnets, Security Groups, Firewall, Routing, DNS
Experience setting up CI tools like Jenkins and creating pipelines
Artifactory - Nexus, Verdaccio, DockerHub, etc.
Experience in setting up SSL certificates and renewing them
GitOps, Git branching, PR review process, rules, hooks, etc.
JBoss Wildfly, Apache, Nginx, Redis and Postgres
Trainees are expected to have laptops/desktops configured as mentioned below, with all the software required to run the eDCR service application
Laptop for hands-on training with 16GB RAM and OS preferably Ubuntu
All developers need to have Git ids
Install VSCode/IntelliJ/Eclipse
Install Git
Install JDK 8 update 112 or higher
Install maven v3.2.x
Install PostgreSQL v9.6
Postman
Install LibreCAD
Application Server JBoss Wildfly v11.x
There are knowledge assets available on the internet for general topics and eGov assets for DIGIT services. Here you can find references to each of the topics of importance. Trainees are required to do a self-study of all the software mentioned in the prerequisites using the reference materials shared.
Topic
Reference
Preparedness Check
Git
Do you have a Git account?
Do you know how to clone a repository, pull updates, push updates?
Do you know how to give a pull request and merge the pull request?
Postgres
How to create database and set up privileges?
How to add index on table?
How to use aggregation functions in psql?
Postman
Call a REST API from Postman with proper payload and show the response
Set up any service locally (MDMS or user service has the least dependencies) and check the APIs using Postman
REST APIs
What are the principles to be followed when making a REST API?
When to use POST and GET?
How to define the request and response parameters?
JSON
How to write filters to extract specific data using jsonPaths?
YAML
How to read an API contract using swagger?
Maven
What is POM?
What is the purpose of maven clean install and how to do it?
What is the difference between version and SNAPSHOT?
eDCR Approach Guide
How to configure and customize the eDCR engine as per the state/city rules and regulations.
eDCR Service setup
Overall flow of the eDCR service, design and setup process
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
An overview of the prerequisites to setup DIGIT and some of the key capabilities to understand before provisioning the infra and deploy DIGIT.
DIGIT is the largest urban governance platform, built to support billions of transactions between citizens and state governments through various municipal services and integrations. The platform is built with key capabilities like scale, speed, integration, configurability, customizability, extensibility, multi-tenancy and security. Here, we discuss the key requirements and capabilities.
Before proceeding to set up DIGIT, it is essential to know some of the key technical details about DIGIT, like architecture, tech stack, how it is packaged and deployed on various infrastructure. Some of these details are explained in the previous sections. Below are some of the key capabilities to know about DIGIT as a platform.
DIGIT is a collection of various services built as RESTful APIs following the OpenAPI standard
DIGIT is built on a Microservices Architecture (MSA)
DIGIT services are packaged as containers and deployed as Docker images
DIGIT is deployed on Kubernetes, which abstracts any cloud/infra into a standardised infrastructure suitable for DIGIT deployment
DIGIT deployment configuration and customization are done through Helm charts
Kubernetes cluster setup is done through infra-as-code like Terraform/Ansible, as suitable
The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface description for REST APIs, which allows both humans and computers to discover and understand the capabilities of a service without requiring access to source code, additional documentation, or inspection of network traffic. When properly defined via OpenAPI, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interface descriptions have done for lower-level programming, the OpenAPI Specification removes guesswork in calling a service.
Microservices break a large application into smaller units that can be independently developed, enhanced and scaled, as a categorized and layered stack that gives better control over each component. Each component exists in its own container and is independently managed and updated. This means developers can build applications from multiple components and program each component in the language best suited to its function, rather than having to choose a single less-than-ideal language for everything. Optimizing software down to the components of the application helps increase the quality of your products, and no time or resources are wasted managing the effects that updating one application has on another.
Comparatively the best infra choice for running a microservices application architecture is application containers. Containers encapsulate a lightweight runtime environment for the application, presenting a consistent environment that can follow the application from the developer's desktop to testing to final production deployment, and you can run containers on cloud infra with physical or virtual machines.
As most modern software developers can attest, containers have provided us with dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure. Kubernetes allows you to deploy cloud-native applications anywhere and manage them exactly as you like everywhere. For more details refer to the above link that explains various advantages of Kubernetes.
Kubernetes, the popular container orchestration system, is used extensively. However, it can become complex: you have to handle all of the objects (ConfigMaps, pods, etc.) and also manage releases. Both can be accomplished with Kubernetes Helm, a Kubernetes package manager designed to easily package, configure and deploy applications and services onto Kubernetes clusters in a standard way. This helps the ecosystem adopt a standard way of deployment and customization.
To be successful in the DIGIT setup, here are certain requirements that need to be ascertained:
Scheduled Job Handling
API Gateway
Container Management
Resource and Storage Handling
Fault Tolerance
Load Balancing
Distributed Metrics
Application Runtime and Packaging
App Deployment
Configuration Management
Service Discovery
CI / CD
Virtualization
Hardware & Storage
OS & Networking
SSL Configuration
Infra-as-code
Dockers
DNS Configuration
GitOps
SecOps
On-premise/private cloud accounts
Interface to access and provision required infra
In the case of SDC, NIC or private DC, it'll be VPN to an allocated VLAN
SSH access to the VMs/machines
Infra Skills
Public cloud
Managed Kubernetes service like AKS or EKS or GKE on Azure, AWS and GCP respectively
Private Clouds (SDC, NIC)
Clouds like VMware, OpenStack, Nutanix and others may or may not have Kubernetes as a managed service. If they do, we may have to estimate only the worker nodes, depending on the number of ULBs and the DIGIT municipal services that you opt for.
In the absence of the above, you have to provision a Kubernetes cluster from plain VMs as per the general Kubernetes setup instructions and add worker nodes.
Operations Skills
Understanding of Linux, containers, VM Instances, Load Balancers, Security Groups/Firewalls, Nginx, DB Instance, Data Volumes
Experience of Kubernetes, Docker, Jenkins, Helm, Infra-as-code, Terraform
Experience in DevOps/SRE practice on Microservices and modern infrastructure
Provisioning the Kubernetes Cluster in any of the
Private State datacenter (SDC) or
National Cloud (NIC)
Setting up the persistent disk volumes to attach to DIGIT backbone stateful containers like
ZooKeeper
Kafka
Elastic Search
Setting up the Postgres DB
On a public cloud, provision a managed Postgres instance (RDS-like).
On a private cloud, provision a Postgres DB on a VM with backup and HA/DR.
Preparing Deployment configuration for required DIGIT services using Helm Templates from the InfraOps like the following
Preparing DIGIT Service helm templates to deploy on Kubernetes cluster
K8s Secrets
K8s ConfigMaps
Environment variables of each microservices
Deploy the stable released version of DIGIT and Required services
Setting up Jenkins Job to build, bake images and deploy the components for the rolling updates
Kubernetes has changed the way organizations deploy and run their applications, and it has created a significant shift in mindsets. While it has already gained a lot of popularity and more and more organizations are embracing the change, running Kubernetes in production requires care.
Although Kubernetes is open source and does have its share of vulnerabilities, making the right architectural decisions can prevent a disaster from happening.
You need to have a deep level of understanding of how Kubernetes works and how to enforce the best practices so that you can run a secure, highly available, production-ready Kubernetes cluster.
Although Kubernetes is a robust container orchestration platform, the sheer level of complexity with multiple moving parts can overwhelm administrators.
That is the reason why Kubernetes has a large attack surface, and, therefore, hardening of the cluster is an absolute must if you are to run Kubernetes in production.
There are a massive number of configurations in K8s, and while you can configure a few things correctly, the chances are that you might misconfigure a few things.
I will describe a few best practices that you can adopt if you are running Kubernetes in production. Let’s find out.
If you are running your Kubernetes cluster in the cloud, consider using a managed Kubernetes cluster such as Google Kubernetes Engine or Azure Kubernetes Service.
A managed cluster comes with some level of hardening already in place, and, therefore, there are fewer chances to misconfigure things. A managed cluster also makes upgrades easy, and sometimes automatic. It helps you manage your cluster with ease and provides monitoring and alerting out of the box.
Since Kubernetes is open source, vulnerabilities are discovered and disclosed quickly, and security patches are released regularly. You need to ensure that your cluster is up to date with the latest security patches and, for that, add an upgrade schedule to your standard operating procedure.
Having a CI/CD pipeline that runs periodically for executing rolling updates for your cluster is a plus. You would not need to check for upgrades manually, and rolling updates would cause minimal disruption and downtime; also, there would be fewer chances to make mistakes.
That would make upgrades less of a pain. If you are using a managed Kubernetes cluster, your cloud provider can cover this aspect for you.
It goes without saying that you should patch and harden the operating system of your Kubernetes nodes. This would ensure that an attacker would have the least attack surface possible.
You should upgrade your OS regularly and ensure that it is up to date.
Kubernetes post version 1.6 has role-based access control (RBAC) enabled by default. Ensure that your cluster has this enabled.
You also need to ensure that legacy attribute-based access control (ABAC) is disabled. Enforcing RBAC gives you several advantages as you can now control who can access your cluster and ensure that the right people have the right set of permissions.
RBAC does not end with securing access to the cluster for kubectl clients; it also covers pods running within the cluster, nodes, proxies, the scheduler, and volume plugins.
Only provide the required access to service accounts and ensure that the API server authenticates and authorizes them every time they make a request.
Running your API server on plain HTTP in production is a terrible idea. It opens your cluster to man-in-the-middle attacks and would open up multiple security holes.
Always use transport layer security (TLS) to ensure that communication between Kubectl clients and the API server is secure and encrypted.
Be aware of any non-TLS ports you expose for managing your cluster. Also ensure that internal clients such as pods running within the cluster, nodes, proxies, scheduler, and volume plugins use TLS to interact with the API server.
While it might be tempting to create all resources within your default namespace, using namespaces gives you tons of advantages. Not only will you be able to segregate your resources into logical groups, but you will also be able to define security boundaries for resources in namespaces.
Namespaces logically behave as a separate cluster within Kubernetes. You might want to create namespaces based on teams, or based on the type of resources, projects, or customers depending on your use case.
After that, you can do clever stuff like defining resource quotas, limit ranges, user permissions, and RBAC on the namespace layer.
Avoid binding ClusterRoles to users and service accounts, instead provide them namespace roles so that users have access only to their namespace and do not unintentionally misconfigure someone else’s resources.
Cluster Role and Namespace Role Bindings
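As an illustration, a namespace-scoped Role and RoleBinding of this kind might look like the following sketch; the namespace, user and rules are hypothetical.

```bash
cat <<EOF | kubectl apply -f -
# Role scoped to the team-a namespace (names and rules are illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: developer
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the role to a single user within the same namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: developer-binding
subjects:
- kind: User
  name: priya                      # hypothetical user account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
EOF
```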
You can use Kubernetes network policies that work as firewalls within your cluster. That would ensure that an attacker who gained access to a pod (especially the ones exposed externally) would not be able to access other pods from it.
You can create Ingress and Egress rules to allow traffic from the desired source to the desired target and deny everything else.
Kubernetes Network Policy
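For example, a default-deny ingress policy combined with a narrow allow rule could be sketched as follows; the namespace and pod labels are hypothetical.

```bash
cat <<EOF | kubectl apply -f -
# Deny all ingress traffic to pods in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
# Allow only frontend pods to reach backend pods on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF
```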
By default, when you boot your cluster through kubeadm, you get access to the kubernetes-admin config file, which is the superuser for performing all activities within your cluster.
Do not share this file within your team; instead, create a separate user account for every user and provide only the right access to them. Bear in mind that Kubernetes does not maintain an internal user directory, and therefore, you need to ensure that you have the right solution in place to create and manage your users.
Once you create the user, you can generate a private key and a certificate signing request for the user, and Kubernetes will sign the request and issue a client certificate for the user using the cluster CA.
You can then securely share the signed certificate with the user, who can use it with kubectl to authenticate with the API server securely.
Configuring User Accounts
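A minimal sketch of this flow using openssl and the certificates.k8s.io/v1 API; the user name and group are hypothetical.

```bash
# Generate a private key and CSR for the new user (hypothetical user "priya" in group "dev")
openssl genrsa -out priya.key 2048
openssl req -new -key priya.key -out priya.csr -subj "/CN=priya/O=dev"

# Submit the CSR to Kubernetes (the CSR file is base64-encoded into the request)
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: priya
spec:
  request: $(base64 -w0 < priya.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

# Approve the request and retrieve the signed client certificate
kubectl certificate approve priya
kubectl get csr priya -o jsonpath='{.status.certificate}' | base64 -d > priya.crt
```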
You can provide granular access to user and service accounts with RBAC. Let us consider a typical organization where you can have multiple roles, such as:
Application developers — These need access only to a namespace and not the entire cluster. Ensure that you provide them with access only to deploy their applications and troubleshoot them within their namespace. You might want application developers to have access to spin up only ClusterIP services, and grant network administrators the permission to define ingresses for them.
Network administrators — You can provide network admins access to networking features such as ingresses, and privileges to spin up external services.
Cluster administrators — These are sysadmins whose main job is to administer the entire cluster. These are the only people that should have cluster-admin access and only the amount that is necessary for them to do their roles.
The above is not etched in stone, and you can have a different organization policy and different roles, but the only thing to keep in mind here is that you need to enforce the principle of least privilege.
That means that individuals and teams should have only the right amount of access they need to perform their job, nothing less and nothing more.
It does not stop with just issuing separate user accounts and using TLS to authenticate with the API server. It is an absolute must that you frequently rotate and issue credentials to your users.
Set up an automated system that periodically revokes the old TLS certificates and issues new ones to your user. That helps as you don’t want attackers to get hold of a TLS cert or a token and then make use of it indefinitely.
A bootstrap token, for example, needs to be revoked as soon as you finish with your activity. You can also make use of a credential management system such as HashiCorp Vault which can issue you with credentials when you need them and revoke them when you finish with your work.
Imagine a scenario where an externally exposed web application is compromised, and someone has gained access to the pod. In that scenario, they would be able to access the secrets (such as private keys) and target the entire system.
The way to protect from this kind of attack is to have a sidecar container that stores the private key and responds to signing requests from the main container.
In case someone gets access to your login microservice, they would not be able to gain access to your private key, and therefore, it would not be a straightforward attack, giving you valuable time to protect yourself.
Partitioned Approach
The last thing you would want as a cluster-admin is a situation where poorly written microservice code with a memory leak takes over a cluster node, causing the Kubernetes cluster to crash. That is an extremely important and generally ignored area.
You can add a resource limit and requests on the pod level as a developer or the namespace as an administrator. You can use resource quotas to limit the amount of CPU, memory, or persistent disk a namespace can allocate.
It can also allow you to limit the number of pods, volumes, or services you can spin within a namespace. You can also make use of limit ranges that provide you with a minimum and maximum size of resources every unit of the cluster within the namespace can request.
That will limit users from seeking an unusually large amount of resources such as memory and CPU.
Specifying a default resource limit and request on a namespace level is generally a good idea as developers aren’t perfect. If they forget to specify a limit, then the default limit and requests would protect you from resource overrun.
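A sketch of a ResourceQuota and a LimitRange with default limits for a namespace; all values are illustrative.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    default:            # applied when a container specifies no limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied when a container specifies no requests
      cpu: 100m
      memory: 128Mi
EOF
```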
The ETCD datastore is the primary source of data for your Kubernetes cluster. That is where all cluster information and the expected configuration is stored.
If someone gains access to your ETCD database, all security measures will go down the drain. They will have full control of your cluster, and they can do what they want by modifying state in your ETCD datastore.
You should always ensure that only the API server can communicate with the ETCD datastore and only through TLS using a secure mutual auth. You can put your ETCD nodes behind a firewall and block all traffic except the ones originating from the API server.
Do not use the master ETCD for any other purpose but for managing your Kubernetes cluster and do not provide any other component access to the ETCD cluster.
Enable encryption of your secret data at rest. That is extremely important so that if someone gets access to your ETCD cluster, they cannot view your secrets by simply doing a hex dump of the datastore.
Containers run on nodes and therefore have some level of access to the host file system. However, the best way to reduce the attack surface is to architect your application in such a way that containers do not need to run as root.
Use pod security policies to restrict the pod to access HostPath volumes as that might result in getting access to the host filesystem. Administrators can use a restrictive pod policy so that anyone who gained access to one pod should not be able to access another pod from there.
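For instance, a pod can declare a non-root security context so that its containers are refused if they try to run as root; the image and names below are illustrative.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo
spec:
  securityContext:
    runAsNonRoot: true      # kubelet refuses containers that would run as UID 0
    runAsUser: 1000
  containers:
  - name: app
    image: busybox:1.36     # illustrative image
    command: ["sleep", "3600"]
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
EOF
```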
Audit logging is now a beta feature in Kubernetes, and I recommend you make use of it. That would help you troubleshoot and investigate what happened in case of an attack.
As a cluster-admin dealing with a security incident, the last thing you would want is that you are unaware of what exactly happened with your cluster and who has done what.
Remember that the above are just some general best practices and they are not exhaustive. You are free to adjust and make changes based on your use case and ways of working for your team.
This page discusses the infrastructure requirements for DIGIT services. It also explains why DIGIT services are containerised and deployed on Kubernetes.
DIGIT infra is abstracted with Kubernetes, an open-source container orchestration platform that abstracts the variety of infra types available across states, like physical servers, VMs, on-premise clouds (VMware, OpenStack, Nutanix, etc.), commercial clouds (Google, AWS, Azure, etc.), SDC and NIC, into a standard infra type. Essentially, it unifies various infra types into a single standard infrastructure, making DIGIT multi-cloud supported, portable, extensible, high-performing and scalable as containerized workloads and services. This facilitates both declarative configuration and automation. Kubernetes services, ecosystem, support and tools are widely available.
Kubernetes itself is a set of components that handle designated jobs such as scheduling, controlling and monitoring.
3 or more machines running one of:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Container Linux (tested with 1576.4.0)
4 GB or more of RAM per machine (any less will leave little room for your apps)
2 CPUs or more
3 or more machines running one of:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Container Linux (tested with 1576.4.0)
2 GB or more of RAM per machine (any less will leave little room for your apps)
2 CPUs or more
Full network connectivity between all machines in the cluster (public or private network is fine)
Unique hostname, MAC address, and product_uuid for every node. Click here for more details.
Certain ports are open on your machines. See below for more details
Swap disabled. You MUST disable swap in order for the Kubelet to work properly
Verify that the MAC address and product_uuid are unique for every node. You can get the MAC address of the network interfaces using the command ip link or ifconfig -a. The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid.
It is very likely that hardware devices will have unique addresses, although some virtual machines may have identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique to each node, the installation process may fail.
If you have more than one network adapter, and your Kubernetes components are not reachable on the default route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
Protocol | Direction | Port Range | Purpose

Master node(s):
TCP | Inbound | 6443* | Kubernetes API server
TCP | Inbound | 2379-2380 | etcd server client API
TCP | Inbound | 10250 | kubelet API
TCP | Inbound | 10251 | kube-scheduler
TCP | Inbound | 10252 | kube-controller-manager
TCP | Inbound | 10255 | Read-only kubelet API

Worker node(s):
TCP | Inbound | 10250 | kubelet API
TCP | Inbound | 10255 | Read-only kubelet API
TCP | Inbound | 30000-32767 | NodePort Services**
** Default port range for NodePort Services.
Any port numbers marked with * are overridable, so you will need to ensure any custom ports you provide are also open.
Systems | Specification | Spec/Count | Comment
User Accounts/VPN | Dev, UAT and Prod envs | 3 |
User Roles | Admin, Deploy, ReadOnly | 3 |
OS | Any Linux (preferably Ubuntu/RHEL) | All |
Kubernetes as a managed service or VMs to provision Kubernetes | Managed Kubernetes service with HA/DRS, (or) VMs with 2 vCore, 4 GB RAM, 20 GB disk if no managed k8s | 3 VMs/env | Dev - 3 VMs, UAT - 3 VMs, Prod - 3 VMs
Kubernetes worker nodes or VMs to provision Kube worker nodes | VMs with 4 vCore, 16 GB RAM, 20 GB disk per env | 3-5 VMs/env | Dev - 3 VMs, UAT - 4 VMs, Prod - 5 VMs
Storage (NFS/iSCSI) | Storage with backup, snapshot, dynamic inc/dec | 1 TB/env | Dev - 1000 GB, UAT - 800 GB, Prod - 1.5 TB
VM Instance IOPS | Max throughput 1750 MB/s | 1750 MB/s |
Storage IOPS | Max throughput 1000 MB/s | 1000 MB/s |
Internet Speed | Min 100 MB - 1000 MB/sec (dedicated bandwidth) | |
Public IP/NAT or LB | Internet-facing, 1 public IP per env | 3 | 3 IPs
Availability Region | VMs from different regions are preferable for DRS/HA | At least 2 regions |
Private vLAN | Per env, all VMs should be within a private vLAN | 3 |
Gateways | NAT gateway, internet gateway, payment and SMS gateway, etc. | 1 per env |
Firewall | Ability to configure inbound, outbound ports/rules | |
Managed database (or) VM instance | Postgres 12 and above managed DB with backup, snapshot, logging, (or) 1 VM with 4 vCore, 16 GB RAM, 100 GB disk per env | Per env | Dev - 1 VM, UAT - 1 VM, Prod - 2 VMs
CI/CD server self-hosted (or) managed DevOps | Self-hosted Jenkins: master, slave (VM 4 vCore, 8 GB each), (or) managed CI/CD: NIC DevOps or AWS CodeDeploy or Azure DevOps | 2 VMs (master, slave) |
Nexus Repo | Self-hosted artifactory repo (or) NIC Nexus artifactory | 1 |
Docker Registry | DockerHub (or) self-hosted private docker registry | 1 |
Git/SCM | GitHub (or) any source control tool | 1 |
DNS | Main domain and ability to add more sub-domains | 1 |
SSL Certificate | NIC managed (or) SDC managed SSL certificate per URL | 2 URLs per env |
This document aims to put together all the items which will enable us to come up with a proper training plan for a partner team that will be working on the DIGIT platform.
Listed below are the technical skill sets required to work on the DIGIT stack. It is expected that the team planning to attend the training is well versed with the mentioned technologies before they attend eGov training sessions.
Open API Contract - Swagger2.0
YAML/JSON
Postman
Postgres
Java and REST APIS
Basics of Elasticsearch
Maven
Springboot
Kafka
Zuul
NodeJS, ReactJS
WordPress
PHP
Understanding of the microservice architecture.
Experience of AWS, Azure, GCP, NIC Cloud.
Strong working knowledge of Linux, command, VM Instances, networking, storage.
Creating Kubernetes clusters on AWS, Azure, GCP or NIC Cloud.
Kubectl installation & commands (apply, get, edit, describe k8s objects)
Terraform for infra-as-code for cluster or VM provisioning.
Understanding of VM types, Linux OS types, LoadBalancer, VPC, Subnets, Security Groups, Firewall, Routing, DNS.
Experience setting up CI tools like Jenkins and creating pipelines.
Deployment strategies - Rolling updates, Canary, Blue/Green.
Scripting - Shell, Groovy, Python and GoLang.
Experience in baking containers and Docker images.
Artifactory - Nexus, Verdaccio, DockerHub, etc.
Experience with Kubernetes ingress, setting up SSL certificates and renewal
Understanding of the Zuul gateway
GitOps, Git branching, PR review process, rules, hooks, etc.
Experience in Helm, packaging and deploying.
JBoss Wildfly, Apache, Nginx, Redis and Postgres.
Trainees are expected to have laptops/desktops configured as mentioned below, with all the software required to run the DIGIT application
Laptop for hands-on training with 16GB RAM and OS preferably Ubuntu
All developers need to have Git ids
Install VSCode/IntelliJ/Eclipse
Postman
There are knowledge assets available on the internet for general topics and eGov assets for DIGIT services. Here you can find references to each of the topics of importance. Trainees are required to do a self-study of all the software mentioned in the prerequisites using the reference materials shared.
This section discusses the supported cloud environment for DIGIT services. It provides information on where and how DIGIT is deployed. Further, it offers guidelines on estimating the infrastructural requirements for cloud support.
Supported Cloud List
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Download the file below to view the DIGIT Rollout Program Governance structure and process details.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The Kubernetes vSphere driver contains bugs related to detaching volumes from offline nodes. See the section for more details.
When creating worker nodes for a user cluster, the user can specify an existing image. Defaults may be set in the .
Supported operating systems
Ubuntu 18.04
CoreOS
CentOS 7
Go into the vSphere WebUI, select your data centre, right-click on it and choose “Deploy OVF Template”
Fill in the “URL” field with the appropriate URL
Click through the dialogue until “Select storage”
Select the same storage you want to use for your machines
Select the same network you want to use for your machines
Leave everything in the “Customize Template” and “Ready to complete” dialogue as it is
Wait until the VM is fully imported and the “Snapshots” => “Create Snapshot” button is no longer greyed out.
The template VM must have the disk.enableUUID flag set to 1; this can be done with the following command:
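A sketch of how this could be done, assuming the govc CLI is used; the VM path is illustrative.

```bash
# Set disk.enableUUID=1 as an ExtraConfig option on the template VM (path is illustrative)
govc vm.change -vm '/dc-1/vm/templates/ubuntu-template' -e disk.enableUUID=1
```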
Convert it to vmdk: qemu-img convert -f qcow2 -O vmdk CentOS-7-x86_64-GenericCloud.qcow2 CentOS-7-x86_64-GenericCloud.vmdk
Upload it to a Datastore of your vSphere installation
Create a new virtual machine that uses the uploaded vmdk as rootdisk.
Modifications like Network, disk size, etc. must be done in the ova template before creating a worker node from it. If user clusters have dedicated networks, all user clusters, therefore, need a custom template.
Kubernetes needs to talk to vSphere to enable storage inside the cluster. For this, Kubernetes needs a config called cloud-config.
This config contains all the details to connect to a vCenter installation, including credentials.
As this config must also be deployed onto each worker node of a user cluster, it is recommended to have individual credentials for each user cluster.
The VSphere user must have the following permissions on the correct resources
Role k8c-storage-vmfolder-propagate
Granted at VM Folder and Template Folder, propagated
Permissions
Virtual machine
Change Configuration
Add existing disk
Add new disk
Add or remove the device
Remove disk
Folder
Create folder
Delete folder
Role k8c-storage-datastore-propagate
Granted at Datastore, propagated
Permissions
Datastore
Allocate space
Low-level file operations
Role Read-only
(predefined)
Granted at …, not propagated
Datacenter
Role k8c-user-vcenter
Granted at vcentre level, not propagated
Needed to customize VM during provisioning
Permissions
VirtualMachine
Provisioning
Modify customization specification
Read customization specifications
Role k8c-user-datacenter
Granted at datacentre level, not propagated
Needed for cloning the template VM (obviously this is not done in a folder at this time)
Permissions
Datastore
Allocate space
Browse datastore
Low-level file operations
Remove file
vApp
vApp application configuration
vApp instance configuration
Virtual Machine
Change CPU count
Memory
Settings
Inventory
Create from existing
Role k8c-user-cluster-propagate
Granted at the cluster level, propagated
Needed for upload of cloud-init.iso
(Ubuntu and CentOS) or defining the Ignition config into Guestinfo (CoreOS)
Permissions
Host
Configuration
System Management
Local operations
Reconfigure virtual machine
Resource
Assign virtual machine to the resource pool
Migrate powered-off virtual machine
Migrate powered-on virtual machine
vApp
vApp application configuration
vApp instance configuration
Role k8s-network-attach
Granted for each network that should be used
Permissions
Network
Assign network
Role k8c-user-datastore-propagate
Granted at datastore/datastore cluster level, propagated
Permissions
Datastore
Allocate space
Browse datastore
Low-level file operations
Role k8c-user-folder-propagate
Granted at VM Folder and Template Folder level, propagated
Needed for managing the node VMs
Permissions
Folder
Create folder
Delete folder
Global
Set custom attribute
Virtual machine
Change Configuration
Edit Inventory
Guest operations
Interaction
Provisioning
Snapshot management
The described permissions have been tested with vSphere 6.7 and might be different for other vSphere versions.
After a node is powered-off, the Kubernetes vSphere driver doesn’t detach disks associated with PVCs mounted on that node. This makes it impossible to reschedule pods using these PVCs until the disks are manually detached in vCenter.
Upstream Kubernetes has been working on the issue for a long time now and tracking it under the following tickets:
This page explains why Kubernetes is required. It deep dives into the key benefits of using Kubernetes to run a large containerized platform like DIGIT in production environments.
The Kubernetes project started in the year 2014 with . Kubernetes has now become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments. The largest public cloud platforms , , , and now provide managed services for Kubernetes. A few years back, RedHat, Mesosphere, Pivotal, VMware and Nutanix completely redesigned their implementations with Kubernetes and collaborated with the Kubernetes community to implement the next generation container platform, incorporating key features of Kubernetes such as container grouping, overlay networking, layer 4 routing, secrets, etc. Today many organizations and technology providers are adopting Kubernetes at a rapid pace.
One of the fundamental design decisions which have been taken by this impeccable cluster manager is its ability to deploy existing applications that run on VMs without any changes to the application code. On the high level, any application that runs on VMs can be deployed on Kubernetes by simply containerizing its components. This is achieved by its core features; container grouping, container orchestration, overlay networking, container-to-container routing with layer 4 virtual IP based routing system, service discovery, support for running daemons, deploying stateful application components, and most importantly the ability to extend the container orchestrator for supporting complex orchestration requirements.
At a very high level, Kubernetes provides a set of dynamically scalable hosts for running workloads using containers and uses a set of management hosts called masters for providing an API for managing the entire container infrastructure.
That's just a glimpse of what Kubernetes provides out of the box. In the next few sections, we will go through its core features and explain how applications can be deployed on it in no time.
A containerized application can be deployed on Kubernetes using a deployment definition by executing a simple CLI command as follows:
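A minimal example, assuming an illustrative deployment name and image (the --replicas flag requires a reasonably recent kubectl):

```bash
# Create a deployment with three replicas from a container image
kubectl create deployment my-app --image=nginx:1.25 --replicas=3

# List the pods created by the deployment's ReplicaSet
kubectl get pods -l app=my-app
```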
One of the key features of Kubernetes is its service discovery and internal routing model provided using SkyDNS and layer 4 virtual IP based routing system. These features provide internal routing for application requests using services. A set of pods created via a replica set can be load balanced using a service within the cluster network. The services get connected to pods using selector labels. Each service will get assigned a unique IP address, a hostname derived from its name and route requests among the pods in round-robin manner. The services will even provide IP-hash based routing mechanism for applications which may require session affinity. A service can define a collection of ports and the properties defined for the given service will apply to all the ports in the same way. Therefore, in a scenario where session affinity is only needed for a given port where all the other ports required to use round-robin based routing, multiple services may need to be used.
Kubernetes services have been implemented using a component called kube-proxy. A kube-proxy instance runs in each node and provides three proxy modes: Userspace, iptables and IPVS. The current default is iptables.
In the first proxy mode: userspace, kube-proxy itself will act as a proxy server and delegate requests accepted by an iptable rule to the backend pods. In this mode, kube-proxy will operate in the userspace and will add an additional hop to the message flow.
In the second proxy mode: iptables, the kube-proxy will create a collection of iptable rules for forwarding incoming requests from the clients directly to the ports of backend pods on the network layer without adding an additional hop in the middle. This proxy mode is much faster than the first mode because of operating in the kernel space and not adding an additional proxy server in the middle.
Kubernetes services can be exposed to external networks in two main ways. The first is using node ports by exposing dynamic ports on the nodes that forward traffic to the service ports. The second is using a load balancer configured via an ingress controller which can delegate requests to the services by connecting to the same overlay network. An ingress controller is a background process which may run in a container which listens to the Kubernetes API, dynamically configure and reloads a given load balancer according to a given set of ingresses. An ingress defines the routing rules based on hostnames and context paths using services.
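As an illustration, an ingress that routes a hostname and context path to a service might be sketched as follows; the host, service name and port are hypothetical.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.org              # hypothetical hostname
    http:
      paths:
      - path: /api                     # context path routed to the service
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
EOF
```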
Once an application is deployed on Kubernetes using the kubectl run command, it can be exposed to the external network via a load balancer as follows:
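For example, with an illustrative deployment name and port:

```bash
# Expose the deployment through a cloud load balancer on port 80
kubectl expose deployment my-app --type=LoadBalancer --port=80

# Watch for the external IP assigned by the underlying infrastructure
kubectl get service my-app --watch
```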
The above command will create a service of load balancer type and map it to the pods using the same selector label created when the pods were created. As a result, depending on how the Kubernetes cluster has been configured a load balancer service on the underlying infrastructure will get created for routing requests for the given pods either via the service or directly.
Applications that require persisting data on the filesystem may use volumes for mounting storage devices to ephemeral containers, similar to how volumes are used with VMs. Kubernetes has properly designed this concept by loosely coupling physical storage devices with containers through an intermediate resource called persistent volume claims (PVCs). A PVC defines the disk size and access mode (ReadWriteOnce, ReadOnlyMany, ReadWriteMany) and dynamically links a storage device to a volume defined against a pod. The binding process can either be done in a static way using PVs or dynamically by using a persistent storage provider. In both approaches, a volume will get linked to a PV one to one, and depending on the configuration, the data will be preserved even if the pods get terminated. According to the access mode used, multiple pods will be able to connect to the same disk and read/write.
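A sketch of a PVC and a pod that mounts it; the names, size and storage class are illustrative.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]      # single-node read/write
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard          # illustrative; depends on the cluster's provisioner
---
apiVersion: v1
kind: Pod
metadata:
  name: data-consumer
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
EOF
```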
Kubernetes provides a resource called DaemonSets for running a copy of a pod in each Kubernetes node as a daemon. Some of the use cases of DaemonSets are as follows:
A cluster storage daemon such as glusterd or ceph to be deployed on each node for providing persistent storage.
A log collection daemon such as fluentd or logstash to be run on every node for collecting container and Kubernetes component logs.
An ingress controller pod to be run on a collection of nodes for providing external routing.
One of the most difficult tasks in containerizing applications is designing the deployment architecture of stateful distributed components. Stateless components can be easily containerized as they may not have a predefined startup sequence, clustering requirements, point-to-point TCP connections, unique network identifiers, graceful startup and termination requirements, etc. Systems such as databases, big data analysis systems, distributed key/value stores and message brokers may have complex distributed architectures that require the above features. Kubernetes introduced the StatefulSets resource for supporting such complex requirements.
On high-level StatefulSets are similar to ReplicaSets except that it provides the ability to handle the startup sequence of pods, uniquely identify each pod for preserving its state while providing the following characteristics:
Stable, unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, graceful deletion and termination.
Ordered, automated rolling updates
Containers generally use environment variables for parameterizing their runtime configurations. However, typical enterprise applications use a considerable number of configuration files for providing static configurations required for a given deployment. Kubernetes provides a fabulous way of managing such configuration files using a simple resource called ConfigMaps without bundling them into the container images. ConfigMaps can be created using directories, files or literal values using the following CLI command:
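For example (the names, keys and file paths are illustrative):

```bash
# From literal key/value pairs
kubectl create configmap app-config --from-literal=LOG_LEVEL=INFO --from-literal=TZ=Asia/Kolkata

# From a single file or an entire directory of configuration files (paths are illustrative)
kubectl create configmap app-file --from-file=application.properties
kubectl create configmap app-dir --from-file=./config/
```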
Once a ConfigMap is created, it can be mounted to a pod using a volume mount. With this loosely coupled architecture, configurations of an already running system can be updated seamlessly just by updating the relevant ConfigMap and executing a rolling update process, which I will explain in one of the next sections. It might be important to note that currently ConfigMaps do not support nested folders; therefore, if there are configuration files available in a nested directory structure of the application, a ConfigMap would need to be created for each directory level.
Similar to ConfigMaps Kubernetes provides another valuable resource called Secrets for managing sensitive information such as passwords, OAuth tokens, and ssh keys. Otherwise updating that information on an already running system might require rebuilding the container images.
A secret can be created for managing basic auth credentials in the following way:
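For example (the user name and password are placeholders):

```bash
# Create a generic secret holding basic auth credentials (values are placeholders)
kubectl create secret generic basic-auth \
  --from-literal=username=admin \
  --from-literal=password='S3curePassw0rd'

# The values are stored base64-encoded; inspect the keys without revealing values
kubectl describe secret basic-auth
```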
Once a secret is created, it can be read by a pod either using environment variables or volume mounts. Similarly, any other type of sensitive information can be injected into pods using the same approach.
The above animated image illustrates how application updates can be rolled out for an already running application using the blue/green deployment method without having to take system downtime. This is another invaluable feature of Kubernetes which allows applications to seamlessly roll out security updates and backwards compatible changes without much effort. If the changes are not backwards compatible, a manual blue/green deployment might need to be executed using a separate deployment definition.
This approach allows a rollout to be executed for updating a container image using a simple CLI command:
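For example (the deployment, container name and image tag are illustrative; kubectl create deployment names the container after the image, nginx here):

```bash
# Roll out a new container image for the running deployment
kubectl set image deployment/my-app nginx=nginx:1.26
```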
Once a rollout is executed, the status of the rollout process can be checked as follows:
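For example:

```bash
# Watch the progress of the rolling update until it completes or fails
kubectl rollout status deployment/my-app
```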
Using the same kubectl set image deployment command, an update can be rolled back to a previous state.
Figure 10: Kubernetes Pod Autoscaling Model
Kubernetes allows pods to be manually scaled either using ReplicaSets or Deployments. The following CLI command can be used for this purpose:
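For example, with the illustrative deployment used above:

```bash
# Manually scale the deployment to five replicas
kubectl scale deployment my-app --replicas=5
```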
Figure 11: Helm and Kubeapps Hub
The Kubernetes community initiated a separate project for implementing a package manager for Kubernetes called Helm. This allows Kubernetes resources such as deployments, services, config maps, ingresses, etc to be templated and packaged using a resource called chart and allow them to be configured at the installation time using input parameters. More importantly, it allows existing charts to be reused when implementing installation packages using dependencies. Helm repositories can be hosted in public and private cloud environments for managing application charts. Helm provides a CLI for installing applications from a given Helm repository into a selected Kubernetes environment.
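As an illustration, installing a chart from a repository with overridden values might look like this with Helm 3; the repository URL, chart and values are hypothetical.

```bash
# Register a chart repository and refresh the local index
helm repo add example https://charts.example.org
helm repo update

# Install (or upgrade) a release into a namespace with overridden values
helm upgrade --install my-app example/my-app \
  --namespace team-a --create-namespace \
  --set image.tag=1.2.3 --set replicaCount=3
```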
Kubernetes has been designed with over a decade of experience of running containerized applications at scale at Google. It has already been adopted by the largest public cloud vendors and technology providers, and is currently being embraced by most of the software vendors and enterprises as this article is written. It even led to the inception of the Cloud Native Computing Foundation (CNCF) in the year 2015, was the first project to graduate under CNCF, and started streamlining the container ecosystem together with other container-related projects such as CNI, containerd, Envoy, Fluentd, gRPC, Jaeger, Linkerd, Prometheus, rkt and Vitess. The key reasons for its popularity and endorsement at such a level might be its flawless design, collaborations with industry leaders, making it open source, and always being open to ideas and contributions.
This article provides a DIGIT infra overview and guidelines for operational excellence while DIGIT is deployed on SDC, NIC or any commercial cloud, along with the recommendations and . It helps to plan the procurement and build the necessary capabilities to deploy and implement DIGIT.
In a shared control, the state program team/partners can consider these guidelines and must provide their own control implementation to the state’s cloud infrastructure and partners for a standard and smooth operational excellence.
DIGIT strongly recommends Site Reliability Engineering (SRE) principles as a key means to bridge development and operations by applying a software engineering mindset to system and IT administration topics. In general, an SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
Monitoring Tools Recommendations: Commercial clouds like , and GCP offer sophisticated monitoring solutions across various infra levels, like CloudWatch and StackDriver. In the absence of such managed services, we can look at the various best practices and tools listed below, which help monitor efficiently.
, , , ****
,
Segregation of duties and responsibilities.
SME and SPOCs for support along with the SLAs defined.
to manage incidents, converge and collaborate on various operational issues.
Monitoring dashboards at various levels like Infrastructure, Network and applications.
Transparency of monitoring data and collaboration between teams.
Periodic remote sync up meetings, acceptance and attendance to the meeting.
Ability to see stakeholders' calendar availability to schedule meetings.
Periodic (weekly, monthly) summary reports of the various infra, operations incident categories.
Communication channels and synchronization on a regular basis and also upon critical issues, changes, upgrades, releases etc.
While DIGIT is deployed on state cloud infrastructure, it is essential to identify and distinguish the responsibilities between the infrastructure, operations and implementation partners. Identify these teams, assign SPOCs, define responsibilities, and follow incident management to visualize and track issues and manage dependencies between teams. Essentially, these are monitored through dashboards and alerts are sent to the stakeholders proactively. The eGov team can provide consultation and training on a need basis for any of the below categories.
State program team - Refers to the owner for the whole DIGIT implementation, application rollouts, capacity building. Responsible for identifying and synchronizing the operating mechanism between the below teams.
Implementation partner - Refers to the DIGIT implementation, application performance monitoring for errors, logs scrutiny, TPS on peak load, distributed tracing, DB query analysis, etc.
Operations team - This team could be an extension of the implementation team and is responsible for DIGIT deployments, configurations, CI/CD, change management, traffic monitoring and alerting, log monitoring and dashboards, application security, DB backups, application uptime, etc.
State IT/Cloud team - Refers to the state infra team for the infra, network architecture, LAN network speed, internet speed, OS licensing and upgrade, patch, compute, memory, disk, firewall, IOPS, security, access, SSL, DNS, data backups/recovery, snapshots, capacity monitoring dashboard.
For access to the Compute Engine API, it has to be enabled at the .
The Google Service Account that has to be created must have the following three roles:
Compute Admin: roles/compute.admin
Service Account User: roles/iam.serviceAccountUser
Viewer: roles/viewer
If the gcloud CLI is installed, a service account can be created as follows:
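A sketch of the corresponding gcloud commands; the project ID and account name are placeholders.

```bash
PROJECT_ID=my-gcp-project          # placeholder
SA_NAME=kubermatic-machines        # placeholder

# Create the service account
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Grant the three required roles
for ROLE in roles/compute.admin roles/iam.serviceAccountUser roles/viewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role "$ROLE"
done

# Download the JSON key file for the service account
gcloud iam service-accounts keys create service-account.json \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```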
type
project_id
private_key_id
private_key
client_email
client_id
auth_uri
token_uri
auth_provider_x509_cert_url
client_x509_cert_url
The private key in the JSON file contains newlines as non-escaped "\n" strings. To avoid the resulting trouble, the machine controller expects the whole service account to be encoded in base64.
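For example, the downloaded JSON key file can be encoded in a single line (base64 -w0 is the GNU coreutils syntax):

```bash
# Encode the whole service account JSON without line wrapping
base64 -w0 service-account.json > service-account.b64
```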
The base64-encoded secret of the service account will be passed in the field serviceAccount of the cloudProviderSpec of the machine deployment. The encoded secret can be entered in the UI field Service Account.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Install
Install
Install
Install 6
Install
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
During the creation of a user cluster Kubermatic creates a dedicated VM folder in the root path on the Datastore (Defined in the ). That folder will contain all worker nodes of a user cluster.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The above figure illustrates the high-level application deployment model on Kubernetes. It uses a resource called a ReplicaSet for orchestrating containers. A ReplicaSet can be considered as a YAML or JSON based metadata file which defines the container images, ports, the number of replicas, activation health checks, liveness health checks, environment variables, volume mounts, security rules, etc. required for creating and managing the containers. Containers are always created on Kubernetes as groups called pods, which is again a Kubernetes metadata definition or resource. Each pod allows sharing the file system, network interfaces, operating system users, etc. among the containers using Linux namespaces, cgroups, and other kernel features. The ReplicaSets can be managed by another high-level resource called a Deployment for providing features for rolling out updates and handling their rollbacks.
The third proxy mode is IPVS, which is much similar to the second proxy mode; it makes use of an IPVS-based virtual server for routing requests without using iptable rules. IPVS is a transport layer load balancing feature which is available in the Linux kernel based on Netfilter and provides a collection of load balancing algorithms. The main reason for using IPVS over iptables is the performance overhead of syncing proxy rules when using iptables. When thousands of services are created, updating iptable rules takes a considerable amount of time compared to a few milliseconds with IPVS. Moreover, IPVS uses a hash table for looking up the proxy rules over sequential scans with iptables. More information on the introduction of IPVS proxy mode can be found in the “” presentation done by Huawei at KubeCon 2017.
Disks that support ReadWriteOnce will only be able to connect to a single pod and will not be able to share among multiple pods at the same time. However, disks that support ReadOnlyMany will be able to share among multiple pods at the same time in read-only mode. In contrast, as the name implies disks with ReadWriteMany support can be connected to multiple pods for sharing data in read and write mode. Kubernetes provides for supporting storage services available on public cloud platforms such as AWS EBS, GCE Persistent Disk, Azure File, Azure Disk and many other well-known storage systems such as NFS, Glusterfs, iSCSI, Cinder, etc.
A node monitoring daemon such as to be run on every node for monitoring the container hosts.
In the above, stable refers to preserving the network identifiers and persistent storage across pod rescheduling. Unique network identifiers are provided by using headless services as shown in the above figure. Kubernetes has provided examples of StatefulSets for deploying , and in a distributed manner.
In addition to ReplicaSets and StatefulSets, Kubernetes provides two additional controllers for running workloads in the background called Jobs and CronJobs. The difference between Jobs and CronJobs is that Jobs execute once and terminate, whereas CronJobs get executed periodically by a given time interval, similar to standard Linux cron jobs.
Deploying databases on container platforms for production usage is a slightly more difficult task than deploying applications, due to their requirements for clustering, point-to-point connections, replication, sharding, managing backups, etc. As mentioned previously, StatefulSets has been designed specifically for supporting such complex requirements, and there are a couple of options for running , and clusters on Kubernetes today. YouTube’s database clustering system Vitess, which is now a CNCF project, would be a great option for running MySQL at scale on Kubernetes with sharding. That being said, those options are still at very early stages, and if an existing production-grade database system is available on the given infrastructure, such as RDS on AWS, Cloud SQL on GCP, or an on-premise database cluster, it might be better to choose one of those options considering the installation complexity and maintenance overhead.
As shown in the above figure, this functionality can be extended by adding another resource called a Horizontal Pod Autoscaler (HPA) against a deployment for dynamically scaling the pods based on their actual resource usage. The HPA will monitor the resource usage of each pod via the resource metrics API and inform the deployment to change the replica count of the ReplicaSet accordingly. Kubernetes uses an upscale delay and a downscale delay for avoiding thrashing which could occur due to frequent resource usage fluctuations in some situations. Currently, HPA only provides support for scaling based on CPU usage. If needed, custom metrics can also be plugged in via the custom metrics API depending on the nature of the application.
A wide range of stable Helm charts for well-known software applications can be found in its and also in the central Helm server: .
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
A Google Service Account for the platform has to be created, see . The result is a JSON file containing the fields
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Tools/Skills | Specification | Weightage (1-5) | Yes/No
System Administration | Linux administration, troubleshooting, OS installation, package management, security updates, firewall configuration, performance tuning, recovery, networking, routing tables, etc. | 4 |
Containers/Dockers | Build/push docker containers, tune and maintain containers, startup scripts, troubleshooting docker containers. | 2 |
Kubernetes | Setup Kubernetes clusters on bare metal and VMs using kubeadm/kubespray, terraform, etc. Strong understanding of various Kubernetes components, configurations, kubectl commands, RBAC. Creating and attaching persistent volumes, log aggregation, deployments, networking, service discovery, rolling updates. Scaling pods, deployments, worker nodes, node affinity, secrets, configMaps, etc. | 3 |
Database Administration | Setup Postgres DB, set up read replicas, backup, log, DB RBAC setup, SQL queries | 3 |
Docker Registry | Setup docker registry and manage | 2 |
SCM/Git | Source code management, branches, forking, tagging, pull requests, etc. | 4 |
CI Setup | Jenkins setup, master-slave configuration, plugins, Jenkinsfile, groovy scripting, Jenkins CI jobs for Maven, Node application, deployment jobs, etc. | 4 |
Artifact management | Code artifact management, versioning | 1 |
Apache Tomcat | Web server setup, configuration, load balancing, sticky sessions, etc. | 2 |
WildFly JBoss | Application server setup, configuration, etc. | 3 |
Spring Boot | Build and deploy spring boot applications | 2 |
NodeJS | NPM setup and build node applications | 2 |
Scripting | Shell scripting, python scripting | 4 |
Log Management | Aggregating system and container logs, troubleshooting. Monitoring dashboard for logs using Prometheus, Fluentd, Kibana, Grafana, etc. | 3 |
WordPress | Multi-tenant portal setup and maintenance | 2 |
Topic
Reference
Preparedness Check
Git
Do you have a Git account?
Do you know how to clone a repository, pull updates, push updates?
Do you know how to give a pull request and merge the pull request?
Microservice Architecture
Do you know when to create a new service?
How to access other services?
ReactJS
How to create react app?
How to create a Stateful and Stateless Component?
How to use HOC as a wrapper?
Validations at form level using React.js and Redux
Postgres
How to create a database and set up privileges?
How to add an index on a table?
How to use aggregation functions in psql?
Postman
Call a REST API from Postman with proper payload and show the response
Setup any service locally(MDMS or user service has least dependencies) and check the API’s using postman
REST APIs
What are the principles to be followed when making a REST API?
When to use POST and GET?
How to define the request and response parameters?
Kafka
How to push messages on Kafka topic?
How does the consumer group work?
What are partitions?
Docker and Kubernetes
How to edit deployment configuration?
How to read logs?
How to go inside a Kubernetes pod?
How to create a docker file using a base image?
How to port-forward the pod to the local port?
JSON
How to write filters to extract specific data using jsonPaths?
YAML
How to read an API contract using swagger?
Zuul
What does Zuul do?
Maven
What is POM?
What is the purpose of maven clean install and how to do it?
What is the difference between the version and SNAPSHOT?
Springboot
How does Autowiring work in spring?
How to write a consumer/producer using spring Kafka?
How to make an API call to another service using restTemplate?
How to execute queries using JDBC Template?
Elastic search
How to write basic queries to fetch data from elastic search index?
Wordpress
DIGIT Architecture
What comes as part of core service, business service and municipal services?
How to call APIs of one service from another service?
DIGIT Core Services
Which are the core services in the DIGIT framework?
DIGIT DevOps
DIGIT MDMS
How to read master data from MDMS?
How to add new data in an existing Master?
Where is the MDMS data stored?
DIGIT UI Framework
How to add a new component to the framework?
How to use an existing component?
DSS
National Informatics Cloud
Details coming soon...
Looking at these requirements for a DevOps engineer, it is pretty clear that one should have a variety of skills to manage DIGIT DevOps.
Anyone involved in hiring DevOps engineers will realize that it is hard to find prospective candidates who have all the skills listed in this section.
Ultimately, the skill set needed for an incoming DevOps engineer would depend on the current and short-term focus of the operations team. A brand new team that is rolling out a new software service would require someone with good experience in infrastructure provisioning, deployment automation, and monitoring. A team that supports a stable product might require the service of an expert who could migrate home-grown automation projects to tools and processes around standard configuration management and continuous integration tools.
DevOps practice is a glue between engineering disciplines. An experienced DevOps engineer would end up working in a very broad swath of technology landscapes that overlaps with software development, system integration, and operations engineering.
An experienced DevOps engineer would be able to describe most of the technologies covered in the following sections. This is a comprehensive list of DevOps skills, useful both for comparing one’s expertise and as a reference template for acquiring new skills.
In theory, a template like this should be used only for assessing the current experience of a prospective hire. The remaining skills can be picked up on the job, on tasks that demand deep knowledge in certain areas. Therefore, the focus should be to hire smart engineers who have a track record of picking up new skills, rolling out innovative projects at work, and contributing to reputed open-source projects.
A DevOps engineer should have a good understanding of both classic (data centre-based) and cloud infrastructure components, even if the team has a dedicated infrastructure team.
This involves how real hardware (servers and storage devices) are racked, networked, and accessed from both the corporate network and the internet. It also involves the provisioning of shared storage to be used across multiple servers and the methods available for that, as well as infrastructure and methods for load balancing.
Hypervisors.
Virtual machines.
Object storage.
Running virtual machines on PC and Mac (Vagrant, VMWare, etc.).
Cloud infrastructure has to do with core cloud computing and storage components as they are implemented in one of the popular virtualization technologies (VMWare or OpenStack). It also involves the idea of elastic infrastructure and options available to implement it.
Network layers
Routers, domain controllers, etc.
Networks and subnets
IP address
VPN
DNS
Firewall
IP tables
Network access between applications (ACL)
Networking in the cloud (i.e., Amazon AWS)
Load balancing infrastructure and methods
Geographical load balancing
Understanding of CDN
Load balancing in the cloud
A DevOps engineer should have experience using specialized tools for implementing various DevOps processes. While Jenkins, Docker, Kubernetes, Terraform, Ansible, and the like are known to most DevOps practitioners, other tools might be obscure or not very obvious (such as the importance of knowing one major monitoring tool inside and out). Some tools, like source code control systems, are shared with development teams.
The list here has only examples of basic tools. An experienced DevOps engineer would have used some application or tool from all or most of these categories.
Expert-level knowledge of an SCM system such as Git or Subversion.
Knowledge of code branching best practices, such as Git-Flow.
Knowledge of the importance of checking in Ops code to the SCM system.
Experience using GitHub.
Experience using a major bug management system such as Bugzilla or Jira.
Ability to have a workflow related to the bug filing and resolution process.
Experience integrating SCM systems with the bug resolution process and using triggers or REST APIs.
Knowledge of Wiki basics.
Experience using MediaWiki, Confluence, etc.
Knowledge of why DevOps projects have to be documented.
Knowledge of how documents are organized on a Wiki-based system.
Experience building on Jenkins standalone, or dockerized.
Experience using Jenkins as a Continuous Integration (CI) platform.
CI/CD pipeline scripting using groovy
Experience with CI platform features such as:
Integration with SCM systems.
Secret management and SSH-based access management.
Scheduling and chaining of build jobs.
Source-code change based triggers.
Worker and slave nodes.
REST API support and Notification management.
Should know what artefacts are and why they have to be managed.
Experience using a standard artefacts management system such as Artifactory.
Experience caching third-party tools and dependencies in-house.
Should be able to explain configuration management.
Experience using any Configuration Management Database (CMDB) system.
Experience using open-source tools such as Cobbler for inventory management.
Ability to do both agent-less and agent-driven enforcement of configuration.
Experience using Ansible, Puppet, Chef, Cobbler, etc.
Knowledge of the workflow of released code getting into production.
Ability to push code to production with the use of SSH-based tools such as Ansible.
Ability to perform on-demand or Continuous Delivery (CD) of code from Jenkins.
Ability to perform agent-driven code pull to update the production environment.
Knowledge of deployment strategies, with or without an impact on the software service.
Knowledge of code deployment in the cloud (using auto-scaling groups, machine images, etc.).
Knowledge of all monitoring categories: system, platform, application, business, last-mile, log management, and meta-monitoring.
Status-based monitoring with Nagios.
Data-driven monitoring with Zabbix.
Experience with last-mile monitoring, as done by Pingdom or Catchpoint.
Experience doing log management with ELK.
Experience monitoring SaaS solutions (i.e., Datadog and Loggly).
To get an automation project up and running, a DevOps engineer builds new things such as configuration objects in an application and code snippets of full-blown programs. However, a major part of the work is glueing many things together at the system level on the given infrastructure. Such efforts are not different from traditional system integration work and, in my opinion, the ingenuity of an engineer at this level determines his or her real value on the team. It is easy to find cookbooks, recipes, and best practices for vendor-supported tools, but it would take experience working on diverse projects to gain the necessary skill set to implement robust integrations that have to work reliably in production.
Important system-level tools and techniques are listed here. The engineer should have knowledge about the following.
Users and groups on Linux.
Use of service accounts for automation.
Sudo commands, /etc/sudoers files, and passwordless access.
Using LDAP and AD for access management.
Remote access using SSH.
SSH keys and related topics.
SCP, SFTP, and related tools.
SSH key formats.
Managing access using configuration management tools.
Use of GPG for password encryption.
Tools for password management such as KeePass.
MD5, KMS for encryption/decryption.
Remote access with authentication from automation scripts.
Managing API keys.
Jenkins plugins for password management.
Basics of compilers and runtimes such as javac and Node.js.
Make and Makefile, npm, Maven, Gradle, etc.
Code libraries in Node, Java, Python, React etc.
Build artefacts such as JAR, WAR and node modules.
Running builds from Jenkins.
Packaging files: ZIP, TAR, GZIP, etc.
Packaging for deployment: RPM, Debian, DNF, Zypper, etc.
Packaging for the cloud: AWS AMI, VMWare template, etc.
Use of Packer.
Docker and containers for microservices.
Use of artefacts repository: Distribution and release of builds; meeting build and deployment dependencies
Serving artefacts from a shared storage volume
Mounting locations from cloud storage services such as AWS S3
Artifactory as artefacts server
SCP, Rsync, FTP, and SSL counterparts
Via shared storage
File transfer with cloud storage services such as AWS S3
Code pushing using system-level file transfer tools.
Scripting using SSH libraries such as Paramiko.
Orchestrating code pushes using configuration management tools.
Use of crontab.
Running jobs in the background; use of Nohup.
Use of screen to launch long-running jobs.
Jenkins as a process manager.
Typical uses of find, df, du, etc.
A comparison of popular distributions.
Checking OS release and system info.
Package management differences.
OS Internals and Commands
Typical uses of sed, awk, grep, tr, etc.
Scripting using Perl, Python.
Regular expressions.
Support for regular expressions in Perl and Python.
Sample usages and steps to install these tools:
nc
netstat
traceroute
vmstat
lsof
top
nslookup
ping
tcpdump
dig
sar
uptime
ifconfig
route
One of the attributes that helps differentiate a DevOps engineer from other members in the operations team, like sysadmins, DBAs, and operations support staff, is his or her ability to write code. The coding and scripting skill is just one of the tools in the DevOps toolbox, but it's a powerful one that a DevOps engineer would maintain as part of practising his or her trade.
Coding is the last resort when things cannot be integrated by configuring and tweaking the applications and tools that are used in an automation project.
Many times, a few lines of Bash script could be the best glue code for integrating two components in the whole software system. A DevOps engineer should have basic shell scripting skills, and Bash is the most popular shell right now.
If a script has to deal with external systems and components or it's more than just a few lines of command-lines and dealing with fairly complex logic, it might be better to write that script in an advanced scripting language like Python, Perl, or Ruby.
Knowledge of Python would make your life easier when dealing with DevOps applications such as Ansible, which uses Python syntax to define data structures and implement conditionals for defining configurations.
One of the categories of projects a DevOps engineer would end up doing is building dashboards. Though dashboarding features are found with most of the DevOps tools, those are specific to the application, and there will be a time when you may require to have a general-purpose dashboard with more dynamic content than just static links and text.
Another requirement is to build web UI for provisioning tools to present those as self-service tools to user groups.
In both these cases, deep web programming skills are not required. Knowledge of a web-programming-friendly language such as PHP and a JavaScript/CSS/HTML library would be enough to get things started. It is also important for the DevOps engineer to know the full stack, in this case LAMP, for building and running the web apps.
Almost every application and tool that is used for building, deploying, and maintaining software systems use configuration files. While manual reading of these files might not require any expertise, a DevOps engineer should know how config files in such formats are created and parsed programmatically.
A DevOps engineer should have a good understanding of these formats:
INI.
XML.
JSON.
YAML.
The engineer should also know how these formats are parsed in his/her favourite scripting language.
The wide acceptance of REST API as a standard to expose features that other applications can use for system integration made it a feature requirement for any application that wants to be taken seriously. The knowledge of using REST API has become an important skill for DevOps engineer.
HTTP/HTTPS: REST APIs are based on HTTP/HTTPS protocol and a solid understanding of its working is required. Knowledge of HTTP headers, status codes, and main verbs GET, POST, and PUT.
REST API basics: Normal layout of APIs defined for an application.
Curl and Wget: Command-line tools to access REST API and HTTP URLs. Some knowledge of the support available for HTTP protocol in scripting languages will be useful and that would be an indication of working with REST APIs.
Authentication methods: Cookie-based and OAuth authentication; API keys; use of If-Match and If-None-Match set of HTTP headers for updates.
API management tools: If the application you support provides an API for the users, most probably, its usage will be managed by some API Gateway tool. Though not an essential skill, experience in this area would be good if one works on the API provider side.
There was a time when mere knowledge of programming with an RDBMS was enough for an application developer and system integrator to manage application data. Now, with the wide adoption of Big Data platforms like Hadoop and NoSQL systems to process and store data, a DevOps engineer faces varied data-related requirements from one project to another. Core skills are the following:
RDBMS: MySQL, Postgres, etc.; knowledge of one or more is important.
Setting up and configuring PostgreSQL: As an open-source database used with many other tools in the DevOps toolchain, consider this a basic requirement for a DevOps engineer. If one hasn’t done this, he or she might not have done enough yet.
Running queries from a Bash script: How to run a database query via a database client from a Bash script and use the output. MySQL is a good example.
Database access from Perl/PHP/Python: All the major scripting languages provide modules to access databases and that can be used to write robust automation scripts. Examples are Perl DBI and Python’s MySQLdb module.
DB Backups: Migration, Logging, monitoring and cleanup.
Those who have built cloud infrastructure with a focus on automation and versioning should know some of these (or similar) tools:
cloud-init: cloud-init can be used to configure a virtual machine when it is spun up. This is very useful when a node is spun up from a machine image with baseline or even application software already baked in (a minimal example follows this list).
AWS/Azure/GCloud CLI: If the application runs on Commercial cloud, knowledge of CLI is needed, which would be handy to put together simple automation scripts.
Terraform: HashiCorp’s Terraform is an important tool if the focus would be to provision infrastructure as code (IaaS). Using this, infrastructure can be configured independently of the target cloud or virtualization platform.
Ansible: It can be used to build machine images for a variety of virtualization technologies and cloud platforms, it is useful if the infrastructure is provisioned in a mixed or hybrid cloud environment.
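As a minimal sketch of the cloud-init approach mentioned above (the package names and the automation user are illustrative assumptions), a #cloud-config file passed as instance user data might look like this:

```yaml
#cloud-config
# Runs on first boot of the VM: update packages, create an automation user, start Docker
package_update: true
packages:
  - docker.io
  - git
users:
  - name: deployer                   # hypothetical service account for automation
    groups: [sudo, docker]
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
  - systemctl enable --now docker    # make sure Docker is running after boot
```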
In a rush to get things rolled out, one of the things left half-done is adding enough error handling in scripts. Automation scripts that are not robust can cause major production issues, which could impact the credibility of DevOps efforts itself. A DevOps engineer should be aware of the following best practices in error handling and logging:
The importance of error handling in automated scripts.
Error handling in Bash.
Error handling in Python.
Logging errors in application and system logs.
| Program Execution Teams | Team Size | Roles/Actors | Proposed Composition | Timelines | Location |
| --- | --- | --- | --- | --- | --- |
| Program Management | 2 | Program Leader, Program Manager | State / Deputation (External Consultants can be onboarded here if required) | Full-time | Central |
| Domain Experts | 1-2 | Domain experts for mCollect and Trade License | Senior resources from ULBs / State Govt Depts. who can help in interpretation of rules and propose reforms as required | From initiation till requirement finalisation/rollout | Central |
| Coordination + Execution Team | 1 per ULB | Nodal Officers per ULB | ULB Staff (Tax Inspectors / Tax Superintendents / Revenue Officers etc.) | Full-time | Local |
| Technology Implementation Team | 6-8 | 1 Program Manager, 1 Sr. Developer, 1 Jr. Developer, 1 Tester, 1 Data Migration Specialist / DBA, 1 DevOps Lead, 1 Business Analyst, 1 Content / Documentation Expert | Outsourced | From initiation to rollout | Central |
| Data Preparation and Coordination (MIS) | 1-2 | MIS / Data / Cross-functional | Outsourced / Contract | Full-time | Central |
| Capacity Building / IEC | 2-4 | Content developers and Trainers / Process Experts | Outsourced / Contract | For 6-12 months post rollout | Central |
| Monitoring | 4-6 | Nodal Officers, one per 2/3 Districts | State / Deputation | For 3-6 months post rollout | Semi-local |
| Help Desk and Support | 4-5 | Help Desk Support Analyst | Outsourced | For 12 months post rollout / as per contract | Central |
Title
Responsibilities
Qualifications
Program Manager
Day-to-day project ownership/management and coordination within the project team as well as with the State PMU
MBA / Relevant
Required 10+ years of Project/Program management experience implementing Tally / ERP Systems in large government deployments with extensive understanding of finance and accounting systems
Tech Lead / Sr. Developer
Decides all technical aspects/solutioning for the project in coordination with project plan and aligns resources to achieve project milestones
B.Tech / M.Tech / MCA
Required 8+ years of technology/solutioning experience. Preferred experience in deploying and maintaining large integrated platforms/systems
Jr. Developer / Support Engineer
Takes leadership with respect to complex technical solutions
B.Tech / M.Tech / MCA
Required 5+ years IT experience with skills such as Java, Core Java, PostgreSQL, GIT, Linux, Kibana, Elasticsearch, JIRA - Incident management, ReactJS, SpringBoot, Microservices, NodeJS
Tester / QA
Tests feature developed for completeness and accuracy with respect to defined business requirements
B.Tech / M.Tech / MCA
Required 5+ years of IT experience with skills such as: user stories and/or use cases/requirements; executing all levels of testing (System, Integration, and Regression); JIRA - incident management
Data Migration Specialist
Works with data collection teams to collect, clean and migrate legacy data into the platform
Candidate with strong exposure to data migration and PostgreSQL with 5+ years of experience in migrating data between disparate systems. Hands-on experience with MS Excel and Macros.
DevOps Lead
In the case of the cloud: Owns all deployments and configurations needed for running the platform in the cloud
In the case of SDC: Works with SDC teams on deployments and configurations needed for running the platform in SDC
5+ years of overall experience; strong hands-on Linux experience (RHEL/CentOS, Debian/Ubuntu, CoreOS); strong hands-on experience in managing AWS/Azure cloud instances; strong scripting skills (Bash, Python, Perl) with automation; strong hands-on experience with Git/GitHub, Maven, DNS/networking fundamentals; strong knowledge of the Jenkins CI/CD continuous integration tool; good knowledge of infrastructure automation tools (Ansible, Terraform); good hands-on experience with Docker containers, including container management platforms like Kubernetes; strong hands-on experience with web servers (Apache/NGINX) and application servers (JBoss/Tomcat/Spring Boot)
Business Analyst
Works with domain experts to understand business and functional requirements and converts them into workable features
Tests feature developed for completeness and accuracy with respect to defined business requirements
BE / Masters / MBA
5+ years of experience in: Creating a detailed business analysis, outlining problems, opportunities and solutions for a business like Planning and monitoring, Variance Analysis, Reporting, Defining business requirements and reporting them back to stakeholders
Content / Documentation Expert
Works on documentation and preparation of project-specific artefacts
Good experience in creating and maintaining documentation (multi-lingual) for government programs. Ability to write clearly in a user-friendly manner. Proficiency in MS Office tools
Data Preparation and Coordination (MIS)
Works with ULB teams to gather data necessary for the deployment of the platform and guides them in operating the platform
BCom / MCom / Accounting Background
1+ year of experience along with a good understanding of MS Excel and Macros and good typing skills
Capacity Building / IEC Personnel
Works with ULB teams on-site on creating content and delivering training
BCom / MCom / Accounting Background
Required demonstrated experience in building the capacity of government personnel to influence change;
Proficiency with IEC content creation/delivery, theories, methods and technology in the capacity-building field, especially in supporting multi-stakeholder processes
Help Desk Support Team
The first line of support for ULB teams to answer all calls which arrive with respect to the platform
BCom / MCom / Accounting Background
Knowledge of managing helpdesk, fluent in the local language, good typing skills. Strong process and application knowledge
Since there are many DIGIT services and the development code is spread across various git repos, you need to understand the concept of CI/CD as a service, which is open-sourced. This page also guides you through the process of creating a CI/CD pipeline.
As a developer - To integrate any new service/app to the CI/CD below is the starting point:
Once the desired service is ready for the integration: decide the service name, type of service, whether DB migration is required or not. While you commit the source code of the service to the git repository, the following file should be added with the relevant details which are mentioned as below:
build-config.yml – It is present under the build directory in each repository
This file contains the below details which are used for creating the automated Jenkins pipeline job for your newly created service.
While integrating a new service/app, the above content needs to be added in the build-config.yml file of that app repository. For example: If we are on-boarding a new service called egov-test, then the build-config.yml should be added as mentioned below.
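A sketch of what the egov-test entry could look like is shown below; the key names and paths are assumptions and should be verified against an existing build/build-config.yml in the repository.

```yaml
# Illustrative sketch only - copy the exact key names from an existing build-config.yml
config:
  - name: "builds/core-services/egov-test"    # Jenkins folder/job path for the service
    build:
      - work-dir: "egov-test"                 # directory of the service inside the repo
        image-name: "egov-test"               # docker image that will be built and pushed
        dockerfile: "build/maven/Dockerfile"  # shared maven build Dockerfile
```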
If a job requires multiple images to be created (DB Migration) then it should be added as below,
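A hedged sketch of the multi-image case follows: a second build entry produces the DB migration image alongside the service image (the work-dir and image names below are assumptions).

```yaml
# Illustrative sketch only - the second entry builds the flyway DB migration image
config:
  - name: "builds/core-services/egov-test"
    build:
      - work-dir: "egov-test"
        image-name: "egov-test"
        dockerfile: "build/maven/Dockerfile"
      - work-dir: "egov-test/src/main/resources/db"   # assumed location of migration scripts
        image-name: "egov-test-db"                    # conventional "-db" suffix for the migration image
```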
Note - If a new repository is created then the build-config.yml should be created under the build folder and then the config values are added to it.
The git repository URL is then added to the Job Builder parameters
When the Jenkins "job builder" job is executed, the CI pipeline gets created automatically based on the above details in build-config.yml. For example, the egov-test job will be created under the core-services folder in Jenkins because build-config.yml was edited under core-services, and this should be done on the "master" branch only. Once the pipeline job is created, it can be executed for any feature branch with build parameters (specifying which branch to build – master or any feature branch).
As a result of the pipeline execution, the respective app/service docker image will be built and pushed to the Docker repository.
The Jenkins CI pipeline is configured and managed 'as code'.
New Service Integration - Example URL - https://builds.digit.org/
Job Builder – Job Builder is a generic Jenkins job which creates the Jenkins pipelines automatically; these pipelines are then used to build the application, create its docker image and push the image to the docker repository. The Job Builder job requires the git repository URL as a parameter. It clones the respective git repository, reads the build/build-config.yml file of that repository and uses it to create the service build job.
Check whether the git repository URL is available in ci.yaml.
If the git repository URL is available, build the Job Builder job.
If the git repository URL is not available, ask the DevOps team to add it.
The services are deployed and managed on a Kubernetes cluster in cloud platforms like AWS, Azure, GCP, OpenStack, etc. Here, we use Helm charts to manage and generate the Kubernetes manifest files and use them for further deployment to the respective Kubernetes cluster. Each service is created as a chart, which will have the below-mentioned files in it.
To deploy a new service, we need to create a helm chart for it. The chart should be created under the charts/helm directory in the DIGIT-DevOps repository.
We have an automatic helm chart generator utility which needs to be installed on the local machine. The utility will prompt for user inputs about the newly developed service (app specifications) for creating the helm chart, and the chart with the configuration values (created based on the inputs provided) will be generated for the user.
Name of the service? test-service
Application Type? NA
Kubernetes health checks to be enabled? Yes
Flyway DB migration container necessary? No
Expose service to the internet? Yes
Route through API gateway [zuul]? No
Context path? hello
The generated chart will have the following files.
This chart can also be modified further based on user requirements.
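To give an idea of what the generated configuration captures, here is a rough values.yaml sketch reflecting the answers above; every key name here is an illustrative assumption, and the authoritative layout is whatever the generator writes under charts/helm in the DIGIT-DevOps repository.

```yaml
# Illustrative only - actual keys are defined by the eGov chart generator and templates
replicas: 1
image:
  repository: test-service           # "Name of the service? test-service"
healthChecks:
  enabled: true                      # "Kubernetes health checks to be enabled? Yes"
  livenessProbePath: /hello/health
  readinessProbePath: /hello/health
ingress:
  enabled: true                      # "Expose service to the internet? Yes"
  zuul: false                        # "Route through API gateway [zuul]? No"
  context: hello                     # "Context path? hello"
initContainers:
  dbMigration:
    enabled: false                   # "Flyway DB migration container necessary? No"
```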
The Deployment of manifests to the Kubernetes cluster is made very simple and easy. We have Jenkins Jobs for each state and environment-specific. We need to provide the image name or the service name in the respective Jenkins deployment job.
The deployment Jenkins job internally performs the following operations,
Reads the image name or the service name given and finds the chart that is specific to it.
Generates the Kubernetes manifests files from the chart using helm template engine.
Executes the deployment manifest with the specified docker image(s) against the Kubernetes cluster.
For provisioning Kubernetes clusters with the Azure cloud provider, Kubermatic needs a service account with (at least) the Azure role Contributor. Follow the steps below to create a matching service account.
Log in to Azure with the Azure CLI (az login). This command opens a window in your default browser where you can authenticate. After you have successfully logged in, get your subscription ID.
Get your tenant ID.
Create a new app (service principal) with the Azure CLI.
Enter provider credentials using the values from the step “Prepare Azure Environment” into the Kubermatic Dashboard:
Client ID: take the value of appId
Client Secret: take the value of password
Tenant ID: your tenant ID
Subscription ID: your subscription ID
DIGIT is a container-based platform orchestrated on Kubernetes. Let's discuss some key security practices to protect the infrastructure.
Security is always a difficult subject to approach, whether because of a lack of experience or because it is hard to judge what level of security is right for what you have to secure.
Security is a major concern when it comes to government systems and infrastructure. As architects, we can assume that working with technically educated people (engineers, experts) and tools (systems, frameworks, IDEs) should prevent key VAPT issues.
However, it is quite difficult to stop various categories of people from attempting to hack the systems.
Each release contains not only bug fixes but also new security measures; to take advantage of them, we recommend working with the newest stable version.
Updates and support can be harder to manage than adopting the new features offered in releases, so plan your updates at least once a quarter. Using a managed Kubernetes provider can significantly simplify updates.
Use RBAC (Role-Based Access Control) to regulate who can access what and with which rights. RBAC is usually enabled by default in Kubernetes version 1.6 and later (or later for some providers), but if you have upgraded since then and didn't change the configuration, you should double-check your settings.
However, enabling RBAC isn't enough; it still must be used effectively. In the general case, rights to the whole cluster (cluster-wide) should be avoided in favour of rights scoped to specific namespaces. Avoid giving anyone cluster-administrator privileges even for debugging; it is much safer to grant only the rights that are necessary, and only when they are needed.
If the application requires access to the Kubernetes API, create separate service accounts and give them the minimum set of rights required for each use case. This approach is far better than giving excessive privileges to the default account in the namespace.
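A minimal sketch of such a namespace-scoped service account with read-only access to pods is shown below; the account name, namespace and verbs are illustrative assumptions.

```yaml
# ServiceAccount limited to reading pods in a single namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader                  # hypothetical account name
  namespace: egov                   # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: egov
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"] # read-only access, nothing else
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: egov
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: egov
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```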
Creating separate namespaces is vital as the first level of component isolation. It is much easier to regulate security settings, for instance network policies, when different types of workloads are deployed in separate namespaces.
A good practice to limit the potential consequences of a compromise is to run workloads with sensitive data on a dedicated set of machines. This approach reduces the risk of a less secure application accessing the application with sensitive data running in the same container runtime environment or on the same host.
For example, the kubelet of a compromised node usually has access to the contents of secrets only if they are mounted on pods scheduled to run on that same node. If important secrets can be found on multiple cluster nodes, the attacker has more opportunities to obtain them.
Separation can be done using node pools (in the cloud or for on-premises), as well as Kubernetes controlling mechanisms, such as namespaces, taints, tolerations, and others.
Sensitive metadata, for instance kubelet administrative credentials, can be stolen or used with malicious intent to escalate privileges in a cluster. For example, a recent find within Shopify's bug bounty showed in detail how a user could exceed their authority by receiving metadata from a cloud provider using specially generated data for one of the microservices.
The GKE metadata concealment feature changes the mechanism for deploying the cluster in such a way that this problem is avoided, and we recommend using it until a permanent solution is implemented.
Network Policies — allow you to control access to the network in and out of containerized applications. To use them, you must have a network provider with support for such a resource. For managed Kubernetes solution providers such as Google Kubernetes Engine (GKE), support will need to be enabled.
Once everything is ready, start with simple default network policies — for example, blocking (by default) traffic from other namespaces.
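A simple default policy of that kind might look like the sketch below; the namespace is an assumption. It allows ingress only from pods in the same namespace, effectively blocking traffic from other namespaces.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: egov                   # hypothetical namespace
spec:
  podSelector: {}                   # applies to every pod in this namespace
  ingress:
    - from:
        - podSelector: {}           # only pods from the same namespace may connect
```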
Pod Security Policy sets the default values used to start workloads in the cluster. Consider defining a policy and enabling the Pod Security Policy admission controller: the instructions for these steps vary depending on the cloud provider or deployment model used.
In the beginning, you might want to disable the NET_RAW capability in containers to protect yourself from certain types of spoofing attacks.
To improve host security, you can follow these steps:
Ensure that the host is securely and correctly configured. One way is CIS Benchmarks; Many products have an auto checker that automatically checks the system for compliance with these standards.
Monitor the network availability of important ports. Ensure that the network blocks access to the ports used by the kubelet, including 10250 and 10255. Consider restricting access to the Kubernetes API server to trusted networks only. In clusters that did not require authentication and authorization for the kubelet API, attackers have used access to such ports to launch cryptocurrency miners.
Minimize administrative access to Kubernetes hosts. Access to cluster nodes should in principle be limited: for debugging and solving other problems, you can as a rule do without direct access to the node.
Make sure that audit logs are enabled and that you are monitoring for the occurrence of unusual or unwanted API calls in them, especially in the context of any authorization failures — such entries will have a message with the “Forbidden” status. Authorization failures can mean that an attacker is trying to take advantage of the credentials obtained.
Managed solution providers (including GKE) provide access to this data in their interfaces and can help you set up notifications in case of authorization failures.
Follow these guidelines for a more secure Kubernetes cluster. Remember that even after the cluster is configured securely, you need to ensure security in other aspects of the configuration and operation of containers. To improve the security of the technology stack, study the tools that provide a central system for managing deployed containers, constantly monitoring and protecting containers and cloud-native applications.
State Data Centres with On-Premise Kubernetes Clusters
Running Kubernetes on-premise gives a cloud-native experience, and the SDC becomes cloud-agnostic when it comes to deploying DIGIT.
Whether States have their own on-premise data centre or have decided to forego the various managed cloud solutions, there are a few things one should know when getting started with on-premise K8s.
One should be familiar with Kubernetes and one should know that the control plane consists of the Kube-apiserver, Kube-scheduler, Kube-controller-manager and an ETCD datastore. For managed cloud solutions like Google’s Kubernetes Engine (GKE) or Azure’s Kubernetes Service (AKS) it also includes the cloud-controller-manager. This is the component that connects the cluster to the external cloud services to provide networking, storage, authentication, and other feature support.
To successfully deploy a bespoke Kubernetes cluster and achieve a cloud-like experience on the SDC, one needs to replicate all the same features you get with a managed solution. At a high level, this means that we probably want to:
Automate the deployment process
Choose a networking solution
Choose a storage solution
Handle security and authentication
Let us look at each of these challenges individually, and we’ll try to provide enough of an overview to aid you in getting started.
Using a tool like Ansible can make deploying Kubernetes clusters on-premise trivial.
When deciding to manage your own Kubernetes clusters, we need to set up a few proof-of-concept (PoC) clusters to learn how everything works, perform performance and conformance tests, and try out different configuration options.
After this phase, automating the deployment process is an important if not necessary step to ensure consistency across any clusters you build. For this, you have a few options, but the most popular are:
kubeadm: a low-level tool that helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices
kubespray: an Ansible playbook that helps deploy production-ready clusters
If you are already using Ansible, kubespray is a great option; otherwise, we recommend writing automation around kubeadm using your preferred playbook tool after using it a few times. This will also increase your confidence and knowledge in the tooling surrounding Kubernetes.
When designing clusters, choosing the right container networking interface (CNI) plugin can be the hardest part. This is because choosing a CNI that will work well with an existing network topology can be tough. Do you need BGP peering capabilities? Do you want an overlay network using vxlan? How close to bare-metal performance are you trying to get?
There are a lot of articles that compare the various CNI provider solutions (calico, weave, flannel, kube-router, etc.) that are must-reads like the benchmark results of Kubernetes network plugins article. We usually recommend Project Calico for its maturity, continued support, and large feature set or flannel for its simplicity.
For ingress traffic, you’ll need to pick a load-balancer solution. For a simple configuration, you can use MetalLB, but if you’re lucky enough to have F5 hardware load-balancers available, we recommend checking out the K8s F5 BIG-IP Controller. The controller supports connecting your network plugin to the F5 through either vxlan or BGP peering. This gives the controller full visibility into pod health and provides the best performance.
Kubernetes provides a number of included storage volume plugins. If you’re going on-premise, you’ll probably want to use a network-attached storage (NAS) option to avoid forcing pods to be pinned to specific nodes.
For a cloud-like experience, you’ll need to add a plugin to dynamically create persistent volume objects that match the user’s persistent volume claims. You can use dynamic provisioning to reclaim these volume objects after a resource has been deleted.
Pure Storage has a great example helm chart, the Pure Service Orchestrator (PSO), that provides smart provisioning although it only works for Pure Storage products.
As anyone familiar with security knows, this is a rabbit-hole. You can always make your infrastructure more secure and should be investing in continual improvements.
Including different Kubernetes plugins can help build a secure, cloud-like experience for your users
When designing on-premise clusters you’ll have to decide where to draw the line. To really harden your cluster’s security you can add plugins like:
istio: provides the underlying secure communication channel, and manages authentication, authorization, and encryption of service communication at scale
gVisor: is a user-space kernel, written in Go, that implements a substantial portion of the Linux system surface
vault: secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data
For user authentication, we recommend checking out guard, which will integrate with an existing authentication provider. If you’re already using GitHub teams, this could be a no-brainer.
Hope this has given you a good idea of deploying, networking, storage, and security for you to take the leap into deploying your own on-premise Kubernetes clusters. Like we mentioned above, the team will want to build proof-of-concept clusters, run conformance and performance tests, and really become experts on Kubernetes if you’re going to be using it to run DIGIT on production.
We’ll leave you with a few other things the team should be thinking of:
Externally backing up Kubernetes YAML, namespaces, and configuration files
Running applications across clusters in an active-active configuration to allow for zero-downtime updates
Running game days like deleting the CNI to measure and improve time-to-recovery
This section contains a list of documents elaborating on the key concepts aiding the deployment of the DIGIT platform.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
An overview of the various probes we can set up to ensure that service deployments and the availability of the service are verified automatically.
Probes determine the state of a service based on readiness, liveness, and startup checks so that unhealthy situations can be detected and dealt with. It may happen that the application needs to initialize some state, make database connections, or load data before handling application logic. The gap between when the application is actually ready and when Kubernetes thinks it is ready becomes an issue when the deployment begins to scale and unready applications receive traffic and send back 500 errors.
Many developers assume that basic pod setup is adequate, especially when the application inside the pod is configured with a daemon process manager (e.g. PM2 for Node.js). However, since Kubernetes deems a pod healthy and ready for requests as soon as all of its containers start, the application may receive traffic before it is actually ready.
Kubernetes versions up to 1.15 support readiness and liveness probes. Startup probes were added in 1.16 as an alpha feature and graduated to beta in 1.18 (WARNING: 1.16 deprecated several Kubernetes APIs; use the migration guide to check for compatibility).
All the probes have the following parameters:
initialDelaySeconds: number of seconds to wait before initiating liveness or readiness probes
periodSeconds: how often to check the probe
timeoutSeconds: number of seconds before marking the probe as timing out (failing the health check)
successThreshold: minimum number of consecutive successful checks for the probe to pass
failureThreshold: number of retries before marking the probe as failed. For liveness probes, this will lead to the pod restarting. For readiness probes, this will mark the pod as unready.
Readiness probes are used to let kubelet know when the application is ready to accept new traffic. If the application needs some time to initialize state after the process has started, configure the readiness probe to tell Kubernetes to wait before sending new traffic. A primary use case for readiness probes is directing traffic to deployments behind a service.
One important thing to note with readiness probes is that it runs during the pod’s entire lifecycle. This means that readiness probes will run not only at startup but repeatedly throughout as long as the pod is running. This is to deal with situations where the application is temporarily unavailable (i.e. loading large data, waiting on external connections). In this case, we don’t want to necessarily kill the application but wait for it to recover. Readiness probes are used to detect this scenario and not send traffic to these pods until it passes the readiness check again.
On the other hand, liveness probes are used to restart unhealthy containers. The kubelet periodically pings the liveness probe, determines the health, and kills the pod if it fails the liveness check. Liveness checks can help the application recover from a deadlock situation. Without liveness checks, Kubernetes deems a deadlocked pod healthy since the underlying process continues to run from Kubernetes’s perspective. By configuring the liveness probe, the kubelet can detect that the application is in a bad state and restarts the pod to restore availability.
Startup probes are similar to readiness probes but only executed at startup. They are optimized for slow-starting containers or applications with unpredictable initialization processes. With readiness probes, we can configure initialDelaySeconds to determine how long to wait before probing for readiness. Now consider an application that occasionally needs to download large amounts of data or do an expensive operation at the start of the process. Since initialDelaySeconds is a static number, we are forced to always assume the worst-case scenario (or extend the failureThreshold, which may affect long-running behaviour) and wait for a long time even when that application does not need to carry out long-running initialization steps. With startup probes, we can instead configure failureThreshold and periodSeconds to model this uncertainty better. For example, setting failureThreshold to 15 and periodSeconds to 5 means the application will get 15 x 5 = 75s to start up before the probe fails.
Now that we understand the different types of probes, we can examine the three different ways to configure each probe.
The kubelet sends an HTTP GET request to an endpoint and checks for a 2xx or 3xx response. You can reuse an existing HTTP endpoint or set up a lightweight HTTP server for probing purposes (e.g. an Express server with a /healthz endpoint).
HTTP probes take in additional parameters:
host: hostname to connect to (default: pod's IP)
scheme: HTTP (default) or HTTPS
path: path on the HTTP/S server
httpHeaders: custom headers if you need header values for authentication, CORS settings, etc.
port: name or number of the port to access the server
If you just need to check whether or not a TCP connection can be made, you can specify a TCP probe. The pod is marked healthy if it can establish a TCP connection. Using a TCP probe may be useful for a gRPC or FTP server where HTTP calls may not be suitable.
Finally, a probe can be configured to run a shell command. The check passes if the command returns with exit code 0; otherwise, the pod is marked as unhealthy. This type of probe may be useful if it is not desirable to expose an HTTP server/port or if it is easier to check initialization steps via command (e.g. check if a configuration file has been created, run a CLI command).
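Putting the three probe types together, a container spec might look like the sketch below; the image, port and endpoint paths are placeholder assumptions.

```yaml
# Illustrative container spec combining startup, readiness and liveness probes
containers:
  - name: app
    image: example/app:1.0          # placeholder image
    ports:
      - containerPort: 8080
    startupProbe:                   # gives a slow-starting app up to 15 x 5 = 75s
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 15
      periodSeconds: 5
    readinessProbe:                 # gates traffic until the app can serve requests
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                  # restarts the container if it stops responding
      tcpSocket:
        port: 8080
      periodSeconds: 20
      failureThreshold: 3
```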
The exact parameters for the probes depend on your application, but here are some general best practices to get started:
For older (≤ 1.15) Kubernetes clusters, use a readiness probe with an initial delay to deal with the container startup phase (use p99 times for this). But make this check lightweight, since the readiness probe will execute throughout the entire lifecycle of the pod. We don’t want the probe to timeout because the readiness check takes a long time to compute.
For newer (≥ 1.16) Kubernetes clusters, use a startup probe for applications with unpredictable or variable startup times. The startup probe may share the same endpoint (e.g. /healthz) as the readiness and liveness probes, but set its failureThreshold higher than for the other probes to account for longer start times while keeping a more reasonable time-to-failure for liveness and readiness checks.
Readiness and liveness probes may share the same endpoint if the readiness probes aren’t used for other signalling purposes. If there’s only one pod (i.e. using a Vertical Pod Autoscaler), set the readiness probe to address the startup behaviour and use the liveness probe to determine health. In this case, marking the pod unhealthy means downtime.
Readiness checks can be used in various ways to signal system degradation. For example, if the application loses connection to the database, readiness probes may be used to temporarily block new requests and allow the system to reconnect. It can also be used to load balance work to other pods by marking busy pods as not ready.
In short, well-defined probes generally lead to better resilience and availability. Be sure to observe the startup times and system behaviour to tweak the probe settings as the applications change.
Finally, given the importance of Kubernetes probes, you can use a Kubernetes resource analysis tool to detect missing probes. These tools can be run against existing clusters or be baked into the CI/CD process to automatically reject workloads without properly configured resources.
Polaris: a resource analysis tool with a nice dashboard that can also be used as a validating webhook or CLI tool.
Kube-score: a static code analysis tool that works with Helm, Kustomize, and standard YAML files.
Popeye: read-only utility tool that scans Kubernetes clusters and reports potential issues with configurations.
This section contains the steps involved in building and deploying the application.
Checking out code from GitHub
Maven Build process (includes the Junit tests):
Apache Maven: used to manage dependencies for projects. Maven can be installed as a command-line tool.
Creating Artifacts (EAR) on successful build process:
An artefact is an assembly of any project assets that you put together to test, deploy or distribute your software solution or its part. Examples are a collection of compiled Java classes or a Java application packaged in a Java archive, a web application as a directory structure or a web application archive, etc. An artefact can be an archive file or a directory structure that includes the following structural elements:
Compilation output for one or more of your modules
Libraries included in module dependencies
Collections of resources (web pages, images, descriptor files, etc.)
Other artefacts
Individual files, directories and archives.
Deploy the same to the respective environments:
Maven deploy to Nexus - Nexus is the option for hosting third-party artefacts, as well as for reusing internal artefacts across development streams. Nexus (Sonatype) is a repository manager - it allows you to proxy, collect, and manage your dependencies so that you are not constantly juggling a collection of JARs. It makes it easy to distribute your software. Internally, you configure your build to publish artefacts to Nexus and they then become available to other developers.
Inside ERP Stack: Fig.1.0: Graphical Representation of ERP Architecture
1.1 Load Balancer - A load balancer is a device that distributes network or application traffic across a cluster of servers. Load balancing improves responsiveness and increases the availability of applications. 1.2 Apache HTTP Server - a cross-platform web server. A web server is the software application that receives your request to access a web page. It runs a few security checks on your HTTP request and takes you to the web page.
2.1 Application Server - An application server is a software framework that provides both facilities to create web applications and a server environment to run them. 2.2 WildFly - a Java EE 8 certified application server. It provides a list of services such as:
JDBC connection pool
ArtemisMQ - messaging broker
Resource adapter
EJB container - where you can deploy remote services
Undertow - lightweight and performant web server
Batch job scheduler to execute tasks and jobs
Redis cache for (Tokens, auth, sessions, etc.,)
Elastic Search
Postgres as DB
Our ERP application architecture follows the 3-tier architecture for web applications*.
This section is to be referred to only if you want the application to run using any IP address or domain name.
Domains should be registered with hosts
Sub-domains must be created and should point to the Load Balancer IP (elasticIP)
Sub-domains are typically multi-tenant based, environment based (DEV/QA/UAT), and others like issues.jira, etc. Create name-based virtual hosts for the sub-domains, which helps in picking the right application servers and schemas (tenant.env.domain = name; the name maps to hosts, schemas, etc., so the application is reached at the right host).
To access the application using IP address:
Have an entry in the table (eg_city) in the database with the IP address of the machine where the application server is running (for example, domainurl="172.16.2.164") to access the application using the IP address. Access the application using the URL http://172.16.2.164:8080/egi/ where 172.16.2.164 is the IP and 8080 is the port of the machine where the application server is running.
To access the application using a domain name:
Have an entry in the table (eg_city) in the database with a domain name (for example, domainurl="www.egoverpphoenix.org") to access the application using a domain name. Add the entry in the hosts file of your system with details as 172.16.2.164 www.egoverpphoenix.org (this needs to be done both on the server machine and on the machines from which the application needs to be accessed, since this is not a public domain). Access the application using the URL http://www.egoverpphoenix.org:8080/egi/ where www.egoverpphoenix.org is the domain name and 8080 is the port of the machine where the application server is running. Always start the WildFly server with the command below to access the application using the IP address or domain name: nohup ./standalone.sh -b 0.0.0.0 &
Two ways of Deployments
Manual Deployment - Copy the EAR files on the deployment folder and start the server.
Hot Deployment - use the WildFly management console (it always listens on port 9990) and, using curl, upload and publish the EAR.
Release Process (Fig.2.0: Release Process Diagram)
*References: 3-Tier Architecture in ERP:
A typical enterprise application consists of at least three different types of components:
Presentation layer – Components that handle HTTP requests and implement either a (REST) API or an HTML‑based web UI. In an application that has a sophisticated user interface, the presentation tier is often a substantial body of code.
Business logic layer – Components that are the core of the application and implement the business rules.
Data‑access layer – Components that access infrastructure components such as databases and message brokers.
A three-tier architecture is a client-server architecture in which the functional process logic, data access, computer data storage and user interface are developed and maintained as independent modules on separate platforms. Three-tier architecture is a software design pattern and a well-established software architecture.
Fig.3.0: AWS 3-Tier Architecture Diagram
A “Resource Request” and a “Resource Limit” can be set when defining how many resources a container within a pod should receive.
Containerising applications and running them on Kubernetes doesn’t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale-out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the “noisy neighbour” problem in a Kubernetes Cluster.
To put things simply, a resource request specifies the minimum amount of resources a container needs to successfully run. Thought of in another way, this is a guarantee from Kubernetes that you’ll always have this amount of either CPU or Memory allocated to the container.
Why would you worry about the minimum amount of resources guaranteed to a pod? Well, it's to help prevent one container from using up all the node's resources and starving the other containers of CPU or memory. For instance, if I had two containers on a node, one container could request 100% of that node's processor. Meanwhile, the other container would likely not be working very well because the processor is being monopolized by its "noisy neighbour".
What a resource request can do is ensure that at least a small part of that processor's time is reserved for both containers. This way, if there is resource contention, each pod has a guaranteed minimum amount of resources with which to keep functioning.
As you might guess, a resource limit is the maximum amount of CPU or memory that can be used by a container. The limit represents the upper bounds of how much CPU or memory that a container within a pod can consume in a Kubernetes cluster, regardless of whether or not the cluster is under resource contention.
Limits prevent containers from taking up more resources on the cluster than you’re willing to let them.
As a general rule, all containers should have a request for memory and CPU before being deployed to a cluster. This ensures that if resources are running low, your container can still do the minimum amount of work needed to stay healthy until those resources free up again (hopefully).
Limits are often used in conjunction with requests to create a “guaranteed pod”. This is where the request and limit are set to the same value. In that situation, the container will always have the same amount of CPU available to it, no more or less.
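For reference, a minimal pod spec showing how requests and limits are declared might look like the following (the name, image and values are illustrative); setting the request and limit to the same value, as here, is what produces a “guaranteed” pod:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # illustrative name
spec:
  containers:
    - name: app
      image: nginx                 # placeholder image
      resources:
        requests:
          cpu: 500m                # minimum CPU guaranteed to the container
          memory: 256Mi            # minimum memory guaranteed to the container
        limits:
          cpu: 500m                # maximum CPU; equal to the request, so the pod is guaranteed
          memory: 256Mi            # maximum memory the container may use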
At this point, you may be thinking about adding a high “request” value to make sure you have plenty of resources available for your container. This might sound like a good idea, but it can have dramatic consequences for scheduling on the Kubernetes cluster. If you set a high CPU request, for example 2 CPUs, then your pod will ONLY be schedulable on Kubernetes nodes that have 2 full CPUs available that aren’t reserved by other pods’ requests. In the example below, the 2 vCPU pods couldn’t be scheduled on the cluster; however, if you were to lower the “request” amount to, say, 1 vCPU, they could.
Let us try out using a CPU limit on a pod and see what happens when we try to request more CPU than we’re allowed to have. Before we set the limit, though, let us look at a pod with a single container under normal conditions. I’ve deployed a resource consumer container in my cluster and, by default, you can see that it is using 1m of CPU and 6Mi of memory.
NOTE: CPU is measured in millicores, so 1000m = 1 CPU core. Memory is measured in mebibytes (Mi).
Ok, now that we have seen the “no-load” state, let us add some CPU load by making a request to the pod. Here, I’ve increased the CPU usage on the container to 400 millicores.
After the metrics start coming in, you can see that I’ve got roughly 400m used on the container as you’d expect to see.
Now I’ve deleted the container and we’ll edit the deployment manifest so that it has a limit on CPU.
After redeploying the container and again increasing my CPU load to 400m, we can see that the container is throttled to 300m instead. I’ve effectively “limited” the resources the container could consume from the cluster.
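A sketch of the relevant fragment of the deployment’s container spec after that edit; the 300m limit matches the throttling described above, while the request value is an assumption added for illustration:

resources:
  requests:
    cpu: 100m        # assumed request value; the walkthrough does not state it
  limits:
    cpu: 300m        # the container is throttled once it tries to use more than 300m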
OK, next, I’ve deployed two pods into my Kubernetes cluster and those pods are on the same worker node for a simple example about contention. I’ve got a guaranteed pod that has 1000m CPU set as a limit but also as a request. The other pod is unbounded, meaning there is no limit on how much CPU it can utilize.
After the deployment, each pod is really not using any resources as you can see here.
We make a request to increase the load on my non-guaranteed pod.
And if we look at the container's resources, you can see that even though my container wants to use 2000m of CPU, it is only actually using 1000m. The reason is that the guaranteed pod is guaranteed 1000m of CPU, whether or not it is actively using it.
Kubernetes uses Resource Requests to set a minimum amount of resources for a given container so that it can be used if it needs it. You can also set a Resource Limit to set the maximum amount of resources a pod can utilize.
Taking these two concepts and using them together can ensure that your critical pods always have the resources that they need to stay healthy. They can also be configured to take advantage of shared resources within the cluster.
Be careful not to set resource requests too high, so that the Kubernetes scheduler can still schedule these pods. Good luck!
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
This section contains architectural details about DIGIT deployment. It discusses the various activities in a sequence of steps to provision required infra and deploy DIGIT.
Every code commit is reviewed and squash-merged to branches through Pull Requests.
Commits trigger the CI pipeline, which ensures code quality, vulnerability assessments and CI tests before building the artefacts.
Artefacts are version-controlled using semantic versioning based on the nature of the change.
After successful CI, Jenkins bakes the Docker images with the versioned artefacts and pushes them to the Docker registry.
The deployment pipeline pulls the built image and pushes it to the corresponding environment.
DIGIT provides helm charts following the standard helm approach to ease managing service-specific configs, customisations, switches/toggles, secrets, etc.
A Golang-based deployment script reads the values from the helm chart templates and deploys them into the cluster.
Each environment has one master yaml template that defines all the services to be deployed and their dependencies such as configs, env, secrets, DB credentials, persistent volumes, manifests, routing rules, etc. (a hypothetical fragment is sketched below).
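A hypothetical fragment of such an environment master template is shown below; every key and value here is illustrative only, and the actual structure is defined by the helm charts in the DIGIT DevOps repository:

# Hypothetical environment template fragment (keys and values are illustrative)
global:
  domain: dev.digit.example.net      # environment domain
  namespaces-create: true            # whether namespaces should be created
egov-mdms-service:                   # one entry per service to be deployed
  replicas: 1
  db-secret: db-credentials          # reference to DB credentials
  persistent-volumes: []             # volumes, routing rules, etc. as required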
This page provides information on how to deploy DIGIT services on Kubernetes and how to prepare deployment manifests for various services along with their configurations and secrets. It also discusses the maintenance of environment-specific changes.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
In Kubernetes, an Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services.
This lets you consolidate your routing rules into a single resource. For example, you might want to send requests to example.com/api/v1/ to an api-v1 service, and requests to example.com/api/v2/ to the api-v2 service. With an Ingress, you can easily set this up without creating a bunch of LoadBalancers or exposing each service on the Node.
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
For clarity, this guide defines the following terms:
Node: A worker machine in Kubernetes, part of a cluster.
Cluster: A set of Nodes that run containerized applications managed by Kubernetes. For this example, and in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress controllers operate slightly differently.
An Ingress resource example:
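The resource itself is not reproduced on this page, so here is a minimal Ingress along the lines of the example in the upstream Kubernetes documentation (the service name and annotation are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test           # placeholder Service name
                port:
                  number: 80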
Each HTTP rule contains the following information:
A list of paths (for example, /testpath), each of which has an associated backend defined with a service.name and a service.port.name or service.port.number. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
A defaultBackend is often configured in an Ingress controller to service any requests that do not match a path in the spec.
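One way to do this is to set spec.defaultBackend on the Ingress itself; a minimal sketch with a placeholder Service name:

spec:
  defaultBackend:
    service:
      name: default-http-backend     # placeholder Service that handles unmatched requests
      port:
        number: 80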
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
eGovernments Foundation transforms urban governance with the use of scalable and replicable technology solutions that enable efficient and effective municipal operations, better decision making, and contact-less urban service delivery.
Our comprehensive software products enable Governments to put their resources to efficient use by minimising overheads. We also help bring in transparency, accountability and citizen centricity in the delivery of Government services.
eGovernments Foundation has been at the forefront of implementing eGovernance solutions since 2003. Our products have been serving over 325 ULBs across the country, and our time-tested products have impacted the ULBs in a large way. We have also been involved in several eGovernance initiatives in the country.
Our primary business motivator is to increase the footprint of eGovernance across the country and help adoption in as many ULBs as possible. Going opensource with our products is a measure in this direction. It also gives us the ability to tap into the immense talent pool in India for strengthening and improving our cities. Open source also blends well with our ethical fabric of being open and transparent in our business.
Report issues via the project issue tracker.
The eGov suite is released under version 3.0 of its open-source license.
Once the cluster is ready and healthy you can start deploying backbones services.
Deploy the configuration and deployment for the following list of services:
Backbone (Redis, ZooKeeper-v2, Kafka-v2, elasticsearch-data-v1, elasticsearch-client-v1, elasticsearch-master-v1)
Gateway (Zuul, nginx-ingress-controller)
Understanding of VM Instances, LoadBalancers, SecurityGroups/Firewalls, nginx, DB Instance, Data Volumes.
Experience with Kubernetes, Docker, Jenkins, helm, golang, Infra-as-code.
Deploy the configuration and deployment of the backbone services:
Clone the DevOps git repo. Copy the existing environment yaml file and the existing secrets yaml file to a new environment name (for example, <env>.yaml and <env>-secrets.yaml).
Modify the global domain and set namespaces create to true
Modify the below-mentioned values for each of the backbone services (an illustrative sketch follows this list):
E.g. for kafka-v2, if you are using AWS as the cloud provider, change the respective volume IDs and zones.
(You will get the volume IDs and zone details either from the remote state bucket or from the AWS portal.)
E.g. for kafka-v2, if you are using Azure as the cloud provider, change the diskName and diskUri.
(You will get the disk details either from the remote state bucket or from the Azure portal.)
E.g. for kafka-v2, if you are using iSCSI, change the targetPortal and iqn.
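Purely as a hypothetical sketch, the provider-specific values for kafka-v2 might be grouped along these lines; the exact keys come from the kafka-v2 helm chart in the DevOps repo, and every value below is a placeholder to be replaced with the details from your remote state bucket or cloud portal:

kafka-v2:
  persistence:
    aws:
      volumeIds:
        - vol-0abc123def456789a        # placeholder EBS volume ID
      zone: ap-south-1a                # placeholder availability zone
    azure:
      diskName: kafka-disk-0           # placeholder managed disk name
      diskUri: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/kafka-disk-0
    iscsi:
      targetPortal: 10.0.0.10:3260     # placeholder iSCSI portal
      iqn: iqn.2001-04.com.example:storage.kafka.disk0   # placeholder IQN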
Deploy the backbone services using go command
Replace the “dev” environment name with your respective environment name.
Flags:
e --- Environment name
p --- Print the manifest
c --- Enable Cluster Configs
Check the Status of pods
This section contains the steps involved in building and deploying the application. FAQs related to various deployment and development issues are also discussed.
Clone the eGov repository (development is done on the develop branch).
$ mkdir -p ${HOME}/egovgithub && cd egovgithub
$ git clone -b develop --single-branch https://github.com/egovernments/egov-smartcity-suite.git
First time setup which will install the stacks, build the source code, and deploys the artefact to Wildfly
$ cd ${HOME}/egovgithub/egov-smartcity-suite && make all
To install the prerequisites Phoenix stacks
$ cd ${HOME}/egovgithub/egov-smartcity-suite && make install
To build the source code base
$ cd ${HOME}/egovgithub/egov-smartcity-suite && make build
To deploy the artefact to WILDFLY
$ cd ${HOME}/egovgithub/egov-smartcity-suite && make deploy
Prerequisites
Install
Install
Install
Install
Install
Install
Database Setup
Create a database and user in postgres
Create a schema called generic
Execute ALTER ROLE <your_login_role> SET search_path TO generic,public;
Elastic Search Setup
Elastic search server properties need to be configured in elasticsearch.yml under <ELASTICSEARCH_INSTALL_DIR>/config:
## Your local elasticsearch clustername, DO NOT use default clustername
cluster.name: elasticsearch-<username>
## This is the default port
transport.tcp.port: 9300
NB: <username> is the user name of the logged-in system user; enter the below command in a terminal to find it.
$ id -un
Building Source
Clone the eGov repository (development is done on the develop branch).
$ mkdir egovgithub
$ cd egovgithub
$ git clone https://github.com/egovernments/egov-smartcity-suite.git
$ git checkout develop
Change directory to <CLONED_REPO_DIR>/egov/egov-config/src/main/resources/config/ and create a file called egov-erp-<username>.properties and enter the following values based on your environment config.
## comma separated list of host names
elasticsearch.hosts=localhost
elasticsearch.port=9300
elasticsearch.cluster.name=elasticsearch-<username>
If required, you can override any default settings available in /egov/egov-egi/src/main/resources/config/application-config.properties by overriding the value in egov-erp-<username>.properties.
Change directory back to <CLONED_REPO_DIR>/egov
Run the following commands. This will clean, compile, test, migrate the database and generate ear artefact along with jars and wars appropriately.
mvn clean package -s settings.xml -Ddb.user=<db_username> -Ddb.password=<db_password> -Ddb.driver=org.postgresql.Driver -Ddb.url=<jdbc_url>
Redis Server Setup
By default the eGov suite uses an embedded redis server (this works only on Linux and OS X). To make the eGov suite work on Windows OS, or if you want to run the redis server standalone, follow the installation steps below.
Installing redis server on Linux
sudo apt-get install redis-server
Once installed, set the below property in egov-erp-override.properties or egov-erp-<username>.properties.
## true by default
redis.enable.embedded=false
To control the redis server host and port, use the following property values (only required if installed with non-default values):
## Replace <your_redis_server_host> with your redis host, localhost by default
redis.host.name=<your_redis_server_host>
## Replace <your_redis_server_port> with your redis port, 6379 by default
redis.host.port=<your_redis_server_port>
Deploying Application
Configuring JBoss Wildfly
In case properties need to be overridden, edit the below file (this is only required if egov-erp-<username>.properties is not present):
<JBOSS_HOME>/modules/system/layers/base/
org
└── egov
    └── settings
        └── main
            ├── config
            │   └── egov-erp-override.properties
            └── module.xml
Update settings in standalone.xml under <JBOSS_HOME>/standalone/configuration
Check Datasource setting is in sync with your database details.
<connection-url>jdbc:postgresql://localhost:5432/<YOUR_DB_NAME></connection-url>
<security>
  <user-name><YOUR_DB_USER_NAME></user-name>
  <password><YOUR_DB_USER_PASSWORD></password>
</security>
Check the HTTP port configuration is correct:
<socket-binding name="http" port="${jboss.http.port:8080}"/>
Change directory back to <CLONED_REPO_DIR>/egov/dev-utils/deployment/ and run the below command
$ chmod +x deploy.sh
$ ./deploy.sh
Alternatively, this can be done manually by following the below steps.
Copy the generated exploded ear <CLONED_REPO_DIR>/egov/egov-ear/target/egov-ear-<VERSION>.ear in to your JBoss deployment folder <JBOSS_HOME>/standalone/deployments
Create or touch a file named egov-ear-<VERSION>.ear.dodeploy to make sure JBoss picks it up for auto-deployment
Start the wildfly server by executing the below command
$ cd <JBOSS_HOME>/bin/
$ nohup ./standalone.sh -b 0.0.0.0 &
On Mac OS X, it may also be required to specify -Djboss.modules.system.pkgs=org.jboss.byteman
-b 0.0.0.0 is only required if the application is accessed using an IP address or domain name.
Monitor the logs and, in case of successful deployment, hit http://localhost:<YOUR_HTTP_PORT>/egi in your favourite browser.
Log in using the username egovernments and the password demo.
Accessing the application using IP address and domain name
This section is to be referred to only if you want the application to run using any IP address or domain name.
1. To access the application using an IP address:
Have an entry in eg_city table in the database with an IP address of the machine where the application server is running (for ex: domainurl="172.16.2.164") to access the application using the IP address.
2. To access the application using the domain name:
Always start the wildfly server with the below command to access the application using an IP address or domain name:
nohup ./standalone.sh -b 0.0.0.0 &
This section gives more details regarding developing and contributing to the eGov suite.
Repository Structure
egov - folder contains all the source code of eGov opensource projects
Check out sources
git clone git@github.com:egovernments/egov-smartcity-suite.git or git clone https://github.com/egovernments/egov-smartcity-suite.git
Prerequisites
Install your favourite IDE for the Java project. Recommended Eclipse or IntelliJ IDEA
Note: Please check the [eGov Tools Repository] for any of the above software installables before downloading from the Internet.
1. Eclipse Deployment
Import the cloned git repo using maven Import Existing Project.
Install Jboss Tools and configure Wildfly Server.
Since jasperreport-related jars are not available in Maven central, we have to tell Eclipse to find the jars in an alternative place. To do that, navigate to Windows -> Preferences -> Maven -> User Settings -> Browse Global Settings and point to the settings.xml available under egov-erp/.
Now add your EAR project into the configured Wildfly server.
Start Wildfly in debug mode, this will enable hot deployment.
2. Intellij Deployment
Install Intellij
Open project
In project settings set JDK to 1.8
Add a run configuration for JBoss and point the JBOSS home to the wildfly unzipped folder
Run
3. Database Migration Procedure
Any new sql files created should be added under directory <CLONED_REPO_DIR>/egov/egov-<javaproject>/src/main/resources/db/migration
Core product DDL and DML should be added under <CLONED_REPO_DIR>/egov/egov-<javaproject>/src/main/resources/db/migration/main
Core product sample data DML should be added under <CLONED_REPO_DIR>/egov/egov-<javaproject>/src/main/resources/db/migration/sample
All SQL scripts should be named in the following format.
Format V<timestamp-in-YYYYMMDDHHMMSS-format>__<module-name>_<description>.sql
DB migration happens automatically when the application server starts; to run it as part of the maven build, use the maven command given above.
Migration file name sample
V20150918161507__egi_initial_data.sql
Note: This system is supported on the following platforms and browsers.
OS:-
Linux (Recommended)
Mac
Windows (if the Redis server is installed standalone).
Browser:-
Chrome (Recommended)
Firefox
Internet Explorer
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
This section addresses the key areas of concern and its potential remedial steps.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
A good, meaningful logging system is one that everyone can use and understand. This section describes how DIGIT logging is configured.
Logging is one of the most complicated concerns in our microservices. Microservices should stay as pure as possible, so we should avoid pulling in libraries where we can (logging, monitoring and resilience library dependencies, for example): any such dependency can change at any time, and that change usually has to be repeated across the other microservices, which is a lot of work. Instead, we need to handle these concerns in a more generic way. For logging, that way is stdout logging. For most programming languages, logging to stdout is the default and usually requires no additional change at the beginning.
In MSA, services interact with each other through HTTP endpoints. End users only know about the API contract (request/response) and don’t know exactly how the services work.
Service A will call service B and service C. Once the request chain is complete, service X might be able to respond to the end user who initiated the request. Suppose you already have a logging system that captures error logs for each service. If you find an error in service X, it helps to know exactly whether the error was caused by service A or service C. If the error message is informative enough, that may suffice; if not, the only way to reproduce the error is to know all the requests and services involved. Once you implement a Correlation ID, you only need to look for that ID in the logging system, and you will get all the logs from the services that were part of the main request to the system.
The application usually adds more features as time goes by, and along with this many new services get created (our project started with 12 services and now has 20). These services could be hosted on different servers. Imagine what happens if the logs are stored on those different servers: you would have to access each individual server to read the logs and then try to correlate the problems. Instead, by centralising the logging data in one place, you have everything you need in one dashboard, which saves a lot of time.
Applying MSA allows you to use different technology stacks for each service. For example, you can use .Net Core for the Buy service, Java for the Shipping service and Python for the Inventory service. However, it also affects the log format of each service, and it gets even more complicated as some logs need more fields than others.
Based on my experience, I’d like to suggest JSON as a standard format for logging data. JSON allows you to have multiple levels for your data so that, when necessary, you can get more semantic info in a single log event.
When we look at a log we want to know everything: what? when? where? even who? (Not so that we can blame whoever caused the problem, but because contacting the right person helps you resolve issues quicker.) You can log all the data you get, but the specific fields below help to figure out what really needs to be logged; a sample JSON log event follows the list.
When? — Time (with full date format): it doesn’t have to be in UTC, but the timezone has to be the same for everyone who needs to look at the logs.
What? — Stack errors: all exception objects should be passed to the logging system.
Where? — Besides the service name (since we are using MSA), we also need the function name and the class or file name where the error occurred. Don’t guess anything; it wastes your time.
Who? — The IP address of the client and the user name, if any. (Again, make sure you don’t use this information to blame your teammates.)
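Putting these fields together, a structured log event in JSON (the format suggested above) might look like the following; all field names and values are illustrative, not a DIGIT standard:

{
  "timestamp": "2021-06-01T10:15:30.123+05:30",
  "level": "ERROR",
  "service": "egov-property-service",
  "correlationId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "class": "PropertyController",
  "function": "createProperty",
  "message": "Failed to persist property record",
  "stackTrace": "org.postgresql.util.PSQLException: ...",
  "clientIp": "10.1.2.3",
  "user": "employee-01"
}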
Bear in mind that the logging system is not only for developers; it is also used by others (system admins, testers and so on), so you should log data that everyone can use and understand.
There are two techniques for logging in MSA: each service implements the logging mechanism by itself, or one central logging service is used for all services. Both have pros and cons, and both approaches are used in our project.
1. Implement logging in each service
With this approach, we can easily define the logging strategy/library for each service. For example, for a service written in Java we can use Log4j.
The problem with this approach is that it requires each service to implement its own logging methods. Not only is this redundant, but it also adds complexity and increases the difficulty of changing logging behaviour across multiple services.
2. Implement central Logging service
If you don’t want to implement logging in each service separately, you can consider implementing a central service for logging. This service helps with processing, formatting and storing log data.
This approach can reduce the complexity of your application. However, you might lose your log data if that central service is down.
This section contains docs and information resources that guide you through the key DevOps concepts and its role in managing the DIGIT platform.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
As all the DIGIT services are containerized and deployed on Kubernetes, we need to prepare deployment manifests for them. These can be found in the DIGIT DevOps repository.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type NodePort or LoadBalancer.
You must have an Ingress controller to satisfy an Ingress. Creating only an Ingress resource has no effect.
You may need to deploy an Ingress controller such as ingress-nginx. You can choose from a number of Ingress controllers.
As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields. The name of an Ingress object must be a valid DNS subdomain name. Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which is the rewrite-target annotation. Different Ingress controllers support different annotations. Review the documentation for your choice of Ingress controller to learn which annotations are supported.
The Ingress spec has all the information needed to configure a load balancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. An Ingress resource only supports rules for directing HTTP(S) traffic.
An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address specified. If a host is provided, the rules apply to that host.
A backend is a combination of Service and port names. HTTP (and HTTPS) requests to the Ingress that match the host and path of the rule are sent to the listed backend.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Installing redis server on Windows: there is no official installer available for Windows OS; follow the instructions in the linked reference to install redis on Windows.
Download and unzip the customized JBoss Wildfly server from the linked location. This server contains some additional jars that are required for the ERP.
Sometimes you log requests from end users that contain PII (personally identifiable information). Be careful: logging such data might violate data privacy regulations.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Prometheus is an open-source system monitoring and alerting toolkit originally built at SoundCloud.
The prometheus-operator chart includes multiple components and is suitable for a variety of use cases.
The default installation is intended to suit monitoring the Kubernetes cluster the chart is deployed onto. It closely matches the kube-prometheus project.
It provides service monitors to scrape internal Kubernetes components:
kube-apiserver
kube-scheduler
kube-controller-manager
etcd
kube-dns/coredns
kube-proxy
In addition to the service monitors, the chart also includes dashboards and alerts.
Deployment steps
Add environment variable to the respective env config file
Update the respective configs branch (for example, for qa.yaml, the qa branch).
Add monitoring-dashboards folder to respective configs branch.
Enable the nginx-ingress monitoring and redeploy the nginx-ingress.
Add the alertmanager secret in the respective <env>-secrets.yaml file.
If you want, you can change the slack channel and other details like group_wait, group_interval and repeat_interval according to your values (an illustrative fragment follows these steps).
Deploy the prometheus-operator using the go command or deploy it using Jenkins.
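For reference, an Alertmanager configuration fragment of the kind these steps refer to might look like the following; the exact key under which it sits in <env>-secrets.yaml depends on the DIGIT helm charts, and the channel and durations are placeholders:

alertmanager:
  config:
    route:
      receiver: slack-notifications
      group_wait: 30s          # how long to wait before sending the first notification for a group
      group_interval: 5m       # how long to wait before sending an update for a group
      repeat_interval: 12h     # how long to wait before re-sending a still-firing alert
    receivers:
      - name: slack-notifications
        slack_configs:
          - channel: '#digit-alerts'      # placeholder Slack channel
            send_resolved: true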
To create a new panel in the existing dashboard
Log in to the dashboard and click on "Add panel".
Set all the required queries and apply the changes. Export the JSON file by clicking on the save dashboard option.
Update the existing *-dashboard.json file in the configs monitoring-dashboards folder with the newly exported JSON file.
We will discuss the distributed tracing system Jaeger and how it helps in troubleshooting DIGIT.
Distributed tracing is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance.
OpenTracing has been a key capability when it comes to microservices-based distributed systems like DIGIT. We’ll start with an introduction to OpenTracing, explaining what it is and why it is important. We shall also set up Jaeger and learn to use it for monitoring and troubleshooting.
Microservice architecture has now become the obvious choice for application developers. In a microservice architecture, a monolithic application is broken down into a group of independently deployed services; in simple words, the application becomes a collection of microservices. When we have a large number of such intertwined microservices working together, it is almost impossible to map their inter-dependencies and understand the execution of a request.
In case of a failure in a monolithic application, it is much easier to understand the path of a transaction and do the root cause analysis with the help of logging frameworks. But in a microservice architecture, logging alone fails to deliver the complete picture.
Is this service the first one in the call chain? How do I span all these services to get insight into the application? With questions like these, it becomes a significantly larger problem to debug a set of interdependent distributed services in comparison to a single monolithic application, making OpenTracing more and more popular.
The OpenTracing API provides a standard, vendor-neutral framework for instrumentation. This means that if a developer wants to try out a different distributed tracing system, then instead of repeating the whole instrumentation process for the new distributed tracing system, the developer can simply change the configuration of the Tracer.
Here are some basic terminologies of Opentracing:
Span — It represents a logical unit of work that has an operation name, the start time of the operation, and the duration.
Trace — A Trace tells the story of a transaction or workflow as it propagates through a distributed system. It is simply a set of spans sharing a TraceID. Each component in a distributed system contributes its own span.
OpenTracing is a way for services to “describe and propagate distributed traces without knowledge of the underlying OpenTracing implementation.”
Let us take the example of a service like the egov-property service (or any other DIGIT service). A service like this requires many other microservices to check that the location is available, proper payment credentials are received, and enough details exist for the ULB to process the property tax. If any one of those microservices fails, the entire transaction fails. In such a case, having logs just for the main property service wouldn’t be very useful for debugging. However, if you were able to analyse each service, you wouldn’t have to scratch your head to troubleshoot which microservice failed and what made it fail.
In real life, applications are even more complex and with the increasing complexity of applications, monitoring the applications has been a tedious task. Opentracing helps us to easily monitor:
Spans of services
Time taken by each service
Latency between the services
Hierarchy of services
Errors or exceptions during execution of each service.
Jaeger is used for monitoring and troubleshooting microservices-based distributed systems, including:
Distributed transaction monitoring
Performance and latency optimization
Root cause analysis
Service dependency analysis
Distributed context propagation
Jaeger Client Libraries — Jaeger clients are language-specific implementations of the OpenTracing API.
Agent — The Jaeger agent is a network daemon that listens for spans sent over UDP, which it batches and sends to the collector. It is designed to be deployed to all hosts as an infrastructure component. The agent abstracts the routing and discovery of the collectors away from the client.
Collector — The Jaeger collector receives traces from Jaeger agents and runs them through a processing pipeline. Currently, the pipeline validates traces, indexes them, performs transformations, and finally, stores them. Jaeger’s storage is a pluggable component which currently supports Cassandra, Elasticsearch, and Kafka.
Query — Query is a service that retrieves traces from storage and hosts a UI to display them.
Ingester — Ingester is a service that reads from Kafka topic and writes to another storage backend (Cassandra, Elasticsearch).
First, install Jaeger Client on your machine:
Now, let’s run Jaeger backend as an all-in-one Docker image. The image launches the Jaeger UI, collector, query, and agent:
TIP: To check if the docker container is running, use: docker ps
Once the container starts, open http://localhost:16686/ to access the Jaeger UI. The container runs the Jaeger backend with an in-memory store, which is initially empty, so there is not much we can do with the UI right now since the store has no traces.
1. Create a Python program to create Traces
Let’s generate some traces using a simple python program. You can clone the Jaeger-Opentracing repository given below for a sample program that is used in this blog.
The Python program takes a movie name as an argument and calls three functions that get the cinema details, movie showtime details, and finally, book a movie ticket.
It creates some random delays in all the functions to make it more interesting, as, in reality, the functions would take a certain time to get the details. Also, the function throws random errors to give us a feel of how the traces of a real-life application may look like in case of failures.
Here is a brief description of how OpenTracing has been used in the program:
Initializing a tracer:
Using the tracer instance:
Starting new child spans using start_span:
Using Tags:
Using Logs:
2. Run the python program
Now check your Jaeger UI; you can see a new service “booking” added. Select the service and click on “Find Traces” to see the traces of your service. Every time you run the program a new trace is created.
You can now compare the duration of traces through the graph shown above. You can also filter traces using the “Tags” section under “Find Traces”. For example, setting the “error=true” tag will filter out all the jobs that have errors.
To view the detailed trace, you can select a specific trace instance and check details like the time taken by each service, errors during execution and logs.
In this blog, we’ve described the importance and benefits of OpenTracing, one of the core pillars of modern applications. We also explored how the distributed tracer Jaeger collects and stores traces while revealing inefficient portions of our applications. It is fully compatible with the OpenTracing API and has a number of clients for different programming languages, including Java, Go, Node.js, Python, PHP, and more.
Software Architecture Diagram
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.