Details coming soon...
Learn all about setting up DIGIT and its various components. Details for these pages are coming soon.
This section contains the documents and information required to configure the DIGIT platform.
Learn how to configure the DIGIT platform. Partner with us to enhance and integrate more into the platform.
A summary of the DIGIT open-source Git repos and their purpose. Partners/contributors may choose to fork or clone them depending on need and capacity.
Details coming soon...
Learn how to set up DIGIT master data.
Details coming soon...
MDMS stands for Master Data Management Service, one of the applications in the eGov DIGIT core group of services. This service aims to reduce the time developers spend writing code to store and fetch master data (primary data needed for module functionality) that has no business logic associated with it.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of handling JSON data is an added advantage for understanding the service.
The MDMS service reads data from a set of JSON files at a pre-specified location.
The location can either be online (readable JSON files hosted online) or offline (JSON files stored in local storage).
The JSON files follow a prescribed format, and the data is stored in a map: the tenantId of the file serves as the key, and a map of master data details serves as the value.
Once the data is stored in the map, it can be retrieved by making an API request to the MDMS service. Filters can be applied in the request to retrieve data based on the existing fields of the JSON.
To deploy changes in MDMS data (adding new data, updating existing data, or deleting it), the service needs to be restarted.
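For illustration, below is a minimal sketch of an MDMS search request, assuming the common DIGIT MDMS v1 contract (a POST to /egov-mdms-service/v1/_search); the tenant, module, master name and filter shown are placeholders, not values from this document:

```json
{
  "RequestInfo": {},
  "MdmsCriteria": {
    "tenantId": "pb",
    "moduleDetails": [
      {
        "moduleName": "common-masters",
        "masterDetails": [
          { "name": "Department", "filter": "[?(@.active == true)]" }
        ]
      }
    ]
  }
}
```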
The config JSON files to be written should follow the rules listed below:
The config files should have a .json extension.
The file should specify the tenantId, module name and master name before defining the data.
| Title | Description |
| --- | --- |
| tenantId | Serves as the key |
| moduleName | Name of the module to which the master data belongs |
| MasterName | Substituted by the actual name of the master data; the array succeeding it contains the actual data |
Example Config JSON for “Billing Service”
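A sketch of such a config, following the format described above, is given below; the master name (BusinessService) and the fields inside the entry are illustrative assumptions rather than the exact production file:

```json
{
  "tenantId": "pb",
  "moduleName": "BillingService",
  "BusinessService": [
    {
      "code": "TL",
      "businessService": "TradeLicense",
      "active": true
    }
  ]
}
```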
| Title | Link |
| --- | --- |
| Reference Doc Link 1 | MDMS-Service |
| Reference Doc Link 2 | MDMS-Rewritten |
| API Contract Reference | |
Configuring master data for a new module requires creating a new module entry in the master config file and adding the master data. For better organization, create all the master data files belonging to the module in the same folder. Keeping them in the same folder is not mandatory; the grouping is based on the moduleName in the master data file.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository where MDMS data is configured.
This data can be used to validate incoming data.
After adding the new module data, the MDMS service needs to be restarted to read the newly added data.
Adding new module
The master config file is structured as below: each key in the master config is a module, and each key within a module is a master.
The new module can be added below the existing modules in the master config file.
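A minimal sketch of how a new module key can sit alongside existing ones in the master config file; only the module-to-master nesting is prescribed by the text above, and the keys inside each master entry (such as masterDataJsonPath) are assumptions:

```json
{
  "BillingService": {
    "BusinessService": { "masterDataJsonPath": "data/pb/BillingService/BusinessService.json" }
  },
  "NewModule": {
    "NewMaster": { "masterDataJsonPath": "data/pb/NewModule/NewMaster.json" }
  }
}
```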
Creating Masters Data
To create a new master, refer to the Adding New Master page.
| Title | Link |
| --- | --- |
| Sample Master config file | |
| Sample Module folder | |
MDMS supports the configuration of data at different levels. When a state is enabled, there can be data common to all the ULBs of the state as well as data specific to each ULB. The data can further be configured at each module level as state-specific or ULB-specific.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of handling JSON data is an added advantage for understanding the service.
State Level Masters are maintained in a common folder.
ULB Level Masters are maintained in separate folders named after the ULB.
Module-specific State Level Masters are maintained in a folder named after the specific module, placed outside the common folder.
To deploy changes (adding new data, updating existing data or deleting it) in MDMS, the MDMS service needs to be restarted.
State Level Master Configuration
The master data common across all ULBs and modules, such as department, designation, etc., are placed under the common-masters folder, which sits under the tenant folder of the MDMS repository.
The master data common across all ULBs but specific to a module are placed in a folder named after that module, placed directly under the tenant folder.
ULB Level Master Configuration
Module data specific to each ULB, such as boundary data, interest, penalty, etc., are configured ULB-wise. There is a folder per ULB under the tenant folder, and all the ULB's module-specific data are placed under this folder.
The content of the pages within this document is designed to help implementation parties and end users provide the required data with minimal interaction and iterations, and to ensure the quality, consistency and shape of the data needed to configure the system.
This page is intended to help stakeholders as given below on data gathering activities.
State Team
eGov Onsite Team/ Implementation Team
ULB Team (Nodal and DEO)
Implementation Partners
The artefacts of this document are the data template of a configurable entity and a page defining the entity template, with guidance on how to fill the template with the required data.
DIGIT environment setup is conducted at two levels.
e.g. .../pb/common-masters/ - here "pb" is the tenant folder name.
e.g. .../pb/TradeLicense/ - here "pb" is the tenant folder name and "TradeLicense" is the module name.
e.g. .../pb/amritsar/TradeLicense/ - here "amritsar" is the ULB folder name and "TradeLicense" is the module name. All the data specific to this module for the ULB are configured inside this folder.
| Title | Link |
| --- | --- |
| API Contract Reference | |
| State Level Common-Master Data | |
| State Level Module Specific Common-Master Data | |
| ULB Specific Data | |
An Urban Local Body (ULB) is defined as a tenant. The information describing the various attributes of a ULB is known as tenant information. These details are required to add the ULB into the system.
| Field | Sample Value |
| --- | --- |
| Sr. No. | 1 |
| ULB Name | Sonepur Nagar Panchayat |
| ULB Code | 47 |
| ULB Grade | Corp |
| City Name | Sonepur |
| City Local Name | Sonepur |
| District Name | Banka |
| District Code | BN47 |
| Region Name | Bihar |
| Region Code | BBD47 |
| Contact Number | 98362532657 |
| Address | Main Hall, Sonepur |
| Latitude | 24.8874° N |
| Longitude | 86.9198° E |
| Email Address | snp@bihar.gov.in |

Data given in the table is sample data.
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | ULB Name | Text | 256 | Yes | Name of the ULB, e.g. Kannur Municipal Corporation/ Saptarishi Municipal Council |
| 2 | ULB Code | Alphanumeric | 64 | Yes | A unique identifier assigned to each ULB. LGD (Local Government Directory) has already assigned codes to urban local bodies, and the same are used here |
| 3 | ULB Grade | Alphanumeric | 64 | Yes | Grade of the ULB, e.g. Corporation, Municipality, Nagar Panchayat etc. |
| 4 | City Name | Text | 256 | Yes | Name of the city/town covered by the ULB, e.g. Kannur/ Saptarishi |
| 5 | City Local Name | Text | 256 | No | Name of the city in the local language, e.g. Telugu, Hindi etc. |
| 6 | District Name | Text | 256 | Yes | Name of the district where the city is situated |
| 7 | District Code | Alphanumeric | 64 | Yes | A unique identifier assigned to each district. LGD (Local Government Directory) has already assigned codes to districts, and the same are used here |
| 8 | Region Name | Text | 256 | No | Name of the region the listed district belongs to |
| 9 | Region Code | Alphanumeric | 64 | No | Unique code of the region, to identify it uniquely |
| 10 | Contact Number | Alphanumeric | 10 | Yes | Phone number of the contact person of the ULB |
| 11 | Address | Text | 256 | Yes | Postal address of the ULB for correspondence |
| 12 | ULB Website | Alphanumeric | 256 | Yes | URL of the website of the ULB |
| 13 | Email Address | Alphanumeric | 64 | No | Email address of the ULB where emails from citizens can be received |
| 14 | Latitude | Alphanumeric | 64 | No | Latitude part of the coordinates of the centroid of the city |
| 15 | Longitude | Alphanumeric | 64 | No | Longitude part of the coordinates of the centroid of the city |
| 16 | GIS Location Link | Text | NA | No | GIS location link of the ULB |
| 17 | Call Center No | Alphanumeric | 10 | No | Call centre contact number of the ULB |
| 18 | Facebook Link | Text | NA | No | Facebook page link of the ULB |
| 19 | Twitter Link | Text | NA | No | Twitter page link of the ULB |
| 20 | Logo File Path | Document | NA | Yes | URL of the logo file path, used to download the logo of the ULB |
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning as given in this document under section 'Data Definition'.
Make sure all the headers, its data type, field size and its definition/ description are understood properly.
In case of any doubt, please reach out to the person who has shared this document with you and discuss the same to clear out the doubts.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Verify the data once again by going through the checklist by taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
To see the common checklist, refer to the Checklist page, which consists of all the activities to be followed to ensure the completeness and quality of data.
This checklist covers the activities which are specific to the entity. There are no checklist activities specific to this entity.
Tenant represents a body in the system. In the municipal system, the state and its ULBs (urban local bodies) are tenants. A ULB represents a city or a town in a state. Tenant configuration is done in MDMS.
Before proceeding with the configuration, ensure the following pre-requisites are met -
Knowledge of JSON and how to write it is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
On the login page, city name selection is required; tenants added in MDMS appear in the city drop-down of the login page.
In reports or on the employee inbox page, the ULB details displayed come from the ULB data added in MDMS.
Modules, i.e. TL, PT, MCS, can be enabled based on the requirements of the tenant.
After adding the new tenant, the MDMS service needs to be restarted to read the newly added data.
Tenants are added in tenant.json. In MDMS, the file tenant.json under the tenant folder holds the details of the state and the ULBs to be added in that state.
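A minimal sketch of a tenant.json entry is given below, assuming fields along the lines discussed in this section (tenant code, name, logoId, ULB grade); any field names beyond those are assumptions:

```json
{
  "tenants": [
    {
      "code": "uk.citya",
      "name": "City A",
      "logoId": "https://s3.ap-south-1.amazonaws.com/uk-egov-assets/uk.citya/logo.png",
      "city": {
        "name": "City A",
        "ulbGrade": "Nagar Panchayat",
        "districtCode": "05"
      }
    }
  ]
}
```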
To enable a tenant, the above data should be pushed into the tenant.json file. Here "ULB Grade" and City "Code" are important fields. ULB Grade can have a set of allowed values that determine the ULB type, e.g. Municipal Corporation (Nagar Nigam), Municipality (municipal council, municipal board, municipal committee) (Nagar Parishad), etc. City "Code" has to be unique to each tenant. This city-specific code is used in all transactions and must not be changed; if changed, the data of previous transactions will be lost.
Naming Convention for Tenant Codes
"code": "uk.citya" follows the pattern StateTenantId.ULBTenantName.
For "logoId": "https://s3.ap-south-1.amazonaws.com/uk-egov-assets/uk.citya/logo.png", the last section of the path should be "/<tenantId>/logo.png". If anything else is used, the logo will not be displayed on the UI. <tenantId> is the tenant code, i.e. "uk.citya".
Localization should be pushed for ULB grade and ULB name. The format is given below.
Localization for ULB Grade
Localization for ULB Name
The format of the localization code for the tenant name is <MDMS_State_Tenant_Folder_Name>_<Tenants_File_Name>_<Tenant_Code> (replace dots with underscores).
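A minimal sketch of a localization upsert payload built from this format, assuming the standard DIGIT localization service endpoint (/localization/messages/v1/_upsert); the code, module and locale shown are illustrative:

```json
{
  "RequestInfo": {},
  "tenantId": "uk",
  "messages": [
    {
      "code": "TENANT_TENANTS_UK_CITYA",
      "message": "City A",
      "module": "rainmaker-common",
      "locale": "en_IN"
    }
  ]
}
```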
Boundary data should be added for the new tenant.
| Title | Link |
| --- | --- |
| tenant.json file | |
| content | |
An email account of the client/state team has to be set up in order to receive/send the email notifications.
To achieve this functionality, an email account has to be set up on the state's own server, since most states would refrain from creating an account with Gmail or another public server. Further, this email account has to be integrated with the various DIGIT modules.
To achieve the above functionality, the below-mentioned details are required.
Sample values for one record: Bihar; POP3; SMTP; SMTP; ****; 192.172.82.12; 192.172.82.12; Auto; 14.
The values mentioned here are sample data.
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Email ID | Alphanumeric | N/A | Yes | Email id which is being configured |
| 2 | Your Name | Text | 256 | Yes | The name on whose behalf the emails would be sent, in order to receive the updates |
| 3 | Account Type | Alphanumeric | 64 | Yes | The email account protocol type which will be used to download messages |
| 4 | Incoming Mail Server | Numeric | (12,2) | Yes | The IP address of the email server through which messages would be received |
| 5 | Outgoing Mail Server (SMTP) | Numeric | (12,2) | Yes | The IP address of the email server through which messages would be sent |
| 6 | Password | Alphanumeric | 64 | Yes | The password of the email server |
| 7 | Incoming Server POP3 Port | Numeric | (12,2) | Yes | The port number through which the emails are received |
| 8 | Outgoing Server SMTP Port | Numeric | (12,2) | Yes | The port number through which the emails are to be sent |
| 9 | Encrypted Connection Type | Alphanumeric | 64 | Yes | The encryption type used for the connection |
| 10 | Days after which the email should be removed from the server | Numeric | (12,2) | Yes | The number of days after which the email should be deleted from the server (not from the local device) |
The below steps can be followed in order to fill the template:
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Ask the state to gather all the data related to the technical configuration from the email server settings.
Get the attached template filled in by the state; sample data is provided in the data table section for reference.
The data would be available in the POP and IMAP account settings at the server level.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | Not Applicable |
For creating a new master in MDMS, create the JSON file with the master data and configure the newly created master in the master config file.
Before proceeding with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository where MDMS data is configured.
After adding the new master, the MDMS service needs to be restarted to read the newly added data.
Creating the Master JSON: the new JSON file needs to contain 3 keys, as shown in the code snippet below. A new master can be created statewide or ULB-wise; the tenant id and the config in the master config file determine this.
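A minimal sketch of the three keys, following the file format described earlier (tenantId, moduleName, and the master name holding an array of entries); the master name and the fields inside the entry are illustrative assumptions:

```json
{
  "tenantId": "pb",
  "moduleName": "NewModule",
  "NewMaster": [
    { "code": "EXAMPLE", "name": "Example entry", "active": true }
  ]
}
```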
Configuring the master config file: the master config file is structured as below. Each key in the master config is a module, and each key in the module is a master.
Each master contains the following data, and the keys are self-explanatory.
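A sketch of a corresponding master config entry; as noted above, each key in the master config is a module and each key within it is a master. The keys inside the master entry (the path and the state-level flag) are assumptions:

```json
{
  "NewModule": {
    "NewMaster": {
      "masterDataJsonPath": "data/pb/NewModule/NewMaster.json",
      "isStateLevel": true
    }
  }
}
```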
| Title | Link |
| --- | --- |
| Sample Master file | |
| Sample Master configuration | |
The domain name is the address through which internet users can access a website, rather than entering the whole IP address in the search bar of the browser.
The domain name is ideally chosen by the state/client, since it is a product to be used for/by them.
The following table shows the information to be shared.
| Domain Name | EXTERNAL-IP |
| --- | --- |
|  | 192.78.98.12 |

Data given in the table is sample data.
Since state governments/clients generally prefer to host the websites on their own servers, this activity is ideally done by them.
| Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- |
| Domain Name | Alphanumeric | 253 | Yes | The name/address used to access the website/module |
| EXTERNAL-IP | Alphanumeric | 32 | Yes | The IP address that has to be mapped to the domain name |
Following are the steps which are to be followed:
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
If the state agrees to host the website on their server, provide them with the 2 columns mentioned in the attached template.
If the state does not host on their own server, then a domain name has to be purchased from an external vendor, and the EXTERNAL-IP address has to be mapped to it.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity:
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | The EXTERNAL-IP address must be provided accurately, without any mistakes | - |
| 2 | Only one domain name and its corresponding IP address have to be provided | - |
Whenever an Android mobile app is developed, it has to be published on the Google Play Store so that users can avail of its services. This page provides information about configuring the Google Play Store account to make DIGIT mobile apps available for easy download.
To start the configuration for the Google Play Store, the following is required:
| Sr. No. | Email Id | Password |
| --- | --- | --- |
| 1 |  | ******* |

Data given in the table is sample data.
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Email Id | Alphanumeric | NA | Yes | Gmail account id through which the app would be published on the Google Play Store |
| 2 | Password | Alphanumeric | NA | Yes | Password for the Gmail account |
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Ask the state team/client to create an email account on Gmail.
Ask the client to log in to the Google Play Console here and make the required payment so that further tasks can be processed.
Ask the client to share the email id and password in the template.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that the email account is created on Gmail, since the Play Store works with Google accounts only | - |
| 2 | The Email Id and Password are required in order to log in to the Google Play Store for configuration | - |
DIGIT has modules which require the user to pay for the service that he/she is availing, for example property tax, trade license etc. To achieve this functionality, a common payment gateway has been developed, which acts as a liaison between DIGIT apps and external payment gateways (depending on the client's requirements).
This module facilitates payments and lookup of transaction status.
The following details are required from the payment gateway vendor in order to configure the payment gateway:
| Sr. No. | Integration Kit | API Documentation | Redirect Working Key | Merchant Id | Test Credentials of Debit Card/ Net Banking |
| --- | --- | --- | --- | --- | --- |
| 1 | File Name | File Name | XYZ#123 | UDDUK | File Name |

Data given in the table is sample data.
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Integration Kit | Document | NA | Yes | A document sent by the vendor which contains information on how to integrate the service |
| 2 | API Documentation | Document | NA | Yes | A separate document sent by the vendor which ideally helps us retrieve the transaction status |
| 3 | Redirect Working Key | Alphanumeric | 64 | Yes | The working key provided by the vendor for the generation of the redirection URL |
| 4 | Merchant Id | Alphanumeric | 64 | Yes | Merchant id provided by the vendor |
| 5 | Test credentials of Debit Card/ Net Banking | Document | NA | Yes | The details of the debit/credit card or net banking credentials which help us test the gateway. This contains the card number/code/account number etc. |
The payment gateway is a vendor-oriented service that is integrated with different modules in order to facilitate transactions. The steps followed are mentioned below:
The client has to finalize a payment gateway vendor (for example PayU, Paytm, HDFC, AXIS etc.) depending upon the requirements.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
After this, the details/documents mentioned in the template will be provided by the vendor.
These details are to be received separately for both production as well as UAT.
Get the IP address for UAT and Production environments whitelisted from the vendor.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | While finalizing a payment gateway vendor, make sure that the vendor supports transactions into multiple bank accounts based on a key (which would be the tenantid) | - |
| 2 | Get the details for both environments separately, i.e. UAT and Production | - |
The SMS service is a way of communicating necessary information/updates to the users on their various transactions on DIGIT applications.
In order to update the users, certain notification parameters are configured in the system for various steps in the application process. These configurations can be changed/reconfigured based on the ULB's requirements.
We have the below-mentioned parameters which we use for configuration:
| Sr. No. | Parameter | Value |
| --- | --- | --- |
| 1 | sms.provider.url | www.xyz.com |
| 2 | sms.username.parameter | mnsbihar@001 |
| 3 | sms.username.value | *** |
The data given in the above table is sample data. The parameters and their values are specific to the SMS service provider and may vary accordingly.
For SMS service integration, the vendor more or less guides us through the steps to be followed; mentioned below are a few basic steps and the generic data definitions which can be followed.
Below mentioned are the descriptions of the parameters which are needed for configuration:
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Parameter | Alphanumeric | 64 | Yes | The parameter required to be configured |
| 2 | Value | Alphanumeric | 64 | Yes | The corresponding value of the parameter |
Parameter names could differ from vendor to vendor.
Since the SMS service is a vendor-delivered service, the below steps have to be followed:
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
The SMS vendor has to provide the data in the data template attached.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that the vendor supports multiple-language functionality, especially the local language of the state | - |
SSL (Secure Sockets Layer) is an encryption-based network security protocol developed to ensure privacy, authenticity and data integrity in internet communications.
Ideally, the domain name configuration and the SSL certificate are obtained together without fail from the state's IT team.
No data is needed from the state team for this.
Data tables, data definitions, template-filling steps and checklists are not applicable for this entity.
Key configurations at the state level include -
A Point of Sale (POS) machine is a machine that helps in handling transaction processing. It accepts and verifies the payments made by citizens for availing the services of DIGIT.
POS is facilitated by a middleware app developed to verify the payment process between the DIGIT module and the payment.
In this case, no data is required from the state team.
Data tables, data definitions, template-filling steps and checklists are not applicable for this entity.
This is the third step, which comes after boundary data collection. Cross-hierarchy mapping happens when a child has a relationship with more than one parent. This double relationship between the child and the parents can also occur across different hierarchies.
For example, in the Admin-level boundary hierarchy, a mohalla M1 (child) could be a part of 2 wards (parents), W1 and W2. In such a case, the single mohalla (child) has to be mapped to both wards (parents).
Below is the data table for the Boundary:
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
First, identify all the child levels which have a relationship with more than one parent boundary type, and their hierarchy types as well.
Fill in the boundary hierarchy types (names/codes) in place of boundary type 1/2.
Then, along with the codes, start filling in the proper mapping between every child and parent, one by one.
The Sr. No. should be in incremental order for every new child level.
Prepare a new table for every different parent-child relation.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
This checklist covers the activities which are specific to the entity. There is no entity-specific checklist activity applicable here.
It is a ULB bank account which is operative, at least for receiving or depositing the day-to-day revenue collection done by the ULB. It is used by the online payment integrator to disburse, into ULB accounts, the amounts collected through a payment gateway into a pool account managed by the payment gateway.
The data table given below represents the attached Excel template. Data given in the table is sample data.
Download the data template attached to this page.
Open it, go through all the headers, and understand their meaning by referring to the 'Data Definition' section.
Make sure all the headers, their data types, field sizes and their definitions/descriptions are understood properly. In case of any doubt, please reach out to the person who has shared this document with you to discuss and clear out the doubts.
Identify the bank account to be used to transfer the amounts collected online for various services.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every checklist point/ activity mentioned in the checklist.
The checklist is a set of activities to be performed after the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
This checklist covers the activities which are specific to the entity.
A designation is an act of pointing someone out with a name, a title or an assignment; for example, someone being named president of an organization. This document helps in gathering the various designation data generally used in ULBs.
Data given in the table is sample data.
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning given in this document under section 'Data Definition'.
Make sure all the headers, its data type, field size and its definition/ description are understood properly.
In case of any doubt, please reach out to the person who has shared this document with you to discuss the same and clear out the doubts.
Identify all the designations that exist in the ULB; refer to the government gazette to define the designations in ULBs.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed after the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
This checklist covers the activities which are specific to the entity. There is no entity-specific checklist applicable for this entity.
ULB level setup involves the configuration of ULB specific data parameters such as ULB boundaries, ULB bank accounts, and hierarchy details.
Localization is the practice of rendering various UI-visible data in local wording according to the client's requirements. Localization is applied so that it becomes easier for the people using the service to understand the common terminology and make the best use of the available system.
The following texts (but not limited to) on the web page can be localized:
Labels
Messages: Alert messages, success messages, validation messages and other notifications etc.
Help Texts
The module-specific master data would already have been made available in the localized form while collecting the data for the respective module-specific configuration.
Data mentioned in the data table is sample data.
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Present to the client the full sheet of codes along with the English text for which the localized text is required.
Ask the client to fill in the localized text in the last column, i.e. the message (in local language) column.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
This checklist covers all the activities which are common across the entities.
Not Applicable
At times, different modules need to capture the address of the user's place of residence or trade, for which the user has to enter his/her full address, which is a tedious task. To simplify the process, the Google Maps geolocation service can be used, which provides the exact coordinates of a place on the map and helps identify the place.
This service is paid, and the client has to purchase the following items:
Google Maps APIs
"Maps JavaScript API", "Places API" and "Geolocation API" are needed. The first $200 of usage is free; once exceeded, the price per 1,000 requests is as given below.

| API | Description | Price per 1,000 requests |
| --- | --- | --- |
| Maps JavaScript API (web client) | Return the location and accuracy radius of a device, based on Wi-Fi or cell towers | $5 |
| Geolocation API | Return the location and accuracy radius of a device, based on Wi-Fi or cell towers | $5 |
| Places API for Web (web server) | Turn a phone number, address, or name into a place, and provide its name and address | $17 |
Note:
The data provided is sample data
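For illustration, the Geolocation API is called by POSTing a JSON body to https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY, where YOUR_API_KEY is a placeholder for the key obtained after purchase. A minimal request body sketch, assuming IP-based fallback is acceptable and the Wi-Fi access point shown is fictitious:

```json
{
  "considerIp": true,
  "wifiAccessPoints": [
    { "macAddress": "00:25:9c:cf:1c:ac", "signalStrength": -43 }
  ]
}
```

The response contains a location (a latitude/longitude pair) and an accuracy radius in metres.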
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Ask the client to purchase the APIs mentioned in the introduction section above.
Get the details for the API URL and key from the client.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
Not Applicable
To see the common checklist, refer to the page consisting of all the activities which are to be followed to ensure complete and quality data.
| Sr. No. | Boundary Type* | Boundary Code* | Boundary Type* | Boundary Code* |
| --- | --- | --- | --- | --- |
| 1 | Ward | W1 | Mohalla | M1 |
|  | Ward | W2 | Mohalla | M1 |
| 2 | Ward | W3 | Mohalla | M2 |
|  | Ward | W4 | Mohalla | M2 |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Hierarchy Type 1 | Text | 256 | Yes | The type of hierarchy 1 that the boundary belongs to, to be mapped with boundaries in hierarchy 2. Refer Boundary Hierarchies |
| 2 | Hierarchy Type 2 | Text | 256 | Yes | The type of hierarchy 2 that the boundary belongs to, to be mapped with boundaries in hierarchy 1. Refer Boundary Hierarchies |
| 3 | Boundary Type | Text | 64 | Yes | The type of the boundary from hierarchy 1. Refer Boundary Data |
| 4 | Boundary Code | Alphanumeric | 64 | Yes | The code of the boundary from hierarchy 1. Refer Boundary Data |
| 5 | Boundary Type | Text | 64 | Yes | The type of the boundary from hierarchy 2. Refer Boundary Data |
| 6 | Boundary Code | Alphanumeric | 64 | Yes | The code of the boundary from hierarchy 2. Refer Boundary Data |
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
| Sr. No. | Code | ULB Name | Bank Name | Branch Name | Account Number | Account Type | IFSC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | dehradun | Dehradun Municipal Corporation | SBI | Rajpur | XXXX0082XX01 | Saving | SBIX0921 |
| 2 | haridwar | Haridwar Municipal Corporation | PNB | Chauk | XXXX9820XX9 | Saving | PNBX8320 |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Code | Alphanumeric | 64 | Yes | Unique code given to the bank detail record, e.g. dehradun |
| 2 | ULB Name | Text | 256 | Yes | Name of the Urban Local Body |
| 3 | Bank Name | Text | 256 | Yes | Name of the bank where the account exists |
| 4 | Branch Name | Text | 256 | Yes | Name of the bank branch where the account exists |
| 5 | Account Number | Alphanumeric | 64 | Yes | Bank account number to be used to transfer the amount |
| 6 | Account Type | Text | 256 | Yes | Account type, e.g. Saving, Current etc. |
| 7 | IFSC | Alphanumeric | 64 | Yes | IFSC code of the branch as per RBI guidelines |
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | The Code should not consist of any special characters | E.g. dehradun is allowed, but dehradun@1 is not allowed |
| 2 | The account number should not consist of any special characters | As issued by the bank |
| Sr. No. | Designation Code | Designation Name (In English) | Designation Name (In Local Language) |
| --- | --- | --- | --- |
| 1 | ACT | Accountant | अकाउंटेंट |
| 2 | AO | Accounts Officer | लेखा अधिकारी |
| 3 | AC | Additional Commissioner | अपर आयुक्त |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Designation Code | Alphanumeric | 64 | Yes | Unique identifier for the designation, used as a reference for child configuration mapping |
| 2 | Designation Name (In English) | Text | 256 | Yes | Designation name in English |
| 3 | Designation Name (In Local Language) | Text | 256 | Yes | Designation name in the local language, e.g. Hindi, Telugu etc., whichever is applicable |
| Sr. No. | Code | Module | Message (In English) | Message (In Local Language) |
| --- | --- | --- | --- | --- |
| 1 | ACTION_TEXT_APPLICATION | Trade License | Search Trade Licenses | व्यापार लाइसेंस खोजें |
| 2 | ACTION_TEST_TL_REPORTS | Trade License | Trade License Reports | ट्रेड लाइसेंस रिपोर्ट |
| 3 | CORE_COMMON_CITY | Property Tax | City | शहर |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Code | Alphanumeric | 64 | Yes | The code for which the localized text is to be provided |
| 2 | Module | Alphanumeric | 64 | Yes | The module to which the code belongs |
| 3 | Message (In English) | Text | 256 | Yes | The English text displayed on the UI |
| 4 | Message (In Local Language) | Text | 256 | Yes | The text in the local language that the client wants displayed |
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
| Sr. No. | Google API URL | API Key |
| --- | --- | --- |
| 1 |  | 1458-ASD785-987722 |

| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Google API URL | Alphanumeric | 64 | Yes | The URL of the API being purchased |
| 2 | API Key | Alphanumeric | 64 | Yes | The key that Google provides once the purchase of the API has been done |
A ULB is divided into certain categories of boundaries by ULB administrative authorities in order to carry out the ULB's functions better. A ULB/city can be divided into different sets of boundary delimitations based on functions, as given below.
Revenue - Delimitation of the ULB into boundaries to perform target setting and collection of revenue.
Administration - Delimitation of the ULB into boundaries for better administration of the ULB.
Locality/ Location - Delimitation of the ULB into boundaries based on places known to citizens by name and easily identifiable by the common person.
All these authorities have designated certain levels of boundary classification for a certain ULB.
The table below is used to collect data for the types of hierarchy being followed:
| Sr. No. | Code | Boundary Hierarchy Type | Description |
| --- | --- | --- | --- |
| 1 | ADM | Administration | Administration-level boundary, classified on the basis of administrative functions such as scrutiny of certain rules and regulations |
| 2 | REV | Revenue | Revenue-based classification of a ULB, done on the basis of revenue collection |
| 3 | LOC | Locality | Location-based classification, done in order to identify a certain place. For example, the locality of a citizen's house could follow the hierarchy: House no. → Mohalla → Area → Ward → City |
The above-mentioned data for the boundary hierarchy is sample data.
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Code | Alphabet | 64 | Yes | Code used to identify a certain classification of the type of boundary hierarchy |
| 2 | Boundary Hierarchy Type | Alphanumeric | 256 | Yes | A meaningful name to define one group of boundaries defined to perform one function |
| 3 | Description | Alphanumeric | 256 | Yes | A brief description of the boundary hierarchy |
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Identify all the types of boundaries which are being used in the state in order to carry out various administrative/revenue functions.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Fill in the hierarchy types and the codes in the respective columns of the template.
A code should be created for each type of boundary being classified.
A brief description of the boundary hierarchy type would be helpful.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that the hierarchy types are uniform across all the ULBs/cities in the state | - |
| 2 | Only 3 types of boundary hierarchies are allowed | - |
This is the next step after collating all the boundary hierarchies being used in the state. In a hierarchy, there are certain types of boundary classification, and across the levels there is a mapping, which we define as a parent-child mapping, in order to link the levels of the classification.
For example, a hierarchy could be:
Administration Hierarchy: City/ULB → Zone → Ward → Locality
In the above-mentioned hierarchy, a city/ULB is divided into zones, zones into wards, and wards into localities.
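To illustrate the parent-child idea only (this is a schematic sketch, not the actual DIGIT boundary schema), the administration hierarchy above can be pictured as nested JSON, where each child carries the code of its parent level:

```json
{
  "hierarchyType": "ADM",
  "boundary": {
    "code": "CITY",
    "boundaryType": "City",
    "children": [
      {
        "code": "Z1",
        "boundaryType": "Zone",
        "children": [
          {
            "code": "W1",
            "boundaryType": "Ward",
            "children": [
              { "code": "L1", "boundaryType": "Locality" }
            ]
          }
        ]
      }
    ]
  }
}
```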
Data has to be collected for every boundary hierarchy type and boundary type, with a mapping between each boundary code and its parent boundary code. The following table is to be used across all the hierarchy types.
| Sr. No. | Boundary Code | Boundary Name (In English) | Boundary Name (In Local Language) | Parent Boundary Code | Boundary Type | Hierarchy Type Code |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | W1 | Ward no.1 | वार्ड नंबर 1 | Z1 | Ward | ADM |
| 2 | W2 | Ward no.2 | वार्ड नंबर 2 | Z1 | Ward | ADM |
| 3 | W3 | Ward no.3 | वार्ड नंबर 3 | Z2 | Ward | ADM |
| 4 | W4 | Ward no.4 | वार्ड नंबर 4 | Z3 | Ward | ADM |

Data given in the table is sample data.
Following is the definition of the data columns which are being used in the template:
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Boundary Code | Alphanumeric | 64 | Yes | A code for the sub-classification of a particular boundary; should be unique across all boundaries defined |
| 2 | Boundary Name (In English) | Text | 256 | Yes | The name of the boundary being defined, in English |
| 3 | Boundary Name (In Local Language) | Text | 256 | Yes | The name of the boundary being defined, in the local language of the state, e.g. Telugu, Hindi etc. |
| 4 | Parent Boundary Code | Alphanumeric | 64 | Yes | The boundary code of the parent, identifying the parent to which the child belongs |
| 5 | Boundary Type | Text | 256 | Yes | The name of the boundary type, i.e. Ward, Zone etc. |
| 6 | Hierarchy Type Code | Alphanumeric | 64 | Yes | The code of the hierarchy type for which this particular boundary is defined |
Following are the steps which should be used to fill the template:
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
After identifying all the boundary hierarchies, get the sub-classifications of all the hierarchies.
Figure out the codes for all the sub-classifications for a particular city/ULB.
Start filling the template from the top of the hierarchy in a drill-down approach.
A parent-child mapping code has to be created for every boundary level except for the top level.
Follow the steps until you reach the last sub-classification.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure that the data type, size and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |
This checklist covers the activities which are specific to the entity.
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Every boundary type's data should be filled in separately | - |
Master data templates allow users to configure the key parameters and details required for the effective functioning of the modules. This section offers comprehensive information on how to configure the master data templates for each module.
The individual master data templates for specific modules are available in the Product & Modules section of our docs. Click on the links given below to view the specific module setup details.
Property Tax Master Data Templates
Trade License Master Data Templates
Water Charges Master Data Templates
Sewerage Charges Master Data Templates
mCollect Master Data Templates
Fire NOC Master Data Templates
States and ULBs can configure their web portal to deploy the DIGIT portal effectively. State-level and ULB level web portal configuration details are covered in this section.
Human Resource Management System (HRMS) is a key module, a combination of systems and processes that connect human resource management and information technology through HR software. The HRMS module can be used for candidate recruiting, payroll management, leave approval, succession planning, attendance tracking, career progression, performance reviews, and the overall maintenance of employee information within an organization.
HRMS module enables users to -
Create User Roles
Create System Users
Employee Information Report
A user role defines permissions for users to perform a group of tasks. In a default application installation, there are some predefined roles with a predefined set of permissions. Each role is allowed to perform a certain set of tasks; these roles include Super Admin, Trade License Approver, Data Entry Admin, Trade License Document Verifier etc.
Data given in the table is sample data for reference.
Download the data template attached to this page.
Open it, go through all the headers, and understand their meaning by referring to the 'Data Definition' section.
Make sure all the headers, their data types, field sizes and their definitions/descriptions are understood properly. In case of any doubt, please reach out to the person who has shared this document with you to discuss and clear out the doubts.
Identify all different types of user roles on the basis of ULB’s functions.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
This checklist covers the activities which are specific to the entity.
The Billing and Payments module serves the billing requirements of various ULB departments. The module caters to the demands generated by the revenue collection needs of the business services.
The module enables ULBs to -
Generate bills
Search bills
Update bills
ULB Level
None
A system user is a person who uses the application service. A user often has a user account and is identified to the system by a username. A user is a person who accesses a particular application to perform a set of actions.
Each user has a certain set of tasks; a user is allowed to perform a task by being assigned particular roles, such as Super Admin, Trade License Approver, Data Entry Admin, Trade License Document Verifier etc.
Data given in the table is sample data for reference.
Download the data template attached to this page.
Open it, go through all the headers, and understand their meaning by referring to the 'Data Definition' section.
Make sure all the headers, their data types, field sizes and their definitions/descriptions are understood properly. In case of any doubt, please reach out to the person who has shared this document with you to discuss and clear out the doubts.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until all the data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
This checklist covers the activities which are specific to the entity.
| Sr. No. | Code | Name | Description |
| --- | --- | --- | --- |
| 1 | TL_APPROVER | TL Approver | Trade License Approver |
| 2 | GRO | Grievance Routing Officer | Grievance Routing Officer |
| 3 | CSR | Customer Support Representative | An employee who files and follows up complaints on behalf of the citizen |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Code | Alphanumeric | 64 | Yes | A unique code that identifies the user role name |
| 2 | Name | Text | 256 | Yes | The name indicating the user role; while creating an employee, a role can be assigned to the individual employee |
| 3 | Description | Text | 256 | No | A short narration provided for the user role name |
| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | Make sure that each and every point in this reference list has been taken care of | |

| Sr. No. | Activity | Example |
| --- | --- | --- |
| 1 | The Code should be alphanumeric and unique | TL_APPROVER, GRO |
| 2 | The Name should not contain any special characters | TL Approver : [Allowed], #TL Approver! : [Not allowed] |
| Sr. No. | Name | Mobile Number | Father/Husband's Name | Gender | Date of Birth | Email ID | Correspondence Address | ULB | Role | Employment Type | Current Assignment | Status | Hierarchy | Boundary Type | Boundary | Assigned from Date | Department | Designation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Pooja | 9999999999 | Mr. Bala Chandra | FEMALE | 22/01/1987 |  | Nagar Nigam Haldwani-PIN CODE-263139 | Haldwani | Super User | PERMANENT | Yes | EMPLOYED | REVENUE | City | Haldwani | 05/10/2019 | Revenue | Tax Inspector |
| 2 | M.C. Joshi | 9999999999 | Late Jai Dutt Joshi | MALE | 04/08/1965 |  | Nagar Nigam Haldwani | Haridwar | TL Counter Employee | PERMANENT | Yes | EMPLOYED | REVENUE | City | Haldwani | 30/10/2019 | Revenue | Tax Collector |
| Sr. No. | Field | Data Type | Size | Mandatory? | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Name | Text | 256 | Yes | The name of the person to whom access to the system is provided, so he/she can use the application to perform the assigned role functions |
| 2 | Mobile Number | Alphanumeric | 10 | Yes | The mobile number of the person to whom access is provided; relevant so that the person can be contacted in an emergency |
| 3 | Father/Husband's Name | Text | 256 | Yes | The name of the father/husband of the person to whom access is provided. This information is for internal records |
| 4 | Gender | Text | 64 | Yes | The gender of the individual. This information is for internal records |
| 5 | Date of Birth | Date | 10 | Yes | The date of birth of the individual. This information is for internal records |
| 6 | Email ID | Alphanumeric | 256 | No | The email id of the person, linked to receiving all official communication from customers and other counterparts |
| 7 | Correspondence Address | Text | 256 | Yes | The address of the person; saved for internal records |
| 8 | ULB | Text | 256 | Yes | The ULB assigned to the individual employee, so that the assigned role can perform its duty within that ULB |
| 9 | Role | Text | 256 | Yes | A role is a permission for users to perform a group of tasks; a role is assigned to the user to perform a function within the application. A user can be assigned multiple roles. Click User Roles for the role master data |
| 10 | Employment Type | Text | 256 | Yes | The employment type indicates the type of contract the person holds with the organization: whether a permanent employee or a contract employee for a short period. One of "Permanent", "Temporary", "DailyWages" or "Contract" should be selected |
| 11 | Current Assignment | Text | 64 | Yes | The current assignment type indicates whether the employee is currently assigned to a particular department and designation. A user can be assigned multiple assignments to perform his/her functions |
| 12 | Status | Text | 256 | Yes | The status indicates whether the person is employed within the organization or not |
| 13 | Hierarchy | Text | 256 | Yes | The hierarchy type of the boundary to which the person is assigned |
| 14 | Boundary Type | Text | 256 | Yes | The boundary type assigned to the person's role within the organization. A user can be assigned multiple boundary types to perform different functions (example: City, Zone, Block and Locality) |
| 15 | Boundary | Text | 256 | Yes | The particular boundary assigned to the person's role, where the role functions of the application are performed. A user can be assigned multiple boundaries to work in different locations (example: City Name and Tenant Zone) |
| 16 | Assigned from Date | Date | 10 | Yes | The date from which the role is assigned to the person to perform the assigned role functions |
| 17 | Department | Text | 256 | Yes | The particular department to which the person's role is assigned |
| 18 | Designation | Text | 256 | Yes | The particular designation assigned to the person's role |
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |

This checklist covers the activities which are specific to the entity.

| Sr. No. | Checklist Parameter | Example |
|---|---|---|
| 1 | The Name should not have any special characters | Pooja: [Allowed]; #Pooja!: [Not allowed] |
| 2 | The date should be in DD/MM/YYYY format | DD/MM/YYYY: [Allowed]; YYYY/DD/MM: [Not allowed] |
| 3 | The Email ID should be valid: a company/firm name or an individual's name before the "@", and the domain ("XXXXX.com") after it | NA |
Details coming soon!!
Details coming soon!!
A ULB portal is a specially designed website for a ULB that serves as the single point of access for information. It can also be considered a library of personalized and categorized content. A ULB web portal helps in search navigation, personalization, notification and information integration, and often provides features like task management, collaboration, and business intelligence and application integration.
This section describes the template; the table below represents it. The full template to be filled with the portal content is attached at the end of this page in the attachments section.
| Sr. No. | Section Name | Section Content |
|---|---|---|
| 1 | City Introduction | Kesariya Stupa is a Buddhist stupa in Kesariya, located at a distance of 110 kilometres (68 mi) from Patna, in the Champaran (east) district of Bihar, India. Kesaria Stupa has a circumference of almost 1,400 feet (430 m) and raises to a height of about 104 feet (32 m). |
| 2 | Mayor’s Message | It is with immense gratitude to the citizens of Kesaria for reposing their faith in me to serve them as Chairman of Kesaria Nagar Panchayat that I write this message. I shall endeavour to prove that they have made the right choice. |
| ... | ... | ... |
| 22 | Contact Us | All details of the contact person should be added under this section. |
Data given in the table is a sample data.
This section explains the meaning of each and every section in the template and how to fill the template in a few easy steps.
The table below lists the standard sections of any portal. Additional sections, as required, will have to be captured as part of customization.
| Sr. No. | Section Name | Data Type | Data Size | Is Mandatory? | Description / Definition |
|---|---|---|---|---|---|
| 1 | ULB Logo | Document | N/A | Yes | Logo of the ULB, resolution 80 * 80 pixels, to be shown at the top of the website |
| 2 | Slider Images | Document | N/A | Yes | Slider images of resolution 1280 * 450 pixels to be shown on the website |
| 3 | City Introduction | Text | N/A | Yes | An introduction of the city, displayed to the final audience/traffic on the portal |
| 4 | City Map | Document | N/A | Yes | A map of the city, mainly the area administered by the municipality/panchayat, indicating the ULB boundary |
| 5 | Public Utility Services | Template | N/A | Yes | The infrastructure services provided to citizens, e.g. public toilets, government schools, temples managed by Municipal Corporations/Nagar Palika/Panchayat, etc. |
| 6 | Tourist Locations | Template | N/A | Yes | All tourist places in the city, with pictures and other relevant information |
| 7 | Mayor’s Message | Template | N/A | Yes | Message from the ULB chairman |
| 8 | Commissioner’s Message | Template | N/A | Yes | Message from the ULB’s EO/commissioner |
| 9 | ULB News | Template | N/A | Yes | Current news about the ULB |
| 10 | ULB Events | Template | N/A | Yes | Ongoing and upcoming events organized by the ULB |
| 11 | Recruitment Listing | Template | N/A | Yes | Recruitment listings/vacancies within the ULB |
| 12 | Projects Info | Template | N/A | Yes | Description of the government projects that the ULB takes care of, along with all other relevant details |
| 13 | Recent Announcements | Template | N/A | Yes | Announcements of public interest, with title and description |
| 14 | Home screen flash Announcement | Template | N/A | Yes | Announcements of public interest, with title, description, and a highlighted link, flashed on the home screen |
| 15 | Public Notice | Template | N/A | Yes | Notices announced by the ULB for citizens, with description, rules and regulations, and timelines |
| 16 | Government Resolutions | Template | N/A | Yes | Directions, resolutions, and other legal instructions and acts issued by the department |
| 17 | RTI listing | Template | N/A | Yes | All RTI requests received by the ULB |
| 18 | Help Documents for Online Services | Template | N/A | Yes | Documents or links, with titles, that help citizens use the online services |
| 19 | Required documents list for Online Services | Template | N/A | Yes | The list of required documents and data (such as old receipts or old transaction numbers) for each service |
| 20 | Forms for services | Template | N/A | Yes | For services that are not online, offline forms can be uploaded here for users to download |
| 21 | Tender Listing | Template | N/A | No | All tenders issued by the ULB |
| 22 | Contact Us | Template | N/A | Yes | All details of the contact person |
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning given in this document under section 'Data Definition'.
Make sure all the headers, its data type, field size and its definition/ description are understood properly.
In case of any doubt, please reach out to the person who has shared this document with you to discuss the same and clear out the doubts.
Start filling in the data from serial no. 1 and complete one record at a time. Repeat this exercise until the entire data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template, to ensure the data type, size, and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of. |

This checklist covers all the activities which are specific to the entity.

| Sr. No. | Checklist Parameter | Example |
|---|---|---|
| 1 | For all the sections with data type ‘Template’, the data is to be filled into the section-wise template provided as an attachment | NA |
The bill format can be configured at the module level. A few components on the DIGIT sample bill can be configured at the state level and a few at the ULB level. Components that can be changed at the module level can be categorized as mentioned:
Important messages: values can be configured at the module level (state level)
| Sr. No. | Category | Particulars | Business |
|---|---|---|---|
| 1 | Important messages | 5% rebate to be given on advance payment on the bills | Water Charges |
Data given in the table is sample data for reference.
| Sr. No. | Field | Data Type | Data Size | Is Mandatory? | Description |
|---|---|---|---|---|---|
| 1 | Category | Text | 64 | Yes | To list the components on the bill, each particular is grouped into a category |
| 2 | Particulars | Alphanumeric | 256 | Yes | Each category can have multiple entries under it, i.e. particulars |
| 3 | Business | Text | 64 | Yes | The business for which the bill format is to be configured |
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Get information about the bill format followed by the state.
Classify the components on the bill and place each under a category.
Map the particulars under each category to the DIGIT sample bill.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template, to ensure the data type, size, and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |
Entity Specific Checklist is not required separately.
Tax is levied by the government in certain brackets, i.e. there are certain components of a tax which add up to the final transactable amount. For example, a property tax could include Swachhta tax, fire cess, and certain other components which together make up the final amount.
| Sr. No. | Code | Service | Category | Name | Is Debit | Is Actual Demand | Order |
|---|---|---|---|---|---|---|---|
| 1 | PT_UNIT_PENALTY | PT | Penalty | PT Penalty | FALSE | FALSE | 1 |
| 2 | PT_UNIT_EXEMPTION | PT | Exemption | PT Exemption | TRUE | TRUE | 2 |
Data given in the table is sample data for reference.
| Sr. No. | Field | Data Type | Data Size | Is Mandatory? | Description |
|---|---|---|---|---|---|
| 1 | Code | Alphanumeric | 64 | Yes | The code for the tax that is being levied |
| 2 | Service | Text | 256 | Yes | The module or the name of the service for which the tax head is defined |
| 3 | Category | Text | 256 | Yes | The category to which the tax head belongs, such as penalty, exemption, or cess |
| 4 | Name | Text | 256 | Yes | The name/description of the tax head |
| 5 | Is Debit | Text | NA | Yes | TRUE if the tax head amount is to be added to the property tax, else FALSE |
| 6 | Is Actual Demand | Text | NA | Yes | TRUE if the tax head amount is to be subtracted from the property tax, else FALSE |
| 7 | Order | Integer | 5 | Yes | The order in which the tax head should appear on the screen |
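For reference, a tax head row like the one above typically translates into a JSON entry in the billing master data. The sketch below is illustrative only; the exact module and file names (e.g. BillingService, TaxHeadMaster.json) are assumptions and should be confirmed for your deployment.

```js
// TaxHeadMaster entry (illustrative sketch; field names mirror the template columns above)
const taxHeadMaster = {
  tenantId: "pb",
  moduleName: "BillingService",        // assumed module name, verify in your MDMS repo
  TaxHeadMaster: [
    {
      code: "PT_UNIT_PENALTY",         // unique code of the tax head
      service: "PT",                   // module/service the head belongs to
      category: "PENALTY",
      name: "PT Penalty",
      isDebit: false,
      isActualDemand: false,
      order: "1",                      // display order on screen
    },
  ],
};
```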
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Get all the tax heads for a particular module and then proceed to the next module.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
The checklist is a set of activities to be performed once the data is filled into a template, to ensure the data type, size, and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |
Not Applicable
Key Performance Indicators (KPIs) are a way of presenting insights from the available data, helping key management authorities take important business decisions, enhance business processes, and improve ways of functioning. This exercise is largely dependent on the data.
The insights can be shown in various forms such as a line graph, a bar graph, or a tabular format.
| Sr. No | Module* | KPI Chart Type* | Description* |
|---|---|---|---|
| 1 | PGR | Line Chart | Showing the status of closed complaints over a year, month-wise |
| | | Pie Chart | Showing the various types of complaints |
| | | Metric | Showing the rate of different complaint statuses by percentage in a tabular format |
| 2 | Property Tax | Horizontal Bar Graph | Showing various information about property application status month-wise over a year |
| Sr. No. | Field | Data Type | Data Size | Is Mandatory? | Description |
|---|---|---|---|---|---|
| 1 | Module Name | Text | 256 | Yes | The name of the module for which the KPI chart types have to be defined |
| 2 | KPI Chart Type | Text | 256 | Yes | The type of chart which has to display the information |
| 3 | Description | Text | 256 | Yes | A brief description of the information that the chart has to display |

Steps to fill Data
Download the data template attached to this page.
Get a good understanding of all the headers in the template sheet, their data type, size, and definitions by referring to the ‘Data Definition’ section of this document.
In case of any doubt, please reach out to the person who has shared this template with you to discuss and clear your doubts.
Present the client with information about various available chart types.
Show the client how the various KPI’s will look on the web page by showing the reference page from the attachments.
Then gather the required chart types and the information each chart has to display, and record it in the description column.
Verify the data once again by going through the checklist and making sure that each and every point mentioned in the checklist is covered.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |

This checklist covers the activities which are specific to the entity:

| Sr. No. | Checklist Parameter | Example |
|---|---|---|
| 1 | Make sure that the chart types are chosen from the list of available chart types in the attachment section | - |
State Portal is a website for the state. Any content or information which is displayed on this site needs to be provided by the State.
This document defines a template to collect the portal content and information, and helps in filling the content into the template.
This section describes the template; the table below represents it. The full template to be filled with the portal content is attached at the end of this page in the attachments section.
| Sr. No | Section Name | Section Content |
|---|---|---|
| 1 | Government Logo | |
| 2 | Chief Minister Message | |
| ... | ... | ... |
| 20 | About Website | |
This section explains the meaning of each and every section in the template and how to fill the template in a few easy steps.
The table below lists the standard sections of any portal. Additional sections, as required, will have to be captured as part of customization.
| Sr. No. | Section Name | Data Type | Data Size | Is Mandatory? | Description / Definition |
|---|---|---|---|---|---|
| 1 | Government Logo | Document | N/A | Yes | Logo of the state, resolution 80 * 80 pixels, to be updated on the website |
| 2 | Governor’s Message | Template | N/A | Yes | Message from the governor of the state to the citizens |
| 3 | Chief Minister Message | Template | N/A | Yes | Message from the chief minister |
| 4 | State News | Template | N/A | Yes | Current news about the state |
| 5 | State Events | Template | N/A | Yes | Ongoing and upcoming events in the state |
| 6 | Recruitment Listing | Template | N/A | Yes | Recruitment listings/vacancies within the state |
| 7 | Tender Listing | Template | N/A | Yes | All tenders issued by the state government |
| 8 | Project Info | Template | N/A | Yes | Information on upcoming or ongoing projects within the state |
| 9 | Recent Announcement | Template | N/A | Yes | Announcements by the state government, with title and description, which are in the public interest |
| 10 | Home Screen Flash Announcement | Template | N/A | Yes | Announcements by the state government, with title, description, and a highlighted link, flashed on the home screen |
| 11 | Public Notice | Template | N/A | Yes | Notices announced by the state government for citizens, with description, rules and regulations, and timelines |
| 12 | Government Resolution | Template | N/A | Yes | Directions, resolutions, and other legal instructions and acts issued by the department |
| 13 | RTI Listing | Template | N/A | Yes | All RTI requests received by the state government |
| 14 | Help Document for Online services | Template | N/A | Yes | Documents or links, with titles, that help citizens use the online services |
| 15 | Required documents list for Online Services | Template | N/A | Yes | The list of required documents and data (such as old receipts or old transaction numbers) for each service |
| 16 | Forms for services | Template | N/A | Yes | For services that are not online, offline forms can be uploaded here for users to download |
| 17 | Contact Us | Template | N/A | Yes | All details of the contact person |
| 18 | List of ULBs (links to the ULB sites) | Template | N/A | Yes | Website links of all ULBs within the state |
| 19 | About Website | Template | N/A | Yes | Overall details about everything on the state website |
| 20 | Tourist Places | Template | N/A | Yes | All tourist places in the state, with details and images |
| 21 | Slider Images | Document | N/A | Yes | Slider images of resolution 1280 * 450 pixels to be shown on the website |
| 22 | State Map | Document | N/A | Yes | A map of the state |
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning given in this document under section 'Data Definition'.
Make sure all the headers, its data type, field size and its definition/ description are understood properly.
In case of any doubt, please reach out to the person who has shared this document with you to discuss the same and clear out the doubts.
Start filling in the data from serial no. 1 and complete one record at a time. Repeat this exercise until the entire data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template, to ensure the data type, size, and format of the data are as per expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |

This checklist covers the activities which are specific to the entity.

| Sr. No. | Checklist Parameter | Example |
|---|---|---|
| 1 | For all the sections with data type ‘Template’, the data is to be filled into the section-wise template provided as an attachment | NA |
The document list represents the documents needed to avail of services across the modules. These are mostly standard documents issued by various government departments.
| Sr. No. | Code | Name | Used As | Description |
|---|---|---|---|---|
| 1 | RC | Ration Card | ID Proof, Address Proof | This document is used as proof of identity if presented with photo and address proof. |
| 2 | AC | Aadhar Card | ID Proof, Address Proof | This document is used as proof of identity as well as address. |
| 3 | DL | Driving License | ID Proof, Address Proof | This document is used as proof of identity as well as address. |
| 4 | VC | Voter ID Card | ID Proof, Address Proof | This document is used as proof of identity as well as address. |
| 5 | PS | Passport | ID Proof, Address Proof | This document is used as proof of identity as well as address. |
| 6 | AL | Arms License | ID Proof, Address Proof | This document is used as proof of identity as well as address. |
| 7 | CC | Caste Certificate | ID Proof, Address Proof | This document is used as proof of identity if presented with photo and address proof. |
| 8 | DC | Domicile Certificate | ID Proof, Address Proof | This document is used as proof of identity if presented with photo and address proof. |
| 9 | PC | PAN Card | ID Proof | This document is used as proof of identity. |
| 10 | EB | Electricity Bill | Address Proof | This document is used as proof of address only. |
| 11 | TB | Telephone Bill | Address Proof | This document is used as proof of address only. |
| 12 | WB | Water Bill | Address Proof | This document is used as proof of address only. |
| 13 | RSA | Registered Sale Agreement | Address Proof | This document is used as proof of address only. |
| 14 | RLA | Registered Lease Agreement | Address Proof | This document is used as proof of address only. |
| 15 | VRC | Vehicle Registration Certificate | Address Proof | This document is used as proof of address only. |
| 16 | IAO | Income Tax Assessment Order | Address Proof | This document is used as proof of address only. |
| 17 | HT | House Tax Slip | Others | These are documents which are specifically needed to avail a service. |
| 18 | FL | Food License | Others | These are documents which are specifically needed to avail a service. |
| 19 | LL | Liquor Licence | Others | These are documents which are specifically needed to avail a service. |
| 20 | GST | GST Registration | Others | These are documents which are specifically needed to avail a service. |
A workflow action is an activity performed by a workflow user on a service request/application during the workflow. All workflow actions are predefined, and each performs a well-defined job when executed.
By nature, actions are not configurable; only the localization of actions is permissible as configuration.
| Sr. No. | Action | Description | Modules |
|---|---|---|---|
| 1 | Initiate | Starts the application for the citizen or CEMP | Trade Licenses, Property Tax, Building Plan Approval |
| 2 | Edit | Opens the application in editable form so that changes can be made | Trade Licenses, Property Tax, Building Plan Approval |
| 3 | Submit | Freezes the application from the citizen or CEMP and moves it further into the workflow | Trade Licenses, Property Tax, Building Plan Approval |
| 4 | Verify and Forward | Moves the application to the next stage of the workflow process and assigns tasks to the next user in the workflow (if needed) | Trade Licenses, Property Tax, Building Plan Approval |
| 5 | Pay | Used to pay the application fees | Trade Licenses, Property Tax, Building Plan Approval |
| 6 | Approve | The last stage of the application workflow, granting permission for a specific application | Trade Licenses, Property Tax, Building Plan Approval |
| 7 | Activate connection | Creates a consumer no. against the application so that demand generation can start | Water and Sewerage Charges |
| 8 | Reject | Rejects the application. A rejected application cannot be processed further, and the citizen cannot re-apply against it; a new application has to be started next time | Trade Licenses, Property Tax, Building Plan Approval |
| 9 | Send Back | An actor can assign the application back to the previous state if any edits/changes are required | Trade Licenses, Property Tax, Building Plan Approval |
| 10 | Send Back to Citizen | An actor can assign the application back to the citizen if any edits/changes are required | Trade Licenses, Property Tax, Building Plan Approval |
| 11 | View | Anyone in the workflow can view the application and task details | Trade Licenses, Property Tax, Building Plan Approval |
| 12 | Comment | Comments can be recorded before any action is taken which changes the state of the application | Trade Licenses, Property Tax, Building Plan Approval |
| 13 | Download/ Print | Download/print of any artefacts can be configured as per the requirement for application processing | All Modules |
| 14 | Forward | Does not create any bill; forwards it to the next level for review and approval | Finance |
| 15 | Create and Approve | The user who initiates the action can create and approve the bill (a threshold amount should be set up for this) | Finance |
| 16 | Save | The approver can save the bill before it is approved or rejected | Finance |
| 17 | Verify and Approve | Allows the approver to approve the bill if all the information is updated correctly | Finance |
| 18 | Reject | Allows the approver to reject the bill if the information is not correct and may need further clarification | Finance |
| 19 | Send back to Assistant | Sends a notification back when the bill is rejected by the approver | Finance |
| 20 | Cancel | Allows the approver to cancel the bill if it needs to be rejected | Finance |
Actions are standard and not configurable; hence the template, data definition, and standard procedure to fill the template are not needed. This page is created to provide information that helps in defining the workflow process.
Not applicable
Not applicable
Not applicable
Not applicable
The workflow process is a set of steps through which information flows in sequence. The workflow roles, which derive the actors, are assigned to each step to complete the work defined for that level. The state of each level is derived from the information received from the previous step.
| Sr. No. | Current State | Action | Next State | Role Name | SLA |
|---|---|---|---|---|---|
| 1 | | Create and Approve | Approved | Assistant | NA |
| 2 | | Forward | Pending for Approval | Assistant | NA |
| 3 | Pending for Approval | Verify and Approve | Approved | Supervisor | NA |
| 4 | Pending for Approval | Save | Pending for Approval | Supervisor | NA |
| 5 | Pending for Approval | Reject | Rejected | Supervisor | NA |
| 6 | Pending for Approval | Send Back to Assistant | Rejected for Review | Supervisor | NA |
| 7 | Rejected for Review | Forward | Pending for Approval | Assistant | NA |
| 8 | Rejected for Review | Cancel | Rejected | Assistant | NA |
Data given in the above table is sample data.
| Sr. No. | Field | Data Type | Data Size | Is Mandatory? | Description |
|---|---|---|---|---|---|
| 1 | Current State | Text | 256 | Yes | The stage at which the workflow process currently stands |
| 2 | Action | Reference | 64 | Yes | The activity that can be performed at the respective stage in the workflow. This refers to Workflow Actions |
| 3 | Next State | Text | 256 | Yes | The state the workflow moves to when the action is performed at the respective stage (example: assigning the application for approval from one person to the next) |
| 4 | Role Name | Reference | 64 | Yes | The hierarchy of people, with designations, who are authorized to initiate, approve, or reject the process. It refers to Workflow Levels |
| 5 | SLA | Integer | 2 | No | The time frame within which the action is to be completed |
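In egov-workflow-v2, a state table like the one above is expressed as a BusinessService configuration. The fragment below is a minimal, illustrative sketch based on the sample rows; the businessService name, tenantId, and exact field set are assumptions to be verified against the workflow v2 contract.

```js
// Minimal BusinessService fragment (illustrative; verify field names against the workflow v2 contract)
const businessService = {
  tenantId: "pb",
  businessService: "BillAccounting",   // assumed name for this sample flow
  states: [
    {
      state: null,                     // the start state has no name
      isStartState: true,
      actions: [
        { action: "FORWARD", nextState: "PENDINGFORAPPROVAL", roles: ["ASSISTANT"] },
      ],
    },
    {
      state: "PENDINGFORAPPROVAL",
      applicationStatus: "Pending for Approval",
      sla: null,                       // per-state SLA, if any
      actions: [
        { action: "APPROVE", nextState: "APPROVED", roles: ["SUPERVISOR"] },
        { action: "REJECT", nextState: "REJECTED", roles: ["SUPERVISOR"] },
      ],
    },
    { state: "APPROVED", isTerminateState: true, actions: [] },
    { state: "REJECTED", isTerminateState: true, actions: [] },
  ],
};
```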
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning of them by referring 'Data Definition' section.
Make sure all the headers, its data type, field size and its definition/ description is understood properly. In case of any doubt, please reach out to the person who has shared this document with you to discuss the same and clear out the doubts.
Start filling in the data from serial no. 1 and complete one record at a time. Repeat this exercise until the entire data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |
Please discuss with a relevant department head before finalizing the workflow.
Q. What if mandatory field value is not available and not filled?
A mandatory field value must be provided; without it, the template data cannot be accepted.
Q. What if non-mandatory field value is not available and not filled?
It is fine not to provide the non-mandatory field values. These are nice-to-have fields.
Q. What if the codes are not readily available for the records?
A code must be provided. In case a code is not readily available, a simple number sequence can be used as codes.
Q. What if the definition of column header is not clear?
Contact the person who has shared the template with you.
Q. Can the order of the columns be changed while filling the data?
Order of columns must remain intact and should not be altered.
Q. What if the entities which are supposed to be defined at the state level but can not be defined?
In cases where an entity suggested to be defined at the state level does not work, it can be defined at the ULB level. However, it cannot be moved back to the state level once configured.
Q. What are the benefits of defining state level?
The benefits of defining the entity at the state level are given below.
Decision Support System - State level definition and consolidation of data makes data analysis and decision making easy.
Maintenance of such data is easy and correction can be performed quickly.
Avoids data duplication in the configuration for values that are common across the ULBs.
Supports standardization of process rules across the ULBs.
A workflow process is a series of sequential tasks that are carried out based on user-defined rules or conditions, to execute a business process. It is a collection of data, rules, and tasks that need to be completed to achieve a certain business outcome.
In DIGIT, workflow for a business process is divided into three units out of which two are completely configurable while the remaining is fixed and lays the foundation of the other two.
This is the first unit, which defines the actions and their nature; these are executed during the workflow process by the workflow actors. It lays the foundation and is configurable as per ground needs.
This is the second unit, which defines the number of steps a workflow process may have, and then triggers the creation of a role for each and every step with the appropriate rights to perform a set of actions at that step. It is completely configurable.
This is the third unit which defines the workflow process including the steps, roles with actions and the present, next and previous state of a step/level of the workflow process. It is completely configurable.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The Decision Support System in DIGIT platform can be configured to provide customized insights and statistics on the dashboard. This section offers information on how to configure the DSS parameters for maximized efficiency.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The common configuration details required for all modules are available in this section. Refer to the list of standard documents required for processing applications across modules, learn which documents are mapped to which services, find the standard checklist for filling master data templates, and get the answers to common configuration FAQs.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The checklist is the set of activities to be performed on completion of a task to ensure its completeness and quality.
A data type is an attribute of data that defines a particular kind of data item, by the values it can take, the computer system used, or the operations that can be performed on it. To help fill in the right kind of data for a field/column in Excel, the table below lists the different data types with their descriptions.
Workflow levels are defined for a service with the rights/role to perform a set of Workflow Actions. There could be one or more levels involved in a workflow process. This page helps to understand and then define all the levels, with their job descriptions, in a standard template.
Data given in the above table is sample data.
Download the data template attached to this page.
Have it open and go through all the headers and understand the meaning of them by referring 'Data Definition' section.
Make sure all the headers, its data type, field size and its definition/ description is understood properly. In case of any doubt, please reach out to the person who has shared this document with you to discuss the same and clear out the doubts.
Identify all different types of services on the basis of ULB’s functions to create a workflow.
Start filling in the data from serial no. 1 and complete one record at a time. Repeat this exercise until the entire data is filled into the template.
Verify the data once again by going through the checklist and taking care of each and every point mentioned in the checklist.
The checklist is a set of activities to be performed once the data is filled into a template to ensure data type, size, and format of data is as per the expectation. These activities have been divided into 2 groups as given below.
This checklist covers all the activities which are common across the entities.
Please discuss with a relevant department head before finalizing the workflow.
Core Services is one of the key DIGIT components. Browse through this section to learn more about the key configuration and integration details of these core services.
This section contains the configuration documents related to the DIGIT service stack.
Click on the respective service link below to find its configuration details and additional information resources.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
| Sr. No. | Data Type | Definition/ Description |
|---|---|---|
| 1 | Alphanumeric | Contains alphabets and numbers; generally used to define codes |
| 2 | Decimal | Floating point number with a fraction value up to 2 decimal places |
| 3 | Integer | Whole number without a fraction part |
| 4 | Text | A string of alphabets, numbers, spaces, and symbols |
| 5 | Date | Represents a date, captured in the format ‘DD/MM/YYYY’ |
| 6 | Reference | A code of a record from the referred entity, having a related record in the prevailing entity |
| 7 | Document | A document which is needed as an attachment along with other relevant details in the template |
| Sr. No. | Checklist Parameter | Example |
|---|---|---|
| 1 | The entity is decided to be defined at the state level and all the ULBs agree on the same. | NA |
| 2 | Data filled into templates should cater to the needs of each and every ULB. | NA |
| 3 | The order of headers should remain unchanged in the template while filling in the data. | NA |
| 4 | Values filled into the template do not exceed the given data size limit. | NA |
| 5 | Codes filled in the template are unique for all the records; no 2 records in the template share the same code. | Records in code-value pairs like the below are not acceptable: RES - Residential; RES - Non-residential |
| 6 | All the columns marked with an asterisk must be filled with values; not a single record is left without a value. | NA |
| 7 | A reference value in the template must also exist in the referred entity template. A value not present in the referred entity template is invalid. | NA |
| 8 | None of the values filled in the template contains a disallowed character. | NA |
| 9 | Mobile numbers filled into the template must be valid 10-digit mobile numbers without the country code. | NA |
| 10 | Email IDs filled into the template should be valid email IDs. | NA |
| 11 | Local language values should be in the Unicode charset only. | NA |
| 12 | Values of data type alphanumeric consist of alphabets and numeric values only. All entity codes should follow this. | Allowed: ABC01; Not allowed: ABC#01 |
| 13 | Values of data type decimal must be numbers with a fraction part up to 2 decimal places. | Allowed: 23.87; Not allowed: 12.0982 |
| 14 | Values of data type integer must be whole numbers, not fractions. | Allowed: 15; Not allowed: 12.01 |
| 15 | Values of data type text must be strings of alphabets, numbers, special characters, and spaces. | NA |
| 16 | Values of data type date must be dates in the format ‘DD/MM/YYYY’, where DD is the day, MM the month, and YYYY the year. | Allowed: 31/12/2019; Not allowed: 12/31/2019, 31/12/19, etc. |
| 17 | Values of data type reference must refer to a value in another entity. Only the code of the referred record from the referred entity is provided as the value here. | NA |
| 18 | Values of data type document must be documents provided separately as attachments while submitting the filled data template. | NA |
| Sr. No. | Module | Service | Workflow Level | Task | Job Description |
|---|---|---|---|---|---|
| 1 | Finance | Bill Accounting | Level 1 | Create Bill | Accounts Clerk |
| 2 | Finance | Bill Accounting | Level 2 | Create and Approve | Accounts Clerk |
| 3 | Finance | Bill Accounting | Level 3 | Forward for Approval | Chief Accountant |
| 4 | Finance | Bill Accounting | Level 4 | Verify the Bill | Chief Accountant |
| 5 | Finance | Bill Accounting | Level 5 | Approval | Approver |
| Sr. No. | Field | Data Type | Data Size | Is Mandatory? | Description |
|---|---|---|---|---|---|
| 1 | Module | Text | 64 | Yes | The module for which the user is mapped to perform the action |
| 2 | Service | Text | 64 | Yes | The type of process the user performs in a particular module |
| 3 | Workflow Level | Integer | 2 | Yes | The level at which the flow of the process stands while the process is being executed |
| 4 | Task | Text | 64 | Yes | The state in which the action is in progress during the workflow |
| 5 | Job Description | Text | 256 | Yes | A short description provided for the role (example: designation of the role) |
| Sr. No. | Checklist Parameter |
|---|---|
| 1 | Make sure that each and every point in this reference list has been taken care of |
Workflows are a series of steps that move a process from one state to another through actions performed by different kinds of actors (humans, machines, time-based events, etc.) to achieve a goal, such as onboarding an employee, approving an application, or granting a resource. The egov-workflow-v2 is a workflow engine which helps in performing these operations seamlessly using a predefined configuration.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has workflow persister config path added in it
PSQL server is running and database is created to store workflow configuration and data
Anyone with a role in the workflow state machine can always view the workflow instances and comment on them.
On creation of a workflow instance, it appears in the inbox of all employees whose roles can perform any state-transitioning action in that state.
Once an instance is marked to an individual employee, it appears only in that employee's inbox, although point 1 still holds: all others participating in the workflow can still search it and act if they have the necessary action available to them.
If the instance is marked to a person who cannot perform any state-transitioning action, that person can still comment/upload and mark it to anyone else.
Overall SLA: SLA for the complete processing of the application/Entity
State-level SLA: SLA for a particular state in the workflow
| Environment Variables | Description |
|---|---|
| egov.wf.default.offset | The default value of offset in search |
| egov.wf.default.limit | The default value of limit in search |
| egov.wf.max.limit | The maximum number of records returned in a search response |
| egov.wf.inbox.assignedonly | Boolean flag; if set to true, the default search returns only records assigned to the user, if false it returns all records based on the user’s role. (The default search is the search call with no query params; records are returned based on the RequestInfo of the call. It is used to show applications in the employee inbox.) |
| egov.wf.statelevel | Boolean flag, set to true if a state-level workflow is required |
Deploy the latest version of egov-workflow-v2 service
Add businessService persister yaml path in persister configuration
Add Role-Action mapping for BusinessService API’s
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
Create businessService (workflow configuration) according to product requirements
Add Role-Action mapping for /processInstance/_search API
Add workflow persister yaml path in persister configuration
For Configuration details please refer to the links in Reference Docs
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations to make them more efficient and to increase accountability.
Role-based workflow
An easy way of writing rules
File movement within workflow roles
To integrate, host of egov-workflow-v2 should be overwritten in helm chart
/process/_search should be added as the search endpoint for searching workflow process Instance object.
/process/_transition should be added to perform an action on an application. (It’s for internal use in modules and should not be added in Role-Action mapping)
The workflow configuration can be fetched by calling _search API to check if data can be updated or not in the current state
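As an illustration, a module can look up its workflow configuration before allowing an update. The sketch below is hedged: the host is a placeholder, and the full context path and query params should be checked against the businessservice search contract listed in the API section below.

```js
// Fetch a workflow configuration to decide whether an update is allowed in the current state.
// Host, tenant, and businessService values are placeholders.
const response = await fetch(
  "https://<egov-workflow-v2-host>/egov-workflow-v2/egov-wf/businessservice/_search" +
    "?tenantId=pb&businessServices=BillAccounting",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ RequestInfo: { authToken: "<token>" } }),
  }
);
const { BusinessServices } = await response.json();
```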
Title
Link
Configuring Workflows For New Product/Entity
Setting Up Workflows
API Swagger Documentation
Migration to Workflow 2.0
Title
Link
/businessservice/_create
/businessservice/_update
/businessservice/_search
/process/_transition
/process/_search
(Note: All the API’s are in the same postman collection, therefore, the same link is added in each row)
DIGIT is an API-based platform where each API denotes a DIGIT resource. The primary job of Access Control Service (ACS) is to authorize the end-user based on their roles and provide access to the DIGIT platform resources. Access control functionality works based on the points below:
Actions: Actions are events performed by a user. These can be API endpoints or frontend events. This is an MDMS master.
Roles: Roles are assigned to users; a user can hold multiple roles. Roles are defined in MDMS masters.
Role-Action: Role-actions are mappings between actions and roles. Based on the role-action mapping, the access control service identifies the actions applicable to a role.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
MDMS service is up and running
Serves the applicable actions for a user based on the user's roles (e.g. to render the menu tree).
On each action performed by a user, the access control service looks at the user's roles and validates the action's mapping with those roles.
Supports tenant-level role-action mappings. For instance, an employee from Amritsar can have the role of APPROVER for another ULB like Jalandhar, and will hence be authorized to act as APPROVER in Jalandhar.
Deploy the latest version of Access Control Service
Deploy MDMS service to fetch the Role Action Mappings
Define the roles
Add the Actions (URL)
Add the role action mapping
(The details about the fields in the configuration can be found in the swagger contract)
Any microservice which requires authorisation can leverage the functionalities provided by access control service.
Any new microservice that is to be added to the platform won’t have to worry about authorisation. It can just add its role-action mapping in the master data, and the Access Control Service will perform authorisation whenever the API for the microservice is called.
To integrate with Access Control Service the role action mapping has to be configured(added) in the MDMS service.
The service needs to call /actions/_authorize API of Access Control Service to check for authorisation of any request
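A hedged sketch of what a role-action mapping entry in MDMS might look like follows; the file and field names here are assumptions based on common DIGIT conventions and should be verified against your MDMS repository's access-control masters.

```js
// roleactions.json entry (illustrative; verify the exact schema in your access-control masters)
const roleActionMapping = {
  rolecode: "TL_APPROVER",   // role defined in the roles master
  actionid: 101,             // id of the action (API endpoint) from the actions master
  actioncode: "",
  tenantId: "pb",
};
```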
Title
Link
API Contract
Title
Link
The user service is responsible for user data management and provides the functionality to log in and log out of the DIGIT system.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
Encryption and MDMS services are running
PSQL server is running and the database is created
Redis is running
Store, update and search user data
Provide authentication
Provide login, logout functionality into DIGIT platform
Store user data PIIs in encrypted form
Set up the latest version of egov-enc-service and egov-mdms-service
Deploy the latest version of egov-user service
Add Role-Action mapping for API’s
The following application properties in the user service are configurable.

| Property | Value | Remarks |
|---|---|---|
| egov.user.search.default.size | 10 | Default search record limit |
| citizen.login.password.otp.enabled | true | Whether citizen login is OTP-based |
| employee.login.password.otp.enabled | false | Whether employee login is OTP-based |
| citizen.login.password.otp.fixed.value | 123456 | Fixed OTP for citizens |
| citizen.login.password.otp.fixed.enabled | false | Allow fixed OTP for citizens |
| otp.validation.register.mandatory | true | Whether OTP is compulsory for registration |
| access.token.validity.in.minutes | 10080 | Validity time of the access token |
| refresh.token.validity.in.minutes | 20160 | Validity time of the refresh token |
| default.password.expiry.in.days | 90 | Expiry period of a password |
| account.unlock.cool.down.period.minutes | 60 | Account unlock cool-down time |
| max.invalid.login.attempts.period.minutes | 30 | Window size for counting failed attempts before lock |
| max.invalid.login.attempts | 5 | Max failed login attempts before the account is locked |
| egov.state.level.tenant.id | pb | State-level tenant id |
User data management and the functionality to log in and log out of the DIGIT system using OTP and password.
The following functionality is provided to citizen and employee type users:
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping(Single ULB to multiple roles)
Enable employees to log into the DIGIT system using a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, host of egov-user should be overwritten in the helm chart.
Use /citizen/_create and /users/_createnovalidate endpoints for creating users into the system
Use /v1/_search and /_search endpoints to search users in the system depending on various search parameters
Use /profile/_update for partial update and /users/_updatenovalidate for update
Use /password/nologin/_update for otp based password reset and /password/_update for logged in user password reset
Use /user/oauth/token for generating tokens, /_logout for logout, and /_details for getting user information from the token
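An illustrative sketch of a citizen creation call follows; the request body is trimmed to a few common fields, the host is a placeholder, and the exact contract should be checked against the egov-user API definition linked below.

```js
// Create a citizen user (illustrative; check the egov-user contract for the full set of fields)
const res = await fetch("https://<egov-user-host>/citizen/_create", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    RequestInfo: { apiId: "Rainmaker", msgId: "create-user" },
    User: {
      userName: "9999999999",   // typically the mobile number for citizens
      name: "Pooja",
      mobileNumber: "9999999999",
      type: "CITIZEN",
      tenantId: "pb",           // state-level tenant
    },
  }),
});
const { user } = await res.json();
```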
Link
/citizen/_create
/users/_createnovalidate
/_search
/v1/_search
/_details
/users/_updatenovalidate
/profile/_update
/password/_update
/password/nologin/_update
/_logout
/user/oauth/token
The objective of the PDF generation service is to bulk-generate PDFs as per requirements.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Install npm.
Kafka server is up and running.
egov-persister service is running and has pdf generation persister config path added in it.
PSQL server is running and the database is created to store filestore id and job id of generated pdf.
Provide a common framework to generate PDF.
Provide flexibility to customise the PDF as per the requirement.
Provide functionality to add an image, Qr Code in PDF.
Provide functionality to generate pdf in bulk.
Provide functionality to specify a maximum number of records to be written in one PDF.
| Environment Variables | Description |
|---|---|
| MAX_NUMBER_PAGES | Maximum number of records to be written in one PDF |
| DATE_TIMEZONE | Timezone used to convert epoch timestamps into dates (DD/MM/YYYY) |
| DEFAULT_LOCALISATION_LOCALE | Default value of the localisation locale |
| DEFAULT_LOCALISATION_TENANT | Default value of the localisation tenant |
| DATA_CONFIG_URLS | File paths/URLs of the data configs |
| FORMAT_CONFIG_URLS | File paths/URLs of the format configs |
Mustache.js (https://github.com/janl/mustache.js/) is used as the templating engine to populate the format defined in the format config from the request JSON, based on the mappings defined in the data config.
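To illustrate the idea, here is a toy Mustache example (not an actual DIGIT config): the format config holds templates with placeholders, and the data config determines which request JSON fields populate them.

```js
// Toy Mustache example: values from a JSON view are substituted into template placeholders.
const Mustache = require("mustache");

const template = "Receipt No: {{receiptNo}} | Paid by: {{payerName}}";
const view = { receiptNo: "PT-2019-001", payerName: "Pooja" }; // values extracted from the request JSON

console.log(Mustache.render(template, view));
// => "Receipt No: PT-2019-001 | Paid by: Pooja"
```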
Create data config and format config for a PDF according to product requirement.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment yml file
Deploy the latest version of pdf-service in a particular environment.
For Configuration details please refer to Customizing PDF Receipts & Certificates.
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be print/downloaded by the user.
Functionality to generate PDFs in bulk.
Avoid regeneration.
Support QR codes and Images.
Functionality to specify the maximum number of records to be written in one PDF.
Uploading generated PDF to filestore and return filestore id for easy access.
To download and print the required PDF, the _create API has to be called with the required key (for integration with the UI, please refer to the links in Reference Docs).
Title
Link
Customizing PDF Receipts & Certificates
Steps for Integration of PDF in UI for download and print PDF
API Swagger Documentation
Link
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
(Note: All the API’s are in the same postman collection, therefore, the same link is added in each row)
A core application which provides location details of the tenant for which the services are being provided.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
PSQL server is running and database is created
Knowledge of egov-mdms service
egov-mdms service is running and all the required mdms master are loaded in it
The location information is also known as boundary data of ULB
Boundary data can be of different hierarchies: ADMIN or ELECTION hierarchies defined by the administrators, and the REVENUE hierarchy defined by the Revenue department.
The election hierarchy has the locations divided into several types like zone, election ward, block, street, and locality. The revenue hierarchy has the locations divided into zone, ward, block, and locality.
The model which defines localities like zone and ward is the boundary object, which contains information like name, lat, long, and parent or children boundaries if any. Boundaries nest within each other in a hierarchy: a zone contains wards, a ward contains blocks, and a block contains localities. The order in which the boundaries nest can differ across tenants.
| Environment Variables | Description |
|---|---|
| egov.services.egov_mdms.hostname | Host name of the MDMS service |
| egov.services.egov_mdms.searchpath | MDMS search URL |
| egov.service.egov.mdms.moduleName | MDMS module which contains the boundary master |
| egov.service.egov.mdms.masterName | MDMS master file which contains the boundary details |
Add/update the MDMS master file which contains the boundary data of the ULBs.
Add Role-Action mapping for egov-location API’s.
Deploy/Redeploy the latest version of egov-mdms service.
Fill the above environment variables in egov-location with proper values.
Deploy the latest version of egov-location service.
The boundary data has been moved to mdms from the master tables in DB. The location service fetches the JSON from mdms and parses it to the structure of boundary object as mentioned above. A sample master would look like below.
| Attribute Name | Description |
|---|---|
| tenantId | The tenantId (ULB code) for which the boundary data configuration is defined |
| moduleName | The name of the module where the TenantBoundary master is present |
| TenantBoundary.hierarchyType.code | Unique code of the hierarchy type |
| TenantBoundary.hierarchyType.name | Unique name of the hierarchy type |
| TenantBoundary.boundary.id | Id of the boundary defined for a particular hierarchy |
| boundaryNum | Sequence number of the boundary attribute defined for the particular hierarchy |
| name | Name of the boundary, like Block 1, Zone 1, or the city name |
| localname | Local name of the boundary |
| longitude | Longitude of the boundary |
| latitude | Latitude of the boundary |
| label | Label of the boundary |
| code | Code of the boundary |
| children | Details of its sub-boundaries |
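Putting the attributes together, a TenantBoundary master entry might look roughly like the sketch below. This is an illustrative fragment assembled from the attribute table above, with sample names and coordinates, not a verbatim production file.

```js
// TenantBoundary master fragment (illustrative; structure assembled from the attributes above)
const tenantBoundaryMaster = {
  tenantId: "pb.amritsar",
  moduleName: "egov-location",        // assumed module name, verify in your MDMS repo
  TenantBoundary: [
    {
      hierarchyType: { code: "REVENUE", name: "Revenue" },
      boundary: {
        id: "1",
        boundaryNum: 1,
        name: "Amritsar",
        localname: "Amritsar",
        longitude: "74.8723",         // sample coordinates
        latitude: "31.6340",
        label: "City",
        code: "pb.amritsar",
        children: [
          { id: "2", boundaryNum: 1, name: "Zone 1", label: "Zone", code: "Z1", children: [] },
        ],
      },
    },
  ],
};
```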
The egov-location API’s can be used by any module which needs to store the location details of the tenant.
Get the boundary details based on boundary type and hierarchy type within the tenant boundary structure.
Get the geographical boundaries by providing appropriate GeoJson.
Get the tenant list in the given latitude and longitude.
To integrate, host of egov-location should be overwritten in helm chart.
/boundarys/_search should be added as the search endpoint for searching boundary details based on tenant Id, Boundary Type, Hierarchy Type etc.
/geography/_search should be added as the search endpoint. This method handles all requests related to geographical boundaries by providing appropriate GeoJson and other associated data based on tenantId or lat/long, etc.
/tenant/_search should be added as the search endpoint. This method tries to resolve a given lat, long to a corresponding tenant, provided there exists a mapping between the reverse geocoded city to tenant.
The MDMS Tenant boundary master file should be loaded in MDMS service.
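An illustrative boundary search call is sketched below; the host is a placeholder, and the query parameter names follow the boundary search described above but should be confirmed against the Swagger contract referenced later in this section.

```js
// Search boundaries of a tenant by hierarchy and boundary type (illustrative values)
const res = await fetch(
  "https://<egov-location-host>/boundarys/_search" +
    "?tenantId=pb.amritsar&hierarchyTypeCode=REVENUE&boundaryType=Locality",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ RequestInfo: { authToken: "<token>" } }),
  }
);
const { TenantBoundary } = await res.json();
```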
Title
Link
Local setup
Link
/boundarys/_search
/geography/_search
/tenant/_search
Please refer to the Swagger API contract for egov-location service to understand the structure of APIs and to have a visualisation of all internal APIs.
One of the applications in the DIGIT core group of services, MDMS aims to reduce the time developers spend writing code to store and fetch master data (primary data needed for module functionality) which has no business logic associated with it. Instead of writing APIs and creating tables in every service to store and retrieve data that seldom changes, the MDMS service keeps the data in a single location for all modules and serves it on demand with no more than three lines of configuration.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge on how to operate JSON data would be an added advantage to understand the service.
Adds master data for usage without the need to create master data APIs in every module.
Reads data from GIT directly with no dependency on any database services.
Environment Variables
Description
egov.mdms.conf.path
The default path of the folder where the master data files are stored
masters.config.url
The default URL of the file which contains the master-config values
Deploy the latest version of the MDMS service
Add the conf path for the file location
Add the master config JSON path
The MDMS service provides ease of access to master data for any service.
No time is spent writing repetitive code with no business logic.
To integrate, the host of egov-mdms-service should be overwritten in the helm chart.
egov-mdms-service/v1/_search should be added as the search endpoint for searching master data.
The MDMS client from eGov snapshots should be added as a Maven dependency in pom.xml for ease of access, since it provides the MDMS request POJOs.
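An illustrative search request body for this endpoint (the module, master and filter here are examples; RequestInfo is abbreviated):

```json
{
  "RequestInfo": {},
  "MdmsCriteria": {
    "tenantId": "pb",
    "moduleDetails": [
      {
        "moduleName": "common-masters",
        "masterDetails": [
          { "name": "Department", "filter": "[?(@.active == true)]" }
        ]
      }
    ]
  }
}
```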
egov-mdms sample data
master-config.json
egov-mdms-service/v1/_search
Goal: To onboard developers onto the XState-Chatbot code base so that they can modify existing flows or create new ones.
This document sticks to explaining the chatbot's core features and does not dive into the use cases implemented by the chatbot. There is another document dedicated to it.
NodeJS
PostgreSQL
Kafka (optional)
Build a chat flow to facilitate a user to interact with rainmaker modules
Link a chat flow with backend services
Deploy the latest version of xstate-chatbot
Configure /xstate-chatbot to be a whitelisted open endpoint in zuul
Add indexer-config to the egov-indexer to index all the telemetry messages
Environment Variable
Description
WHATSAPP_PROVIDER
The provider through which WhatsApp messages are sent and received. An adapter for ValueFirst has been written; if there is a new provider, a separate adapter will have to be implemented. A default console adapter is provided for developers to test the chatbot locally.
REPO_PROVIDER
The database used to store the chat state. Currently, an adapter for PostgreSQL is provided. An InMemory adapter is provided to test the chatbot locally.
SERVICE_PROVIDER
If its value is configured as eGov, the chatbot calls the backend rainmaker services. If the value is configured as Dummy, dummy data is used rather than fetching data from APIs. The Dummy option is provided for initial dialog development and should only be used locally.
SUPPORTED_LOCALES
A list of comma-separated locales supported by the chatbot.
Other configuration details are mentioned as part of the XState-Chatbot Integration Document.
This chatbot addresses the basic form-filling aspect of a chat flow: by collecting information from the user, an API call can be made to the rainmaker backend services to fulfil the user's request. It uses the concept of StateCharts (similar to state machines) to maintain the user's state in a chat flow and store the information provided by the user. XState is a JavaScript implementation of StateCharts, and all chat flows are coded using the XState framework.
This chatbot does not have any Natural Language Processing component. In the future, the chatbot can be extended to add such features.
XState is a JavaScript implementation of StateCharts, and there is detailed documentation available to study it. A few of the XState concepts used in the chatbot are listed below; basic knowledge of these concepts is necessary. They can also be picked up while going through the chat flow implementation of the pilot use cases of PGR and Bills.
Actions
onEntry
A few tips about using XState are listed below; these have been followed throughout the pilot chat flows. A sketch illustrating them follows this list.
If we want to move to any state that is not at the same hierarchical level, we should assign it a unique id value. Once it has an id value, we can address it using the # qualifier in the target attribute.
As ids should be unique, please make sure there aren't multiple states with the same id value. If there is a duplicate, the machine won't function as expected.
Any actions (like onEntry) should be surrounded by assign. This applies to almost all functions except the guard condition code snippets.
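A minimal sketch illustrating these tips (state names, the id value and the context field are made up for illustration):

```js
const { createMachine, assign } = require('xstate');

const machine = createMachine({
  id: 'chat',
  initial: 'menu',
  states: {
    menu: {
      // A state elsewhere in the hierarchy is targeted via its unique id
      // using the # qualifier in the target attribute.
      on: { ERROR: '#system-error' },
      // Per the tips above, onEntry logic is wrapped in assign.
      onEntry: assign((context, event) => {
        context.lastPrompt = 'menu';
        return context;
      })
    },
    error: {
      id: 'system-error', // must be unique across the whole machine
      type: 'final'
    }
  }
});
```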
All the interactions with the user, i.e. sending a message to the user and processing an incoming message from the user, are coded as states in the state machine. A good way to start is to test any chat flow with the supplementary react-app provided for developers to execute the state machine locally. (Please follow the guidelines in the README of the react-app.)
We have followed a few standard patterns to code chat interactions; please follow these patterns when coding any new chat flow. The patterns are explained below, and you can also study them by browsing through the code of the pilot use cases of PGR and Bills.
The chat states should only include dialog-specific code. Any code related to a backend service should be written as part of a separate …-service.js file.
Any code that doesn't include an asynchronous API call can be written as part of the onEntry function or an action.
If the function needs to make an API call, it should be written with the invoke-onDone pattern (see the sketch after this list). The asynchronous function should be written as part of the service file, and the consolidated data returned by it can be processed in the state of the dialog file.
Helper functions are written in the dialog.js file. It is advised to use those functions as much as possible rather than writing custom logic in dialog files.
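A sketch of the invoke-onDone pattern under these conventions (pgrService, its fetchComplaints function and the state names are hypothetical; the real service functions live in the project's …-service.js files):

```js
const { assign } = require('xstate');
const pgrService = require('./service/egov-pgr'); // hypothetical ...-service.js file

const fetchComplaints = {
  invoke: {
    // The asynchronous API call lives in the service file, not in the dialog.
    src: (context, event) => pgrService.fetchComplaints(context.user),
    onDone: {
      target: 'showComplaints',
      actions: assign((context, event) => {
        context.complaints = event.data; // consolidated data returned by the service
        return context;
      })
    }
  }
};
```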
Apart from the chat flow and its backend service API calls, a few other components are present in the project. These components do NOT need to be modified to code a new chat flow or change an existing one. They are listed below with a short description of each:
Session Manager: It manages the sessions of all the users on a server. It stores the user's state in a datastore, updates it, and reads it when any new message is received on the server. Based on the state of the user, it creates a state machine and sends the incoming message event to the state machine. It sanitises the state (any sensitive data like the name and mobile number of a user is removed) before storing the state in the datastore.
Repository: It is the datastore where the states of the users get stored. To reduce dependency, an in-memory repository is also provided, which can be used by configuring an environment variable. So to run the chatbot service, PostgreSQL isn’t a hard dependency, but it is advisable to use the PostgreSQL repo provider.
Channel Provider: There can be many different WhatsApp providers, any one of which is configured to be used. A separate console WhatsApp provider is present for the developer to test the chatbot server locally. A Postman collection to mimic receiving messages from a user is present in the project directory.
Localization: Every message to be sent to the user is stored within the chatbot. Localization service is not being used. These messages are present near the bottom of the dialog files. A separate localization-service.js is provided to get the messages for the localization codes for the messages that are not owned by the chatbot. For example, the PGR complaint types data is under the ownership of the PGR module, and the messages for such can be fetched from the egov-localization-service using the functions provided in the localization-service.js.
Service Provider: To ease initial dialog development, instead of coding API calls to the backend services, we can configure the chat flow to use a dummy service. This can be configured using an environment variable and by modifying the service-loader.js file.
Telemetry: The chatbot logs telemetry events to a Kafka topic. (Any sensitive data is masked before the events are indexed into ElasticSearch by egov-indexer.) The following events get logged:
Incoming message
Outgoing message
Transition of state
The Indexer service runs as a separate service designed to perform all the indexing tasks of the DIGIT platform. The service reads records posted on specific Kafka topics and picks the corresponding index configuration from the YAML file provided by the respective module. The objectives of the Indexer service are listed below.
To provide a one-stop framework for indexing data to Elasticsearch.
To provide for indexing live data, reindexing from one index to another, and indexing legacy data from the datastore.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of Elasticsearch
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Performs three major tasks, namely LiveIndex, Reindex and LegacyIndex.
LiveIndex: indexes the live transaction data on the platform. This keeps the ES data in sync with the DB.
Reindex: indexes data from one index to another. ES already provides this feature; the indexer does the same but with data transformation.
LegacyIndex: indexes legacy data from the tables to ES.
Provides the flexibility to index the entire object, a part of the object, or an entirely different custom object, all using one input JSON from modules.
Provides features for customizing the index JSON by field mapping, field masking, data enrichment through external APIs, and data denormalization using MDMS.
A one-stop shop for all ES index requirements, with easy-to-write and easy-to-maintain configuration files.
Designed as a consumer to save API overhead. The consumer configs are written from scratch to have complete control over consumer behaviour.
Step 1: Write the configuration as per your requirement. The structure of the config file is explained later in this document.
Step 2: Check in the config file to a remote location, preferably GitHub. Currently, for dev, we check the files into this folder: https://github.com/egovernments/configs/tree/DEV/egov-indexer
Step 3: Provide the absolute path of the checked-in file to DevOps to add it to the file-read path of egov-indexer. The file will be added to egov-indexer's environment manifest file so that it is read at application start-up.
Step 4: Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
For Indexer Configuration, please refer to the document in Reference Docs table given below.
a) POST /{key}/_index
Receives data and indexes it. There should be a mapping with the topic as {key} in the index config files.
b) POST /_reindex
This is used to migrate data from one index to another.
c) POST /_legacyindex
This runs the LegacyIndex job to index data from the DB. The URL of the service that the indexer calls to pick up the data must be mentioned in the request body.
For legacy indexing, and for LiveIndex of collection-service records, kafka-connect is used to do part of the work of pushing records to Elasticsearch. For more details, please refer to the document mentioned in the document list.
The URL shortening service is used to shorten long URLs. There may be a requirement to avoid sending very long URLs to users via SMS, WhatsApp, etc.; this service compresses the URL.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Compresses long URLs.
The converted short URL contains an id, which this service uses to identify and fetch the longer URL.
Environment Variable
Description
host.name
Host name to append in the short URL
db.persistance.enabled
Boolean flag; the short URL is stored in the database when this flag is set to TRUE.
Deploy the latest version of the URL shortening service.
Receives long URLs and converts them to shorter ones. The shortened URL contains a path to the redirect endpoint described next; when the user clicks on the shortened URL, the user is redirected to the long URL.
The shortened URL contains the path to this endpoint. The service uses the id embedded in the short URL to fetch the long URL, and responds by redirecting the user to it.
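As an illustration (the exact path and payload shape should be verified against the Swagger contract linked below), shortening a URL is a single call of roughly this shape:

```
POST /egov-url-shortening/shortener
{ "url": "https://<host>/citizen/payment?consumerCode=PT-1234&tenantId=pb.amritsar" }

Response: a short URL such as https://<host.name>/<id>
A GET on the short URL then redirects the user to the original long URL.
```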
Title
Link
Swagger API Contract
Local Setup
In the existing version of the chatbot, for the PGR complaint creation feature, the user has to select his/her city from a drop-down menu by visiting the mseva website. This significantly reduces user convenience, as the user is required to constantly switch pages. To overcome this inconvenience, the nlp-engine service is used. The service has an algorithm that uses fuzzy matching and pattern recognition to recognise the city provided by the user as input. Based on the user input, the cities having the highest match ratio with the input are returned as the output list. A list comprising all the city names in English, Punjabi and Hindi is used as the reference data for this service.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Python.
egov-mdms service is running and all the data related to the service is added to the MDMS repository.
egov-running service is running.
Provides a city fuzzy search feature which returns the list of cities having the highest match ratio with the input.
City fuzzy search supports input data in the English, Hindi and Punjabi languages.
Provides a locality fuzzy search feature which returns the list of localities having the highest match ratio with the input.
Environment Variables
Description
MDMS_MODULE_NAME
Contains the module name of MDMS required for nlp-engine.
CITY_MASTER
Contains the file name of the MDMS master file which contains the city names in various locales.
CITY_LOCALE_MASTER
Contains the file name of the MDMS master file which contains the tenantid of the cities present in the CityNames.json MDMS file.
STATE_LEVEL_TENANTID
Contains the state-level tenantid.
Add the MDMS configs required for the nlp-engine service (mdms folder) and restart the MDMS service.
Deploy the latest version of the nlp-engine service.
Whitelist the city and locality fuzzy search APIs.
The nlp-engine service is used to locate the user's city and locality using fuzzy string matching and pattern recognition.
It is currently integrated into the chatbots for locating the user's city and locality in the complaint creation use case.
This functionality can be extended to other entities and used for fuzzy search over those entities.
To integrate, the host of the nlp-engine service module should be overwritten in the helm chart.
/nlp-engine/fuzzy/city should be added as the fuzzy search endpoint for city search.
/nlp-engine/fuzzy/locality should be added as the fuzzy search endpoint for locality search.
Title
Link
NLP Chatbot
Link
/nlp-engine/fuzzy/city
/nlp-engine/fuzzy/locality
(Note: All the APIs are in the same Postman collection, therefore the same link is added in each row.)
XState-Chatbot is a revamped version of the chatbot which provides functionality for users to access PGR module services like filing complaints, tracking complaints, and receiving notifications over WhatsApp. It also allows users to view receipts and pay bills for the Property, Trade Licence, FireNOC, Water and Sewerage, and BPA service modules.
File PGR complaint
Track PGR complaint
Support images when filing complaints
Notifications to citizens when an employee performs any action on the complaint
Allow users to search and pay bills of different modules.
Allow users to search and view receipts of different modules.
Allow users to change the language of their choice for a better experience.
Push user interactions to Elasticsearch for telemetry.
The XState chatbot can be integrated with any other module to improve the ease of searching and viewing bills/past payment receipts, and to improve the speed and convenience of bill payment. It can be integrated with the PGR module to ease creation and tracking of complaints.
Increase in convenience and ease of making bill payments.
Increase in the number of users opting for online payment.
Improvement in demand collection efficiency.
Creates an additional channel for payment.
Removes dependency on the mobile/web app or counter.
The WhatsApp provider is a third-party service that sits between a user's WhatsApp client and the XState-Chatbot server. All messages coming from or going to the user pass through the WhatsApp provider. The chatbot calls the WhatsApp provider to send messages to the user; when a user responds with any WhatsApp message, the WhatsApp provider calls the chatbot service's configured endpoint with the details, e.g. the message the user sent, the sender's number, etc.
If any new WhatsApp provider is to be used with the chatbot, code must be written to convert the provider's incoming messages to the format the chatbot understands, and the final output from the chatbot must be converted to the WhatsApp provider's API request format.
Currently, the XState-Chatbot service uses ValueFirst as the WhatsApp provider. This requires provider-specific environment variables to be configured. If the provider changes, all these environment variables will also change. A few of those environment variables are stored as secrets, so those values need to be configured in env-secrets.yaml.
As this is a revamped version of the chatbot service, all of the secrets should already be present; there is no need to create new secrets.
The integration of PGR with the chatbot can be enabled and disabled by making changes in this file. By exporting the respective PGR service file, the PGR service feature can be enabled, and vice versa.
Configuration of PGR version in chatbot
To configure the PGR module version used by the XState-chatbot, the below variable values need to be changed in the environment file as per the requirement.
pgrVersion
pgrUpdateTopic
To configure PGR v2 in the XState chatbot, pgrVersion should be 'v2' and pgrUpdateTopic should be 'update-pgr-request'.
Configuration of city and locality search with nlp-search engine
To enable fuzzy search for city and locality selection in the PGR complaint flow, the variable nlp-geoSearch has to be set to true in the environment file. To use the nlp-search engine with the XState chatbot, make sure that a stable build is deployed and all the MDMS data is present for that particular environment. To know more about the nlp-search engine service, please refer to the reference document section.
Adding Information Image in PGR complaint creation and Open search information image
To configure the filestore id for the informational images, follow the steps mentioned below:
Download the images from the section Information Images for PGR and Open Search.
Upload the images to the filestore server using the upload file API from this Postman collection (https://www.getpostman.com/collections/bdb059c5af698f0d81d6).
For the PGR information image, mention the filestore id here in the environment file.
For the open search information image, mention the filestore id here in the environment file.
For example:
a) if supportedLocales: process.env.SUPPORTED_LOCALES || 'en_IN,hi_IN'
then valuefirst-notification-resolved-templateid: "12345,6789"
b) if supportedLocales: process.env.SUPPORTED_LOCALES || 'hi_IN,en_IN'
then valuefirst-notification-resolved-templateid: "6789,12345"
(Note: Neither list should be empty; each must contain at least one element.)
Template messages with buttons are maintained in the same way as described in the previous section (Configuration of push notification template messages).
There are two types of button messages:
Quick Reply
Call To Action
More details can be found in the ValueFirst documentation.
The integration of the bill payment and receipt search feature with the chatbot can be enabled and disabled by making changes in this file. By exporting the respective bill service and receipt service files, the payment and receipt search features can be enabled, and vice versa.
Configuration of module for Bill payment and Receipt search
To configure the list of modules that appear as options for payment and receipt search, add the module business service codes to the list present in the environment file.
For example:
If bill-supported-modules: "WS, PT, TL"
then the Water and Sewerage, Property, and Trade License modules would appear for bill payment and receipt search.
Also add the message bundle, validation, and service code for the locality searcher in the egov-bill and egov-receipt files.
Environment Variables
Description
WHATSAPP_BUSINESS_NUMBER
The mobile number to be used on the server.
VALUEFIRST_USERNAME
Username of the configured number for sending messages to the user through WhatsApp provider API calls.
VALUEFIRST_PASSWORD
Password of the configured number for sending messages to the user through WhatsApp provider API calls.
GOOGLE_MAPS_API_KEY
Maps API key to access the geocoding feature.
ROOT_TENANTID
Contains the state-level tenantid value.
SUPPORTED_LOCALES
Contains the list of languages supported by the chatbot. If a new language needs to be added to the chatbot, its locale needs to be added to this list.
PGR_VERSION
Contains the PGR version value to use (i.e. v1 or v2).
PGR_UPDATE_TOPIC
The respective PGR update Kafka topic name, depending on the PGR version. Example: if PGR_VERSION: 'v2', then PGR_UPDATE_TOPIC: 'update-pgr-request'.
BILL_SEARCH_LIMIT
Limit on the maximum number of bills shown on search.
RECEIPT_SEARCH_LIMIT
Limit on the maximum number of receipts shown on search.
COMPLAINT_SEARCH_LIMIT
Limit on the maximum number of complaints shown on search.
BILL_SUPPORTED_MODULES
Contains the list of modules to be used for bill payment and receipt search.
INFORMATION_IMAGE_FILESTORE_ID
Contains the filestore id of the informational image, which shows how to share the user's current location.
OPEN_SEARCH_IMAGE_FILESTORE_ID
Contains the filestore id of the open search informational image, which shows how to use the open-search pay feature for bill payment.
USER_SERVICE_HARDCODED_PASSWORD
Contains the fixed value of the login password and OTP. This value has to be configured in env-secrets.yaml.
GEO_SEARCH
Boolean flag to enable and disable city/locality nlp search.
Configuration of Telemetry File
Add this telemetry file to the config repo and mention the filename in the respective environment yaml file.
Cron job mdms entry:
Information Images for PGR and Open Search
Title
Link
Chatbot Message Localisation
nlp-search engine
Title
Link
/xstate-chatbot/message
/xstate-chatbot/reminder
/xstate-chatbot/status
v2 configuration details
The Collection service serves as a revenue collection platform for all the billing systems, supporting collection through cash, cheque, DD, and swipe machine. It enables payment for all services provided by the eGov platform at a single point for the citizen, and counter collection in the municipality alike.
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic, etc.
Following services should be up and running:
egov-localization
egov-mdms
egov-idgen
egov-url-shortening
billing-service
Allows citizens to create a payment.
Allows employees to create the payment for the citizen indirectly.
Provides facilities to capture partial and advance payments based on configs.
Allows payment cancellation to help with scenarios of bad cheques and other failed payments.
Integrates with billing-service for demand back-update of payment.
Deploy the latest version of the collection-services docker build.
The MDMS data configuration uses the same data updated by Billing-Service
Following are the properties in the application.properties
Collection service can be integrated with any organization or system that wants a payment system to keep track of its payments. Organizations can customize part of the application or its functionality based on their requirements.
Easy payments and tracking of payments.
Configurable functionalities according to client requirement
Customers can create a payment using the /payments/_create endpoint.
Actors on the system can keep track of payments using the /payments/_search endpoint.
Once the payment is done, if it encounters a technical issue outside the system, it can be cancelled with /payments/_workflow.
For employees to access the payments API, the respective module name should be appended after the payments API path, e.g. /payments/PT/_workflow, where PT refers to the Property module.
Doc Links
API List
The consumer sometimes needs additional amounts (amendments) added to their bill due to reasons arising outside the system. The addition of amounts happens with respect to the consumer code of the entity in the product (PT, WS, etc.); any unpaid demand in the system is a candidate for amendment.
Prior Knowledge of Billing-Service in Digit framework.
Amendment mainly works with two types of functionality as follows:
Amendment
Demand
Bill Amendment provides a separate flow, with workflow and validation, for adding additional amounts to existing demands, which until now could be done only through the respective modules. An amendment is allowed only when the reason to add or reduce the amount on an entity's existing bill arises outside the system. The reasons are listed below:
Court case settlement
One time waiver
Write-offs
DCB correction (Old demands in paid status)
Remission for Property Tax
Criteria:
There are certain prerequisites to create an amendment:
Presence of a demand in the billing system
One of the reasons listed above
Valid document proof for the reason
No other amendment already in workflow
Procedure:
The process of adding an amendment is as follows.
There are two scenarios for how an amendment is completed, based on the paid status of the existing demands in the system.
1. When the demand is unpaid/partially paid
Create a demand (or use an existing demand) with demand detail → DD1.
Do not pay the bill, or make a partial payment.
Create an amendment for the same consumer code (with demand detail → DD2).
Approve the amendment; the response should return an amendment with status CONSUMED.
Search the demand or fetch the bill for the consumer code; the demand/bill should contain the demand details of the demand and the amendment together, DD1 and DD2 in the same demand/bill.
2. When the demand is completely paid
Create a demand and make complete payment, or choose a consumer code which is fully paid.
Create an amendment (with demand detail → DD1).
Approve the amendment; the response should be APPROVED this time.
Create a new demand for the consumer code (with demand detail → DD3); the demand response should contain the two demand details DD1 and DD3 saved to the demand.
Now the amendment search will return CONSUMED status after the demand is created.
IMPACT: Does not impact any other functionality other than adding demand details to demands on APPROVAL.
IMPACTED BY: Existence of demands in the system.
WORKFLOW CONFIG:
Amendment integration helps the respective organization add additional value to the demand without any change in the system.
Easy to create, with a simple process for updating demands.
Helps ease changes into the system which are not part of normal functionality, such as amendment of bills in case of legal requirements.
This is integrated into the billing system by default.
The amendment facility can be used in case of a legal issue to add values to existing demands using /amendment/_create, and /amendment/_update can be used to cancel the created ones or update the workflow if configured.
{yet to be added}
API Definition
API LIST
The main objective of the billing module is to serve bills for all revenue business services. To serve a bill, the billing service requires a demand. Demands are prepared by the revenue modules and stored by billing, based on which it generates the bill.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of KAFKA
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of the demand-based systems.
Following services should be up and running:
user
MDMS
Id-Gen
URL-Shortening
notification-sms
eGov billing service creates and maintains demands.
Generates bills based on demands.
Updates the demands from payment when the collection service takes a payment.
Deploy the latest image of the billing service available.
In the MDMS data configuration, the following master data is needed for the functionality of the billing.
MDMS
Business Service JSON
TAX-Head JSON
Tax-Period JSON
Billing service can be integrated with any organization or system that wants a demand-based payment system.
Easy to create and simple process of generating bills from demands
The amalgamation of bills period-wise for a single entity like PT or Water connection.
Amendment of bills in case of legal requirements.
Customers can create a demand using the /demand/_create endpoint.
Organizations or systems can search demands using the /demand/_search endpoint.
Once the demand is raised, the system can call the /demand/_update endpoint to update the demand as needed.
Bills can be generated using /bill/_fetchbill, a self-managing API that generates a new bill only when the old one expires.
Bills can be searched using /bill/_search.
The amendment facility can be used in case of a legal issue to add values to existing demands using /amendment/_create, and /amendment/_update can be used to cancel the created ones or update the workflow if configured.
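For example, a hedged sketch of a fetch-bill call (the version segment and query parameter names follow common DIGIT usage and should be confirmed against the API list below):

```
POST /billing-service/bill/v2/_fetchbill?tenantId=pb.amritsar&consumerCode=PT-1234&businessService=PT
Body: { "RequestInfo": { ... } }
```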
Interaction Diagram V1.1:
Doc Links
API List
What is apportioning?
Adjusting the receivable amount against the individual tax heads.
Types of apportioning V1.1
Default order-based apportioning (based on the apportioning order, the received amount is adjusted against each tax head). V1.1
Types of apportioning V1.2: (TBD)
Proportionate apportioning (the total receivable is adjusted across all tax heads equally)
Order- and percentage-based apportioning (the total receivable is adjusted based on the order and the percentage defined for each tax head).
Principle of apportioning
The basic principle of apportioning is that if the full amount of a bill is paid, each individual tax head should be nullified by its corresponding adjusted amount.
Example: Case 1: When there are no arrears, all tax heads belong to the current purpose:
Case 2: Apportioning with two years of arrears. If the current financial year is 2014-15, below are the demands.
If no payment is made and we generate the demand in 2015-16, the demand structure will be as follows:
The eGov Payment Gateway acts as a liaison between eGov apps and external payment gateways, facilitating payments, reconciliation of payments, and lookup of transaction statuses.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has pg service persister config path added in it
PSQL server is running and the database is created to store transaction data.
Create or initiate a transaction, to make a payment against a bill.
Make payment for multiple bill details [multi module] for a single consumer code at once.
A transaction is initiated with a call to the transaction/_create API; various validations are carried out to ensure the sanctity of the request.
The response includes a generated transaction id and a redirect URL to the payment gateway itself.
Various validations are carried out to verify the authenticity of the request and the status is updated accordingly. If the transaction is successful, a receipt is generated for the same.
Reconciliation is carried out by two jobs scheduled via a Quartz clustered scheduler.
The early reconciliation job is set to run every 15 minutes [configurable via app properties] and is aimed at reconciling transactions which were created 15-30 minutes ago and are in PENDING state.
The daily reconciliation job is set to run once per day and is aimed at reconciling all transactions in PENDING state, except those created in the last 30 minutes.
Axis, PhonePe and Paytm payment gateways are implemented.
Additional gateways can be added by implementing the Gateway interface. No changes are required to the core packages.
The following properties in the application.properties file of egov-pg-service have to be added and set to default values after integrating with a new payment gateway. The table below shows the properties for the AXIS payment gateway; the same relevant properties need to be added for other payment gateways.
axis.active
Boolean flag to set the payment gateway active/inactive
axis.currency
Currency representation for merchant, default(INR)
axis.merchant.id
Payment merchant Id
axis.merchant.secret.key
Secret key for payment merchant
axis.merchant.user
Username to access the payment merchant for transactions
axis.merchant.pwd
Password of the user to access the payment merchant
axis.merchant.access.code
Access code
axis.merchant.vpc.command.pay
Pay command
axis.merchant.vpc.command.status
Status command
axis.url.debit
URL for making the payment
axis.url.status
URL to get the status of the transaction
Deploy the latest version of egov-pg-service
Add pg service persister yaml path in persister configuration
The egov-pg-service acts as the communication link between eGov apps and external payment gateways.
Records every transaction against a bill.
Records payments for multiple bill details for a single consumer code at once.
To integrate, the host of egov-pg-service should be overwritten in the helm chart.
/pg-service/transaction/v1/_create should be added in the module to initiate a new payment transaction on successful validation.
/pg-service/transaction/v1/_update should be added as the update endpoint to update an existing payment transaction. This endpoint is called only by payment gateways to update the status of payments. It verifies the authenticity of the request with the payment gateway and forwards all query params received from the payment gateway.
/pg-service/transaction/v1/_search should be added as the search endpoint for retrieving the current status of a payment in our system.
Title
Link
Swagger API Contract
Title
Link
/pg-service/transaction/v1/_create
/pg-service/transaction/v1/_update
/pg-service/transaction/v1/_search
/pg-service/gateway/v1/_search
(Note: All the APIs are in the same Postman collection, therefore the same link is added in each row.)
Whenever a user logs in, an authorization token and a refresh token are generated for them. Using the auth token, the client can make REST API calls to the server to fetch data. The auth token has an expiry period; once it expires, it cannot be used to make API calls, and the client has to generate a new authorization token. This is done by authenticating the refresh token with the server, which then generates and sends a new authorization token to the client. The refresh token avoids the need for the client to log in again whenever the auth token expires.
The refresh token also has an expiry period; once it expires, it cannot be used to generate a new authorization token, and the user has to log in again to get a new pair of authorization and refresh tokens. Generally, the duration before expiry of the refresh token is much longer than that of the auth token. If the user logs out of the account, both the auth token and the refresh token become invalid.
Param
Description
access.token.validity.in.minutes
Duration in minutes for which the authorization token is valid
refresh.token.validity.in.minutes
Duration in minutes for which the refresh token is valid
API
Description
/user/oauth/token
Used to start the session by generating an auth token and a refresh token from the username and password, using grant_type as password. The same API can be used to generate a new auth token from the refresh token, by using grant_type as refresh_token and sending the refresh token with the key refresh_token.
/user/_logout
This API is used to end the session. The access token and refresh token become invalid once this API is called. The auth token is sent as a param in the API call.
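A hedged sketch of the refresh flow described above (headers and any additional parameters should be confirmed against the user service contract):

```
POST /user/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token&scope=read&refresh_token=<refresh-token>
```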
The Apportion service is used to apportion the amount paid against a bill among the different tax heads, based on the implemented algorithm. The default algorithm uses the order of the tax heads: the tax head with the lowest order is apportioned off first, and the tax head with the highest order is apportioned last.
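A minimal sketch of this default order-based algorithm (illustrative only, not the service's actual implementation; the real service also handles advance amounts and other edge cases):

```js
// Apportion a paid amount across tax heads, lowest `order` first.
// Negative heads (rebates/exemptions) increase the amount left to apportion.
function apportion(amountPaid, taxHeads) {
  let remaining = amountPaid;
  const sorted = [...taxHeads].sort((a, b) => a.order - b.order);
  for (const head of sorted) {
    if (head.amount < 0) {
      head.adjustedAmount = head.amount; // rebate/exemption is adjusted fully
      remaining -= head.amount;          // subtracting a negative adds headroom
    } else {
      head.adjustedAmount = Math.min(head.amount, remaining);
      remaining -= head.adjustedAmount;
    }
  }
  return remaining; // a non-zero remainder would be treated as excess/advance
}
```

Running this on the full-payment example from the billing section (payment of 2000 against Exm -250, Rebate -250, Cess 500, Interest 500, Penalty 500, Pt_tax 1000) nullifies every head and leaves a remainder of 0.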
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has apportion persister config path added in it
PSQL server is running and database is created to store apportion audit data
Apportion payment in tax heads of bill
Apportion advance amount in tax heads of demand during demand creation
Deploy the latest version of egov-apportion-service service
Add apportion persister yaml path in persister configuration
There is no separate configuration required; only the TaxHead master configured in the billing service is used.
Any payment service which wants to divide the paid amount into different tax head buckets can integrate with apportion service.
Apportions amount in tax heads
To integrate, the host of egov-apportion-service should be overwritten in helm chart
/apportion-service/v2/bill/_apportion should be called to apportion the bill
/apportion-service/v2/demand/_apportion should be called to apportion advance amount in demands
(Note: All the APIs are in the same Postman collection, therefore the same link is added in each row.)
This section provides technical details about business service setup, configuration, deployment, and API integration.
This document aims to facilitate communication between the software developers and whoever is localising the chatbot messages. The goal is to make it clear and as unambiguous as possible.
The Google Sheet containing all the messages with the codes is:
The project is organised such that all the messages are contained within the files present inside the /machine directory. The /service directory present inside it also includes files that can contain localization messages.
Guidelines to be followed by developers
(According to the standard pattern followed in the project, all the localization messages will be present near the end of the file in a JavaScript object named “messages”.)
Developers will be the ones first filling up the sheet with codes (and the English version of the messages). Below are the guidelines to be followed when writing the codes in the sheet:
The standard separator to be used is .(dot)
The first part is the filename, e.g. “pgr.” when the filename is pgr.js.
Use “service.” as a prefix when the file is present inside the /service directory.
In the /service directory, filenames are like egov-pgr.js
For localization messages contained in those files, instead of writing “egov-pgr” just write “pgr”
So the prefix for such files would be “service.pgr.”
All the message bundles are present in the “messages” object near the end of the file. They are organized in a pattern in the JS object, like fileComplaint.complaintType2Step.category.question.
The corresponding localization code for such a message bundle in the sheet would be “pgr.fileComplaint.complaintType2Step.category.question”, where the leading “pgr.” is the prefix for the file name.
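For instance, a sketch of what this looks like in code (the message text and locale values are illustrative):

```js
// near the end of pgr.js
const messages = {
  fileComplaint: {
    complaintType2Step: {
      category: {
        question: {
          en_IN: 'Please select a complaint category',
          hi_IN: '<Hindi translation>'
        }
      }
    }
  }
};
// corresponding sheet code: "pgr.fileComplaint.complaintType2Step.category.question"
```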
Once the localization codes (and the English version of the messages) have been written correctly in the sheet, it should be easy to add the new messages in the corresponding new column. Some guidelines to follow when adding new messages:
The parameter names are written within {{}} (double curly brackets)
The content inside these curly brackets should be written in English even when writing messages for any new language
PDFMake (https://github.com/bpampuch/pdfmake): for generating PDFs
Refer to the MDMS data config from here.
Refer to the integration details and explanation.
Refer to the billing-service config for MDMS data; the amendment makes use of the same data set.
Property
Value
Remarks
collection.receipts.search.paginate
true/false
Setting this property to true paginates the receipt search results, returning them in pages containing a certain number of records.
is.payment.search.uri.modulename.mandatory=true
TRUE/FALSE
Makes the module name in the URI path mandatory.
collection.receipts.search.default.size
A certain number (say 30)
Returns 30 records at a time; the next 30 results are on the next page.
collection.is.user.create.enabled
true/false
Setting this property to true enables the creation of a user along with receipt creation.
receiptnumber.idname
This property is used for the creation of the receipt number using the ID-GEN service.
receiptnumber.servicebased
true/false
If servicebased is set to false, the default state-level format is used for the receipt number; if it is set to true, the format for the receipt number has to be mentioned in MDMS.
receiptnumber.state.level.format
[cy:MM]/[fy:yyyy-yy]/[SEQ_COLL_RCPT_NUM]
Default state-level format for the receipt number.
collection.payments.search.paginate
true/false
Setting this property to true paginates the payment search results, returning them in pages containing a certain number of records.
egov.collection.payment-create
The Kafka topic to which the record is pushed/from which it is pulled when a payment is created.
egov.collection.payment-cancel
The Kafka topic to which the record is pushed/from which it is pulled when a payment is cancelled.
egov.collection.payment-update
The Kafka topic to which the record is pushed/from which it is pulled when a payment is updated.
Title
Link
Billing-service
Id-Gen service
url-shortening
MDMS
Title
Link
/payments/_create
/payments/_update
/payments/_workflow
Title
Link
Collection Service
Billing Service
API Swagger Documentation
Title
Link
/apportion-service/v2/bill/_apportion
/apportion-service/v2/demand/_apportion
/amendment/_create, _update
bs.businesscode.demand.updateurl
Each module's application calculator should provide its own update URL; if it is not present, a new bill will be generated without making any changes to the demand.
bs.bill.billnumber.format
BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}]
IdGen format for the bill number
bs.amendment.idbs.bill.billnumber.format
BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}]
is.amendment.workflow.enabled
true/false
Enables/disables the workflow of bill amendment.
Title
Link
Id-Gen service
url-shortening
MDMS
Title
Link
/demand/_create, _update, _search
/bill/_fetchbill, _search
/amendment/_create, _update
TaxHead
Amount
Order
Full Payment(2000)
Partial Payment1(1500)
Partial payment2(750)
Partial payment2 with rebate(500)
Pt_tax
1000
6
1000
1000
750
750
AdjustedAmt
1000
-250
-750
-750
RemainingAMTfromPayableAMT
0
0
0
0
Penality
500
5
500
500
AdjustedAmt
500
-500
RemainingAMTfromPayableAMT
1000
250
Interest
500
4
500
500
AdjustedAmt
500
-500
RemainingAMTfromPayableAMT
1500
750
Cess
500
3
500
500
AdjustedAmt
500
-500
RemainingAMTfromPayableAMT
2000
1250
Exm
-250
1
-250
-250
AdjustedAmt
-250
250
RemainingAMTfromPayableAMT
2250
1750
Rebate
-250
2
-250
-250
AdjustedAmt
-250
250
RemainingAMTfromPayableAMT
2500
750
TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose
Pt_tax | 1000 | 2014 | 2015 | 6 | Current
AdjustedAmt | 0
Penalty | 500 | 2014 | 2015 | 5 | Current
AdjustedAmt | 0
Interest | 500 | 2014 | 2015 | 4 | Current
AdjustedAmt | 0
Cess | 500 | 2014 | 2015 | 3 | Current
AdjustedAmt | 0
Exm | -250 | 2014 | 2015 | 1 | Current
AdjustedAmt | 0
TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose
Pt_tax | 1000 | 2014 | 2015 | 6 | Arrear
AdjustedAmt | 0
Pt_tax | 1500 | 2015 | 2016 | 6 | Current
AdjustedAmt | 0
Penalty | 600 | 2014 | 2015 | 5 | Arrear
AdjustedAmt | 0
Penalty | 500 | 2015 | 2016 | 5 | Current
AdjustedAmt | 0
Interest | 500 | 2014 | | 4 | Arrear
AdjustedAmt | 0
Cess | 500 | 2014 | | 3 | Arrear
AdjustedAmt | 0
Exm | -250 | 2014 | | 1 | Arrear
AdjustedAmt | 0
Environment Variables
Description
egov.apportion.default.value.order
If set to true, negative amounts are apportioned first, irrespective of tax head order.
DIGIT offers key municipal services such as Public Grievance & Redressal, Trade License, Water & Sewerage, Property Tax, Fire NOC, and Building Plan Approval.
The inbox service is an aggregation service which aggregates data of municipal services and workflow based on given complex search criteria, and returns applications and workflow data in a paginated manner. The service also returns the total count matching the search criteria.
This service allows searching both the module objects and the processInstance (workflow record) based on the provided criteria for any of the municipal services. For this, it uses a module-specific configuration stored in application.properties as a key-value map, where the key is the businessService name and the value is the configuration map. A sample configuration is shown after the field definitions below.
Here, the keys of the config map are the business services of the PT module for which the inbox has to be configured. Inside the search definition -
searchPath - Points to the search URL of the municipal module.
dataRoot - The search response key returned by the module search, e.g. the Property module search returns response objects inside the “Properties” key.
applNosParam - The parameter with which the workflow search is called once the module objects are retrieved from the module search. This parameter is the field on which the module table is joined with the workflow process instance table, e.g. in the Property module it is “acknowldgementNumber”.
businessIdProperty - The parameter with which we search module objects when moduleSearchCriteria is empty, by performing the workflow search first. Again, this parameter is the field on which the module table is joined with the workflow process instance table, e.g. in the Property module it is “acknowldgementNumber”.
applsStatusParam - The application status field name for the module upon which the search is being performed, e.g. in the Property module it is “status”.
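A hedged sketch of one entry of this map (the businessService key and the host/URL are placeholders; the actual values live in application.properties):

```json
{
  "PT.CREATE": {
    "searchPath": "http://property-services:8080/property-services/property/_search",
    "dataRoot": "Properties",
    "applNosParam": "acknowldgementNumber",
    "businessIdProperty": "acknowldgementNumber",
    "applsStatusParam": "status"
  }
}
```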
To provide pagination and total count across multiple modules, the inbox service is integrated with the searcher. The searcher provides the list of ids and the total count of applications; based on those, further enrichment is done by the inbox service and the results are returned to the API. Sample configuration links for the PT and TL modules are attached below:
Details will be updated soon...
The Collection service serves as a revenue collection platform for all the billing systems, supporting collection through cash, cheque, DD, and swipe machine. It enables payment for all services provided by the eGov platform at a single point for the citizen, and counter collection in the municipality alike.
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic, etc.
Following services should be up and running:
egov-localization
egov-mdms
egov-idgen
egov-url-shortening
billing-service
Allows citizens to create a payment.
Allows employees to create the payment for the citizen indirectly.
Provides facilities to capture partial and advance payments based on configs.
Allows payment cancellation to help with scenarios of bad cheques and other failed payments.
Integrates with billing-service for demand back-update of payment.
Deploy the latest version of the collection-services docker build.
The MDMS data configuration uses the same data updated by Billing-Service
Billing Service | Configuration Details: Refer to the MDMS data config from here.
Following are the properties in the application.properties
Property
Value
Remarks
collection.receipts.search.paginate
true/false
Setting this property to true paginates the receipt search results, returning them in pages containing a certain number of records.
is.payment.search.uri.modulename.mandatory=true
TRUE/FALSE
Makes the module name in the URI path mandatory.
collection.receipts.search.default.size
A certain number (say 30)
Returns 30 records at a time; the next 30 results are on the next page.
collection.is.user.create.enabled
true/false
Setting this property to true enables the creation of a user along with receipt creation.
receiptnumber.idname
This property is used for the creation of the receipt number using the ID-GEN service.
receiptnumber.servicebased
true/false
If servicebased is set to false, the default state-level format is used for the receipt number; if it is set to true, the format for the receipt number has to be mentioned in MDMS.
receiptnumber.state.level.format
[cy:MM]/[fy:yyyy-yy]/[SEQ_COLL_RCPT_NUM]
Default state-level format for the receipt number.
collection.payments.search.paginate
true/false
Setting this property to true paginates the payment search results, returning them in pages containing a certain number of records.
egov.collection.payment-create
The Kafka topic to which the record is pushed/from which it is pulled when a payment is created.
egov.collection.payment-cancel
The Kafka topic to which the record is pushed/from which it is pulled when a payment is cancelled.
egov.collection.payment-update
The Kafka topic to which the record is pushed/from which it is pulled when a payment is updated.
Collection service can be integrated with any organization or system that wants a payment system to keep track of its payments. Organizations can customize part of the application or its functionality based on their requirements.
Easy payments and tracking of payments.
Configurable functionalities according to client requirement
Customers can create a payment using the /payments/_create endpoint.
Actors on the system can keep track of payments using the /payments/_search endpoint.
Once the payment is done, if it encounters a technical issue outside the system, it can be cancelled with /payments/_workflow.
For employees to access the payments API, the respective module name should be appended after the payments API path, e.g. /payments/PT/_workflow, where PT refers to the Property module.
Port-forward the collection-service to the current environment where the IFSCCODE bankdetails data is to be migrated. Find the sample command below: kubectl port-forward collection-services-76b775f976-xcbt2 8055:8080 -n egov
Import the Postman collection from the API list which refers to /preexistpayments/_update and run it against the same localhost to which we port-forwarded using the above command.
Expected result: in the EGCL_PAYMET table, for those records where IFSCODE data is present, EGCL_PAYMET.ADDITIONALDETAILS bankdetails will be updated.
Ex: For IFSCCODE UCBA0003047, the response from the API https://ifsc.razorpay.com/UCBA0003047 will be updated in EGCL_PAYMET.ADDITIONALDETAILS as {"bankDetails": {"UPI": true, "BANK": "UCO Bank", "CITY": "BHIKHI", "IFSC": "UCBA0003047", "IMPS": true, "MICR": "151028452", "NEFT": true, "RTGS": true, "STATE": "PUNJAB", "SWIFT": "", "BRANCH": "BHIKHI", "CENTRE": "MANSA", "ADDRESS": "ADJOINING HP PETROL PUMP MANSA ROADDISTRICT MANSA", "BANKCODE": "UCBA", "DISTRICT": "MANSA", "CONTACT": "+918288822548"}}
Billing-Collection Integration: Refer to the integration details and explanation.
Title
Link
Billing-service
Id-Gen service
url-shortening
MDMS
Title
Link
/payments/_create
/payments/_update
/payments/_workflow
/preexistpayments/_update
DSS has two sides to it. One is the process through which data is pooled into ElasticSearch; the other is the way the data is fetched, aggregated, computed, transformed and served. As this revolves around a variety of data sets, it needs to be configurable, so that when a new scenario is introduced tomorrow, it is just a configuration away from being included in this process flow.
This document explains how to define the configurations for both sides of DSS: the Analytics and Ingest pipeline services.
Ingest: Microservice which runs as a pipeline and validates, transforms and enriches the incoming data, then pushes it to an ElasticSearch index.
Analytics: Microservice which is responsible for building, fetching, aggregating and computing the data on ElasticSearch into a consumable data response, which is later used for visualizations and graphical representations.
JOLT: JSON-to-JSON transformation library written in Java, where the "specification" for the transform is itself a JSON document.
Modules / Domain Level: These are the services in this context. Each of the services, such as Property Tax, Trade License, Water and Sewerage, is considered a module/domain.
Chart: Each individual graphical representation is considered a chart. For example, a metric of total collection is considered a chart.
Visualization: A group of different charts is considered a visualization. For example, the group of total collection, target collection and target achieved is considered a metric collection of charts and thus becomes a visualization.
Below is the list of configurations -
Topic Context Configurations
Validator Schema
JOLT Transformation Schema
Enrichment Domain Configuration
JOLT Domain Transformation Schema
Descriptions
Topic Context Configurations
Topic Context Configuration is an outline that defines which data is received on which Kafka topic.
The Indexer service and many other services send out data on different Kafka topics. If the Ingest service is to receive that data and pass it through the pipeline, the context and the version of the data being received have to be set. This configuration is used to identify which Kafka topic the data was consumed from and what the mapping for it is.
Click here for Full Configuration
Parameter Name
Description
topic
Holds the name of the Kafka Topic on which the data is being received
dataContext
Context Name which needs to be set for further actions in the pipeline
dataContextVersion
The version of the data structure, set here as there might be differently structured data at different points in time
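An illustrative entry combining these three parameters (the topic name here reuses a topic mentioned elsewhere in this document; see the full configuration linked above for real values):

```json
[
  {
    "topic": "egov.collection.payment-create",
    "dataContext": "collection",
    "dataContextVersion": "v1"
  }
]
```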
Validator Schema
Validator Schema is a configuration schema library from Everit. By validating the data against this schema, it ensures that the data abides by the rules and requirements defined in the schema.
Click here for an example configuration
JOLT Transformation Schema
JOLT is a JSON-to-JSON transformation library. It is used to change the structure of the data and transform it in a generic way.
Transformation schemas are written for each data context; the data is transformed against the schema to obtain the transformed data.
Follow the slide deck for JOLT Transformations
Click here for an example configuration
Enrichment Domain Configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, the Collection domain config is picked, and based on the business type specified in the data, the right config is chosen.
In order to enrich the Collection data, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
Click here for an example configuration
Parameter Name
Description
id
Unique Identifier for the Configuration within the configuration document
businessType
This defines which kind of domain/service the data relates to. Based on this business type, the query and enhancements are decided
indexName
Based on the business type, this defines which index has to be queried to get the enhancements
query
Query to execute to get the Domain Level Object is defined here.
targetReferences
sourceReference
Fields that act as variables for fetching the domain-level objects are defined here, along with where their values have to be picked from
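Putting the parameters above together, an illustrative sketch of one enrichment entry could look as follows; the index name, query and reference paths are assumptions for illustration only, not the actual config.

{
  "id": "1",
  "businessType": "collection",
  "indexName": "property-services",
  "query": "{\"query\":{\"match\":{\"Data.propertyId\":\"$propertyId\"}}}",
  "targetReferences": ["dataObject.propertyUsageType"],
  "sourceReference": ["propertyId=dataObject.consumerCode"]
}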
JOLT Domain Transformation Schema
As part of enrichment, once the domain-level object is obtained, the complete document may not be needed in the final data product.
Only those parameters which should be or can be used for aggregation and representation are to be retained; the others are to be discarded.
To do that, JOLT is used again, with schemas written to keep the required fields and discard the unwanted ones.
The above configuration is used to transform the data response in the enrichment layer.
Click here for an example configuration
Use case: JOLT Transformation Schema for collection v2
The JOLT transformation schema for payment-v1 is taken as a use case to explain the context collection and context version v2. The payment records are processed/transformed with this schema. The schema supports splitting the billing records into independent new records, so if there are 2 bill items in the incoming collection/payment data, this results in 2 collection records.
Click here for an example configuration
Here $i is the variable value that gets incremented for the number of paymentDetails records,
and $j is the variable value that gets incremented for the number of billDetails records.
Note: For Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties (or in the environment configuration):
es.push.direct=false
Below is the list of configurations
Chart API Configuration
Master Dashboard Configuration
Role Dashboard Mappings Configuration
Description
Chart API Configuration
Each visualization has its own properties, and each comes from a different data source (sometimes a combination of different data sources).
The Chart API Configuration document is used to configure each visualization and its properties.
In it, the Visualization Code serves as the key, and its properties are configured as part of the configuration and are easily changeable.
Click here for an example configuration
Parameter Name
Description
Key (e.g: totalApplication)
This is the Visualization Code. This key is referred to in further visualization configurations, and it is the key the client application uses to indicate which visualization is needed for display.
chartName
The name of the Chart is used as a label on the dashboard; it is a detailed name. In this configuration, the chart name is the localization code that is used by the client side.
queries
Some visualizations are derived from a single data source, while others are derived from multiple data sources combined together to get a meaningful representation. The aggregation queries used to fetch the right data in the right aggregated format are configured here.
queries.module
The module / domain level on which the query should be applied. Property Tax is PT, Trade License is TL. If the query applies across all modules, the module has to be defined as COMMON
queries.indexName
The name of the index upon which the query has to be executed is configured here.
queries.aggrQuery
The aggregation query itself is added here. Based on the module and the index name specified, this query is attached to the filter part of the complete search request and then executed against that index
queries.requestQueryMap
The client request carries certain fields on which the data is to be filtered. The parameters specified in the client request differ from the parameters in the indexed documents. This mapping maps the parameters of the request to the parameters of the Elasticsearch document
queries.dateRefField
Each of these modules has a separate index, and each index has its own date field.
When a date filter is applied to these visualizations, it has to be applied against each index's own date reference field. The date field for each index is maintained in this configuration parameter
chartType
As there are different types of visualizations, this field defines the type of chart / visualization that this data should be represented as.
Chart types available are:
metric - represents the aggregated amount/value for the records filtered by the aggregate ES query
pie - represents the aggregated data on grouping; this can be used to represent any line graph, bar graph, pie chart or donut
line - represents data on date histograms or date groupings
perform - represents grouped data performance-wise
table - represents plots and values, with headers for the grouped-on field and a list of its key-value pairs
xtable - an advanced form of table, with the additional capability of dynamically adding header values
valueType
The values sent to plot might be a percentage, sometimes an amount and sometimes just a count. To represent them and differentiate numbers from amounts from percentages, this field indicates the type of value that this visualization will be sending.
action
Some visualizations are not just aggregations of a data source; in some cases a post-aggregation computation has to be done. For example, in the case of Top 3 Performing ULBs, the target and total collection are obtained and then the percentage is calculated.
In such cases, the action that has to be performed on the data obtained is defined in this parameter.
documentType
The type of document upon which the query has to be executed is defined here.
drillChart
If there is a drill down on the visualization, then the code of the Drill Down Visualization is added here.
This will be used by Client Service to manage drill-downs
aggregationPaths
All the queries have aggregation names in them. To fetch the value out of each aggregation response, the name of the aggregation in the query is used. These aggregation paths hold the names of those aggregations.
_comment
In order to display information on the “i” symbol of each visualization, Visualization Information is maintained in this field.
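Tying the parameters together, a hypothetical metric-chart entry might look like the sketch below; the visualization code, index name and queries are illustrative assumptions, and the linked ChartApiConfig.json holds the real entries.

"totalCollection": {
  "chartName": "DSS_TOTAL_COLLECTION",
  "queries": [
    {
      "module": "COMMON",
      "indexName": "dss-collection_v2",
      "aggrQuery": "{\"aggs\":{\"AGGR\":{\"sum\":{\"field\":\"dataObject.totalAmount\"}}}}",
      "requestQueryMap": "{\"tenantId\":\"dataObject.tenantId\"}",
      "dateRefField": "dataObject.transactionDate"
    }
  ],
  "chartType": "metric",
  "valueType": "amount",
  "action": "",
  "documentType": "_doc",
  "drillChart": "none",
  "aggregationPaths": ["AGGR"],
  "_comment": "Illustrative sketch: total collection amount across all modules"
}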
Master Dashboard Configuration
Master Dashboard Configuration is the main configuration that defines the dashboards to be painted on the screen.
It includes all the visualizations, their groups, the charts that come within them, and even their dimensions (height and width).
Click here for an example configuration
Parameter Name
Description
name
Name of the Dashboard which has to be displayed as Page Heading
id
Unique Identifier of the Dashboard which should be used later for Querying each of these Visualizations
isActive
Active Indicator which can be used to quickly disable a dashboard if required.
style
Style of the Dashboard. Whether it should be a linear one or a tabbed one. This information is maintained in this parameter.
visualizations
The list of visualizations that are to be displayed in the Dashboard is listed out here.
visualizations.row
The row identifier for each visualization is mentioned here
The name of the individual visualization is added here
visualizations.vizArray
The list of Charts within the Visualization is specified in this list.
A group of charts is given an ID for its placement on the dashboard. This unique identifier is maintained in this field.
A group of charts is given a name that can be displayed for the group on the dashboard in that row.
visualizations.vizArray.dimensions
Each of these groups of charts is given a dimension, based on which it is placed in a specific row of the dashboard
visualizations.vizArray.vizType
As there are multiple charts grouped into one visualization, the type of visualization needs to be specified to indicate to the client application what goes inside each visualization and the charts inside it.
vizTypes used for dashboards other than the Home page: metric-collection, performing-metric, chart
1. metric-collection: specifies a single metric chart or a group of them
2. performing-metric: used for the perform chart type
3. chart: used for the pie, donut, table, bar, horizontal bar and line chart types
vizTypes used for the Home page: collection, module
1. collection: rendered in the UI at full width
2. module: rendered in the UI at a specific width
visualizations.vizArray.noUnit
The value types of these charts differ: some are numbers, some are amounts, some are percentages. In the case of amounts, there is a requirement to display in Lakhs, Crores and Units. This boolean indicates to the client application whether or not to display these units.
visualizations.vizArray.isCollapsible
Boolean value indicating whether the card/visualization is collapsible.
visualizations.vizArray.ref
This object contains url (mandatory), logoUrl (optional) and type (optional).
visualizations.vizArray.charts
The list of individual charts inside a Visualization Group is maintained in this array list
Individual chart number identifier to indicate the uniqueness of the chart
Name of the chart, which can be a header label for charts within a visualization
visualizations.vizArray.charts.code
Code of the chart; this is the indicator that has to be sent to the server side to get the data for representing the visualization.
visualizations.vizArray.charts.chartType
The type of chart that has to represent the resulting data set is specified here
chartType:- bar, horizontalBar, line, donut, pie, metric, table
visualizations.vizArray.charts.filters
Filters that can be applied to the visualization, and the fields which are filterable, are mentioned here.
visualizations.vizArray.charts.headers
In some cases, there are headers which can be a title or additional information for the chart data being represented. This field is kept open to accommodate information that can be sent along with the chart data itself.
Role Dashboard Mappings Configuration
The Master Dashboard Configuration explained earlier holds the list of dashboards that are available. Where Role Action Mapping is not maintained in the Application Service, this configuration acts as the Role-Dashboard Mapping Configuration.
In it, each role is mapped against the dashboards it is authorized to see.
This was used earlier, when the Role Action Mapping of eGov was not integrated. Later, when Role Action Mapping started controlling which dashboards are seen on the client side, this configuration was retained just to enable the dashboards for viewing.
Click here for an example configuration
Parameter Name
Description
roles
List of Roles that are available in the system
roles._comment
Role description and a comment on why this role has an entry in this configuration, summarizing what is to be enabled for it.
roles.roleId
Unique Identifier of the Role for which Access is being given
roles.roleName
Name of the Role for which the access is being given
roles.isSuper
Boolean flag which defines whether the Role is a Super User or not
roles.orgId
Organization to which the Role belongs
roles.dashboards
List of Dashboards that are enabled for the Role
Name of the individual Dashboard which has been enabled
Identifier of the individual Dashboard which has been enabled
Adding Roles and Dashboards :
To add a new role, the RoleDashboardMappingsConf.json configuration file (roles node) has to be modified as below.
Note: Any number of roles & dashboards can be added.
Below (as in Figure 9) is a sample for adding a new role object and a new dashboard object.
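Since the figure is not reproduced here, an indicative role object built from the fields documented above could look like this (the ids, names and dashboard entry are illustrative):

{
  "roles": [
    {
      "roleId": 1,
      "roleName": "STADMIN",
      "isSuper": false,
      "orgId": "ORG_001",
      "_comment": "Illustrative entry: state admin, enables the overview dashboard",
      "dashboards": [
        { "name": "DSS_OVERVIEW_DASHBOARD", "id": "home" }
      ]
    }
  ]
}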
To add a new dashboard, MasterDashboardConfig.json (dashboards node) has to be modified as below (Figure 10).
Note: Add the new dashboard to the dashboards array as given below.
To add new visualizations, MasterDashboardConfig.json (vizArray node) again has to be modified, as shown below (Figure 11).
Note: vizArray holds multiple visualizations.
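An indicative dashboard entry with one visualization row, assembled from the parameters documented above, is sketched below; all names, ids and dimensions are illustrative assumptions, not taken from the real MasterDashboardConfig.json.

{
  "name": "DSS_OVERVIEW_DASHBOARD",
  "id": "home",
  "isActive": true,
  "style": "linear",
  "visualizations": [
    {
      "row": 1,
      "name": "DSS_TOTAL_COLLECTION",
      "vizArray": [
        {
          "id": 1,
          "name": "DSS_TOTAL_COLLECTION",
          "dimensions": { "height": 250, "width": 3 },
          "vizType": "metric-collection",
          "noUnit": false,
          "isCollapsible": false,
          "charts": [
            {
              "id": 1,
              "name": "DSS_TOTAL_COLLECTION",
              "code": "totalCollection",
              "chartType": "metric",
              "filters": [],
              "headers": []
            }
          ]
        }
      ]
    }
  ]
}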
To add a new chart, chartApiConf.json has to be modified as shown below. A new chart id has to be added with the chart node object.
Metric chart Sample as shown in Figure 12.
Pie chart Sample as shown in Figure 13.
Line chart Sample as shown in Figure 14.
Table chart sample: this chart comes in 2 kinds - table and xtable.
The table type (as shown in Figure 15) allows aggregated fields to be added as available in the query keys; to extract the values based on the key, aggregationPaths need to be added along with their data types in pathDataTypeMapping.
The xtable type (as shown in Figure 16) allows multiple computed fields to be added dynamically along with the aggregated fields.
To add multiple computed columns, define computedFields[], in which actionName (an implementation of the IComputedField<T> interface), fields[] (names as existing in the query keys) and newField (the name to appear for the computed value) must be defined, as in the fragment below.
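An illustrative computedFields entry (the actionName here is a hypothetical IComputedField<T> implementation name, and the field names are assumptions):

"computedFields": [
  {
    "actionName": "PercentageComputedField",
    "fields": ["totalCollection", "targetCollection"],
    "newField": "percentageAchieved"
  }
]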
See https://github.com/egovernments/configs/blob/master/egov-dss-dashboards/dashboard-analytics/ChartApiConfig.json for the full configuration in detail.
The steps to create charts and visualize them are:
Create/add a chart in chartApiConf.json.
Add a visualization for the existing dashboard in MasterDashboardConfig.json as defined above.
Or, to create/add a new dashboard, create the dashboard in MasterDashboardConfig.json and create a role in RoleDashboardConfig.json.
Configuration changes for drill-through:
Example: drill-through in the Ward table in the Property Tax dashboard.
wardDrillDown is the visualization code for the PT drill-down. kind is the attribute that shows the type of the visualization code. Apart from these two attributes, all the others are common.
Example: drill-through in the ComplaintList table in the PGR dashboard.
complaintDrillDown is the visualization code for the PGR drill-down.
The complaintDrillDown visualization code above is called in the drillChart parameter, as in the fragment below.
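That is, the parent table chart's configuration carries the drill-down code in its drillChart parameter, along the lines of this illustrative fragment (the parent key and chartType are assumptions):

"complaintsByWard": {
  "chartType": "xtable",
  "drillChart": "complaintDrillDown"
}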
V2 Technical Document for UI
This release for DSS focuses on improving user experience and ability given to the user to get deeper insights using drill through and comparison indicators in tables.
The release includes the following features:
Breadcrumbs for better navigation
Drill through options in tables and charts
Comparison indicators in Table
In addition to the left navigation panel, breadcrumbs are useful for providing a better sense of the current page. They are also very helpful for mobile navigation. The user can navigate using the breadcrumbs by clicking on the required parent menu.
Technical Implementation Details
It works based on the current route URL and the previous route URL.
File Details
https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/Breadcrumbs.js
DSS provides the ability to configure drill-through for the required options in tables as well as charts. Drill-through options are useful for configuring the required hierarchy of a data set. This helps users go up to 'N' levels to get deeper insights.
Technical Implementation Details:
Drill-down / drill-through in tables is based on drillDownChartId and filter.
Here the chart id is used in the subsequent call to fetch the next table, along with the applied/selected filters.
File Details
Drill-through in pie charts:
It is similar to drill-down in tables; drill-through in pie charts is based on the drillDownChartId field in the parent pie chart.
File Details
To provide better insights into the metric performance of different dimensions, a comparison indicator is required inside data tables, usually comparing with a different time range (last year/last month) and showing the percentage change over time.
Technical Implementation Details:
For comparison with the previous year's data, the same request object is used for every table, with the time range changed to the previous year/month/week.
File Details
The following method, along with its parameters, is used to fetch the previous year's data.
After receiving last year's data, it is compared with the current year's data and the insight is shown. The comparison logic is present in uiTable.js.
TimeFilter
The current time component is not very intuitive or user-friendly, so the new library react-date-range was used to enhance the time filter.
File Details
Event Duration Graphs
Ability to generate graphs showcasing the time spent between multiple events, like average turnaround time, complaint assigning time, etc.
A DSS_EVENT_DURATION_GRAPH was added in the PGR config
API | Action id | Roles
1. /localization/messages/v1/_search | 1531 | SUPERUSER, EMPLOYEE, CITIZEN, GRO, DGRO
2. /egov-mdms-service/v1/_search | 954 | LOA_CREATOR, SUPERUSER, WO_CREATOR, AE_CREATOR, WORKS_MASTER_CREATOR
3. /dashboard-analytics/dashboard/getDashboardConfig/propertytax | 1892 | STADMIN
4. /dashboard-analytics/dashboard/getDashboardConfig/home | 1889 | STADMIN
5. /dashboard-analytics/dashboard/getDashboardConfig/tradelicense | 1893 | STADMIN
6. /dashboard-analytics/dashboard/getDashboardConfig/pgr | 1894 | STADMIN
7. /dashboard-analytics/dashboard/getDashboardConfig/ws | 2010 | STADMIN
8. /dashboard-analytics/dashboard/getChartV2 | 1890 | STADMIN, EMPLOYEE
A decision support system (DSS) is a composite tool that collects, organizes and analyzes business data to facilitate quality decision-making for management, operations and planning. A well-designed DSS aids decision makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives and business models. DSS analysis helps organizations to identify and solve problems, and make decisions.
Code Git Repos: https://github.com/egovernments/frontend/tree/master/web/dss-dashboard
State-Level Admin
Commissioner
Domain-Level Employee
There are three types of dashboards -
Home page (refer figure 1).
Overview page (refer figure 2).
Module level dashboard (refer figure 3).
The home page contains multiple cards; each card is clickable.
There are two types of cards, i.e., the overview card and the module-level card.
Overview and module-level cards are differentiated by vizType:
Overview card: on click, it navigates to the overview page. The vizType for overview is collection.
Module-level card: on click, it navigates to the module-level dashboard. The vizType is module (i.e., Property Tax, Trade License, etc.).
Request payload for dashboardConfig:
auth-token: used to authenticate the request; it is fetched from the local storage key "Employee.token".
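An illustrative request, using an endpoint from the role-action table above and assuming the token is sent as a request header as described (the exact request shape may differ):

GET /dashboard-analytics/dashboard/getDashboardConfig/home
auth-token: <value of the localStorage key "Employee.token">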
DashboardConfig API response
roleName: the type of user.
visualizations: this key contains all the configuration for displaying the visualizations, like rows with charts, etc. (refer to figure 1.3).
In figure 1.3, the vizType key defines the module UI.
For the collection chart & module chart, refer to figure 1.
Visualizations List
In the dashboardConfig response, the visualizations key contains all the row & chart details (refer figure 1.3).
1. Each row contains the visual details like name, vizType, noUnit, isCollapsible, charts, etc. (refer figure 1.3).
name - name of the visualization.
vizType - type of visualization, like COLLECTION, MODULE, METRIC-COLLECTION, PERFORMING-METRIC, CHART.
COLLECTION - on the home page, contains the collection data (refer figure 1).
MODULE - on the home page, contains the module-level data (refer figure 1).
METRIC-COLLECTION - on the overview/module-level page, contains the collection data (refer figure 2.1).
PERFORMING-METRIC - on the overview/module-level page, contains the top/bottom performing data (refer figure 2.2).
CHART - on the overview/module-level page, contains the visualizations below (refer figures 2.3 to 2.7).
PIE CHART (refer figure 2.3)
LINE CHART (refer figure 2.4)
BAR CHART (refer figure 2.5)
HORIZONTAL BAR CHART (refer figure 2.6)
TABLE CHART (refer figure 2.7)
Figure 2.1 - METRIC-COLLECTION
Figure 2.2 - PERFORMING-METRIC
Figure 2.3 - CHART - PIE
Figure 2.4 - CHART - LINE
Figure 2.5 - CHART - BAR
Figure 2.6 - CHART - HORIZONTAL BAR
Figure 2.7 - CHART - TABLE
Figure 2.8 - GLOBAL FILTERS
Figure 2.9 - DOWNLOAD & SHARE BUTTON
ULB Dashboard
The ULB dashboard has different filters, i.e., ULBs and Wards/Blocks. The data for the filters is loaded from the MDMS API below -
https://dev.digit.org/egov-mdms-service/v1/_search
Each ULB dashboard, overview dashboard and module-level page contains different filters, which are identified by roleName in the configs API.
The Wards/Blocks filter is a dependent filter, which gets loaded on ULB selection.
In the ULB dashboard, the on-page ULB filter is applied across all the charts; for the performance chart, the default ULB filter is not applied.
The overview and all module-level pages have a ULB dashboard.
GLOBAL Filters (refer to figure 2.8)
Filters will be loaded from MDMS API.
https://dev.digit.org/egov-mdms-service/v1/_search
Filters are loaded on the basis of roleName:
Admin role: for the module-level page, the Date, DDR and ULB filters are loaded.
For the overview page, the Date, DDR, ULB and Service filters are loaded.
Commissioner role: for the module-level page, the Date, ULB and Wards/Blocks filters are loaded.
For the overview page, the Date, ULB and Service filters are loaded.
3. Denomination filter:
The denomination filter has three options to display amounts and numbers in a particular format:
Crore
Lakh
Unit
The denomination filter is not applied to percentage and text values (refer to figure 2.10). The type of data is identified by the symbol in the plots of the charts API.
Figure 2.10
Custom Date Filter
If the duration is < 15 days, data is displayed day-wise.
If the duration is <= 30 days, data is displayed week-wise.
If the duration is > 30 days, data is displayed month-wise.
Tabs
Currently, the dashboard has two types of tabs:
Revenue (refer figure 4.1).
Service (refer figure 4.1).
Tabs are identified by name in the visualizations of the config API.
Table Chart with drill-down
If the table response has values for the filter key and drillDownChartId, it is a drill-down table.
Cards
Each card header is localized and has an info icon with a tooltip that displays the header and can display a description.
The number of cards in a row and on a page is driven by the backend; the backend provides the row number for each card, indicating where it should be displayed.
Each card contains an options icon with image download and image share options.
Image download and share use the id from vizArray to differentiate each card on a page.
Download and Share (refer to figure 2.9)
1. Download has two options for downloading data, i.e., Image and PDF.
2. Share:
Share creates the Image/PDF and uploads it to S3 using the API below, which returns a file id:
https://mseva-uat.lgpunjab.gov.in/filestore/v1/files
Using the file id, the file is fetched using the API below:
https://mseva-uat.lgpunjab.gov.in/filestore/v1/files/url
Each S3 image link is shortened using the API below:
https://mseva-uat.lgpunjab.gov.in/egov-url-shortening/shortener
5. Configurations
Github link for config: https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/config/configs.js
BASE URL: endpoint of the REST API for the dashboard.
FILE UPLOAD: endpoint of the REST API for file upload.
FETCH FILE: endpoint of the REST API for file fetch.
MDMS: endpoint of the REST API to fetch MDMS data.
SHORTEN URL: endpoint of the REST API to shorten URLs, used for sharing via Email / WhatsApp.
CHART COLOR CODE: colour code object for all charts.
MODULE LEVEL: for global filters; contains service names & filter keys.
SERVICES: for the global service filter.
6. Upload localization keys:
code: pre-defined key for the backend.
message: contains the value for the key.
module: rainmaker-dss
locale: contains the locale data
Further details are to be documented by the eGov team.
Module name: rainmaker-dss
NPM modules used
https://docs.google.com/spreadsheets/d/1AdwSGxUZoSmVcSc3PtujGMRCKpNaAEYgAn_8XNTF2vM/edit?usp=sharing
Steps to set up DSS locally
Step 1: To run it independently, switch to the dss-dashboard folder.
Step 2: Get the below details from the environment website and update localStorage in the browser:
Employee.tenant-id, Employee.user-info, Employee.token, Employee.module, Employee.locale, localization_en_IN, locale
Step 3: Run yarn install and yarn start to start working on DSS in the local setup.
DSS Features Enhancements V2:
DSS Features Enhancements V2 Technical Document for UI
This section specifies the migration steps, which are specific to the payment index.
Add the index name dss-payment_v2 as below:
In Kibana dev tools, apply the below command:
PUT dss-payment_v2
{ } // add the mapping file content here (mapping.json, as attached below)
Note: This index name should be the same as the value present in the ingest property es.index.name.
For testing, the ingest pipeline application property es.direct.push is supposed to be set to true.
Sno | Property name | Value | Description
1. | es.direct.push | true | The transformed data is pushed directly to the ES index.
2. | es.direct.push | false | The transformed data lands on the egov-dss-ingest-enriched topic.
SNo | Method | End Point | Body
1. | POST | {host}/dashboard-ingest/ingest/migrate/paymentsindex-v1/v2 | {"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}}
2. | CURL
Note: After migration, ensure the dss-payment_v2 data has been populated and is available.
In Kibana dev tools, verify using the below command.
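For example, a standard Elasticsearch count query can confirm that records exist in the new index:

GET dss-payment_v2/_count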
Configure billing services to allow bill amendments
The consumer sometimes needs additional amounts (amendments) added to their bill due to reasons external to the system. Amounts are added with respect to the consumer code of the entity in the product (PT, WS, etc.); any unpaid demand in the system is a candidate for amendment.
Amendment mainly works with two types of functionality, as follows:
Amendment
Demand
The main objective of the Bill Amendment module is to create credit/debit notes against the bills of consumers who need an additional amount added to their bill.
Create Amendment
Search Amendment
Update Demand
Update Amendment
Bill Amendment provides a separate flow enabling workflow and validation for the process of adding additional amounts to existing demands, which until this point was done only through the respective modules. An amendment is allowed only when the reason to add or reduce the amount on an existing bill of an entity arises from outside the system. The reasons are listed below -
Court case settlement
One time waiver
Write-offs
DCB correction (Old demands in paid status)
Remission for Property Tax
There are certain prerequisites for creating an amendment:
Presence of demand in the billing system
One of the reasons listed above
Valid document proof for the reason
No other Amendment already in workflow
The process of adding an amendment is as follows.
There are two scenarios for how an amendment is completed, based on the paid status of the existing demands in the system.
1. When the demand is unpaid/partially paid
Create a demand (or use an existing demand) with demand detail DD1.
Do not pay the bill, or make a partial payment.
Create an amendment for the same consumer code (with demand detail DD2).
Approve the amendment; the response should return an amendment with status CONSUMED.
Search the demand or fetch the bill for the consumer code; the demand/bill should contain the demand details of the demand and the amendment together, DD1 and DD2, in the same demand/bill.
2. When the demand is completely paid
Create a demand and make the complete payment, or choose a consumer code which is fully paid.
Create an amendment (with demand detail DD1).
Approve the amendment; the response should be APPROVED this time.
Create a new demand for the consumer code (with demand detail DD2); the demand response should contain the two demand details DD1 and DD2 saved to the demand.
An amendment search will now return CONSUMED status after the demand is created.
This does not impact any functionality other than adding demand details to demands on approval.
IMPACTED BY:
Existence of demands in the system.
Persister Service persists data in the database synchronously, providing very low latency. The queries to be used to insert/update data in the database are written in a YAML file. The values to be inserted are extracted from the JSON using jsonPaths defined in the same YAML configuration. Below is a sample configuration which inserts data into a couple of tables.
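As the sample itself is not reproduced here, the sketch below indicates the general shape of a persister insert configuration; the table names, columns and jsonPaths are illustrative, not the actual PGR config.

serviceMaps:
  serviceName: rainmaker-pgr
  mappings:
    - version: 1.0
      description: Persists complaint and address details (illustrative sketch)
      fromTopic: save-pgr-request
      isTransaction: true
      queryMaps:
        # table and column names below are illustrative, not the actual PGR schema
        - query: INSERT INTO eg_pgr_service_v2 (id, tenantid, servicecode) VALUES (?, ?, ?);
          basePath: $.ServiceWrappers.*
          jsonMaps:
            - jsonPath: $.ServiceWrappers.*.service.id
            - jsonPath: $.ServiceWrappers.*.service.tenantId
            - jsonPath: $.ServiceWrappers.*.service.serviceCode
        - query: INSERT INTO eg_pgr_address_v2 (id, tenantid, city) VALUES (?, ?, ?);
          basePath: $.ServiceWrappers.*.service.address
          jsonMaps:
            - jsonPath: $.ServiceWrappers.*.service.address.id
            - jsonPath: $.ServiceWrappers.*.service.address.tenantId
            - jsonPath: $.ServiceWrappers.*.service.address.city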
The above configuration is used to insert data published on the Kafka topic save-pgr-request into the tables eg_pgr_service_v2 and eg_pgr_address_v2. Similarly, configuration can be written to update data. The following is a sample configuration:
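An indicative update configuration in the same format (again with illustrative names):

    - version: 1.0
      description: Updates complaint status (illustrative sketch)
      fromTopic: update-pgr-request
      isTransaction: true
      queryMaps:
        # the ? placeholders are filled, in order, from the jsonPaths below
        - query: UPDATE eg_pgr_service_v2 SET applicationstatus = ?, lastmodifiedtime = ? WHERE id = ?;
          basePath: $.ServiceWrappers.*
          jsonMaps:
            - jsonPath: $.ServiceWrappers.*.service.applicationStatus
            - jsonPath: $.ServiceWrappers.*.service.auditDetails.lastModifiedTime
            - jsonPath: $.ServiceWrappers.*.service.id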
The above configuration is used to update the data in tables. Similarly, an upsert operation can be done using the ON CONFLICT() function in psql. The following table describes each field in the configuration.
Variable Name
Description
serviceName
The module name to which the configuration belongs
version
Version of the config
description
Detailed description of the operations performed by the config
fromTopic
Kafka topic from which data has to be persisted in DB
isTransaction
Flag to enable/disable performing the operations in a transactional fashion
query
Prepared Statements to insert/update data in DB
basePath
JsonPath of the object that has to be inserted/updated.
jsonPath
JsonPath of the fields that have to be inserted in the table columns
type
Type of field
dbType
DB Type of the column in which field is to be inserted
Indexer uses one config file per module to store all the configurations pertaining to that module. Indexer reads multiple such files at start-up to support indexing for all the configured modules. In the config, we define the source and the destination Elasticsearch index name, custom mappings for data transformation, and mappings for data enrichment. Below is a sample configuration for indexing TL application creation data into Elasticsearch.
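Since the sample itself is not reproduced here, the sketch below indicates the general shape of such a config; the topic, index name and paths are illustrative assumptions, not the actual TL config.

ServiceMaps:
  serviceName: Trade License
  version: 1.0.0
  mappings:
    - topic: save-tl-tradelicense        # illustrative topic name
      configKey: INDEX
      indexes:
        - name: tlindex-v1               # index is created in Elasticsearch if absent
          type: licenses
          id: $.Licenses.*.id
          isBulk: true
          timeStampField: $.Licenses.*.auditDetails.createdTime
          customJsonMapping:
            indexMapping: {"Data": {"tradelicense": {}, "ward": {}}}
            fieldMapping:
              - inJsonPath: $.tradeName
                outJsonPath: $.Data.tradelicense.tradeName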
Variable Name
Description
serviceName
Name of the module to which this configuration belongs.
summary
Summary of the module.
version
Version of the configuration.
mappings
List of definitions within the module. Every definition corresponds to one index requirement, which means every object received on the Kafka queue can be used to create multiple indexes. Each of these indexes needs configuration; all such configurations belonging to one topic form one entry in the mappings list. The keys listed henceforth together form one definition, and multiple such definitions are part of this mappings key.
topic
The topic on which the data is to be received to activate this particular configuration.
configKey
Key to identify what type of job this config is for. Values: INDEX (live index), REINDEX (reindex), LEGACYINDEX (legacy index).
indexes
Key to configure multiple index configurations for the data received on a particular topic. Multiple indexes based on a different requirement can be created using the same object.
name
Index name on the elastic search. (Index will be created if it doesn't exist with this name.)
type
Document type within that index to which the index json has to go. (Elasticsearch uses the structure of index/type/docId to locate any file within index/type with id = docId)
id
Takes comma-separated JsonPaths. The JSONPath is applied on the record received on the queue, the values hence obtained are appended and used as ID for the record.
isBulk
Boolean key to identify whether the JSON received on the Queue is from a Bulk API. In simple words, whether the JSON contains a list at the top level.
jsonPath
Key to be used in case of indexing a part of the input JSON and in case of indexing a custom json where the values for custom json are to be fetched from this part of the input.
timeStampField
JSONPath of the field in the input which can be used to obtain the timestamp of the input.
fieldsToBeMasked
A list of JSONPaths of the fields of the input to be masked in the index.
customJsonMapping
Key to be used while building an entirely different object using the input JSON on the queue
indexMapping
A skeleton/mapping of the JSON that is to be indexed. Note that, this JSON must always contain a key called "Data" at the top-level and the custom mapping begins within this key. This is only a convention to smoothen dashboarding on Kibana when data from multiple indexes have to be fetched for a single dashboard.
fieldMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that has to be mapped to the fields of the index json which is mentioned in the key 'indexMapping' in the config.
inJsonPath
JSONPath of the field from the input.
outJsonPath
JSONPath of the field of the index json.
externalUriMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be enriched using APIs from the external services. The configuration for those APIs also is a part of this.
path
URI of the API to be used. (it should be POST/_search API.)
queryParam
Configuration of the query params to be used for the API call. It is a comma-separated key-value pair, where the key is the parameter name as per the API contract and value is the JSONPath of the field to be equated against this parameter.
apiRequest
Request Body of the API. (Since we only use _search APIs, it should be only RequestInfo.)
uriResponseMapping
Contains a list of configuration. Each configuration contains two keys: One is a JSONPath to identify the field from response, Second is also a JSONPath to map the response field to a field of the index json mentioned in the key 'indexMapping'.
mdmsMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be denormalized using APIs from the MDMS service. The configuration for those MDMS APIs also is a part of this.
path
URI of the API to be used. (it should be POST/_search API.)
moduleName
Module Name from MDMS.
masterName
Master Name from MDMS.
tenantId
Tenant id to be used.
filter
Filter to be applied to the data to be fetched.
filterMapping
Maps the field of input json to variables in the filter
variable
Variable in the filter
valueJsonpath
JSONPath of the input to be mapped to the variable.
The DIGIT system supports multiple languages. To add a new language, it should be configured in MDMS.
Before proceeding with the configuration, following are the pre-requisites -
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
Users can view the web page of digit application in the language of their choice by selecting it from the available languages.
SMS and Emails of information about the transactions on digit application, can be received in languages based on the selection.
After adding the new language, the MDMS service needs to be restarted to read the newly added data.
A new language is added in StateInfo.json. In MDMS, the file StateInfo.json under the common-masters folder holds the details of the languages to be added.
The label text is displayed in the UI for language selection; the value text is used as the key to refer to the language.
A language is added as an array element under the array named "languages". Each language element is a label-value pair. By default, the English language is added. Other languages can be added as additional/new languages that the system will support. For the system to support multiple (i.e., more than one) languages, those languages are added in StateInfo.json, as in the sketch below.
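A sketch of the relevant portion of StateInfo.json (the locale values here are illustrative assumptions; use the actual locale codes configured for your state):

"languages": [
  { "label": "ENGLISH", "value": "en_IN" },
  { "label": "हिंदी", "value": "hi_IN" },
  { "label": "ಕನ್ನಡ", "value": "kn_IN" },
  { "label": "language3", "value": "xx_IN" }
],
"defaultLanguage": "en_IN"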
"हिंदी" and "ಕನ್ನಡ",”language3” are more than one languages(Hindi,Kannada,somelangauge) added other than "ENGLISH".
In the UI, the labels and master values that populate dropdowns or textboxes are added as keys for localization. For example, when a user logs in, at the top of the inbox page a welcome message in English shows as "Welcome User name". The text "Welcome" is the English localization for the key "CS_LANDING_PAGE_WELCOME_TEXT".
For all label or master-value keys, localization should be pushed to the database through the endpoints for all the languages added in the system. The SMS/Email messages are also added as keys, for which values are pushed in all the languages to the database.
Localization format for keys - samples in Hindi and English are shown below.
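Indicative samples in the standard localization upsert format (the module name here is an assumption; the key is the one cited above):

{ "code": "CS_LANDING_PAGE_WELCOME_TEXT", "message": "Welcome", "module": "rainmaker-common", "locale": "en_IN" }
{ "code": "CS_LANDING_PAGE_WELCOME_TEXT", "message": "स्वागत", "module": "rainmaker-common", "locale": "hi_IN" }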
For the languages added in the system, if the values are not pushed to the database, then for labels or master data the key itself will appear in the UI. If the values for SMS/Email are not pushed, the SMS/Email cannot be received.
Any one of the added languages can be set as the default. For example, if English, Hindi and Kannada are the three languages added in StateInfo.json and Kannada is required as the default language, then in StateInfo.json the language key of Kannada needs to be set as the value of the "defaultLanguage" field.
DIGIT is India's largest open-source platform for Urban Governance. It provides API based access to government functions enabling the government to provide facilities via integration with relevant service players. This document is aimed at System Integrators looking to provide bill collection facilities to their government customers using DIGIT as their governance platform. It outlines the integration approach with Billing and Collections services to enable fetching bill dues to Citizens and recording their payments into the system.
DIGIT is completely API driven and allows for data exchange with disparate systems using REST API calls. Most functional API are protected resources that can be accessed after proper authentication with the platform. The platform also checks for the right level of access for the given credentials. A bill collection flow consists of the following steps -
Authentication with DIGIT
Get Bill for the citizen using a service-specific query
Record the payment details against the bill
Optional - Get Payment API to fetch the details of the receipt
As the in-field team of the system integrator would already be making these calls to the integrator's own system (or a standard system like BBPS), integration with DIGIT is a server to server integration where the backend system of the integrator will make these calls to the DIGIT platform as per the need. The following diagram depicts the high-level flow of calls between On Field devices like PoS (On Field Device) to Backend of the Integrator (Integrator System) and from Backend of the Integrator to DIGIT (DIGIT Platform).
Note: The process of calling payment API results in a receipt creation.
DIGIT uses Swagger 2.0 as its API standard and all its APIs are documented in Swagger. Wherever needed this document will provide a link to our API documentation online. An example of typical request/response snippets necessary for integration is provided below in the respective sections. Please note that DIGIT being a multi-tenanted system, all APIs in DIGIT expect tenantid passed either in the query param or RequestBody (Please refer to detailed API documentation as indicated in sections below). The tenantid represents the modular operating unit for the operation of an API, e.g. in a municipal governance use case, a tenantid will represent one ULB. Your platform contact will help you access the configured list for your use case. Authentication API also expects tenantid (Your platform contact will help you with which one to use), however, based on the role as an integrator the OAUTH token in response can be used for unit/ULB level tenants in subsequent API calls (meaning you may not need one authentication per unit/ULB level tenant).
Authentication
To ensure data privacy and security, transactional APIs in DIGIT are protected by authentication. System integrators are requested to contact the respective state authority to get the necessary OAuth credentials that can be used to access these APIs. Kindly note that, apart from userid/password, the system may enforce IP-based access control, in which case the integrator may be required to share the IP or range of IPs from which requests will originate. To generate the access token based on the credentials provided, please use the following API. Given below is an example of the request and response; the OAuth token to be used from the response is highlighted in bold.
Request Snippet
Response Snippet
2. Fetching Bill
DIGIT allows the integrators to fetch the bills for citizens using the Consumer number of the respective service (e.g. Water charges, Property Service, Trade License). Please note that different services may have different notions of consumer number, e.g. for Water Charges consumer number will signify the "Connection number" while for Property it will be "Property Id". For some services, DIGIT also provides the facility to fetch bill by mobile number, please note that a bill search by mobile number may return multiple bills across services and may not return bills from services that do not support mobile-number based search. To support partial payment use case each bill in the response of the fetch bill API will indicate whether it is allowed to be partially paid and any minimum amount if partial payment is allowed. To fetch a bill from DIGIT, please ensure that OAuth token is generated as per the Authentication section above. Post that you can use the following API to fetch the bill -
Choose Billing Service from the dropdown
Go to the Bill section of BillingService
Go to the Bill tab
3. Make Payment
Once the bill is fetched from the DIGIT system, the system integrator is expected to relay it back to the Field Device. Integrator is expected to Initiate and collect the payment based on government preference indicated in the bill (can it be partially paid and if so the minimum amount etc.) and Citizen's preference of payment instrument etc. Once the payment is successfully done in the integrator's system, the integrator is expected to register the payment in DIGIT using the Payment Create API. Please note that a bill is considered unpaid/partially paid by DIGIT till appropriate receipts are created using this API - which means that a subsequent fetch of the bill, till this API is called, will return the original bill. DIGIT expects a Receipt (The result of calling payment API) to be created against the bill number returned in the fetch bill API, please note that a receipt needs to be created for each bill. Therefore, if a total payment represents multiple bills - One receipt creation per bill is expected (DIGIT supports multiple receipt creation in a single call). To create a receipt in DIGIT, please ensure that OAuth token is generated as per the Authentication section above. Post that you can use the following API to create the receipt -
Choose Collection Service from the dropdown
Go to payment
Go to the make payment
A decision support system (DSS) is a composite tool that collects, organizes and analyzes business data to facilitate quality decision-making for management, operations and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives and business models. DSS analysis helps organizations to identify and solve problems, and make decisions.
The Tech documentation is below
The swagger API for the backend is below
Swagger API for ingest
Target Upload File Template is below
Migration details from v1 to v2
According to the new collection service, which follows the payment structure for storing information about payments and payment details, it is necessary to migrate the old collection structure into the payment structure.
In the old collection service, the receipt number is generated at the bill-detail level for every transaction; as a bill contains multiple bill details, each transaction is mapped to multiple receipt numbers. So after payment of a single bill, multiple receipt numbers are generated for it. The mapping of transactions to receipt numbers is changed in the new collection service.
In the new collection service, the receipt number is generated at the bill level, so for every transaction on each bill, one receipt number is generated. Thus each bill for a consumer code and business service has one receipt number.
The records from the tables egcl_receiptheader, egcl_receiptdetails, egcl_instrument, egcl_instrumentheader need to be transferred into the tables egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial, egcl_billaccountdetail.
For a smooth transition of the data, the records from the old receipts are mapped to the payment structure, so that the new payment response can be formed with the receipt data.
The table below provides the mapping between the receipt and payment structures, with some remarks.
Field from Payments
Field from Receipts
Remark
Payments.Id
---
Set as UUID
Payments.tenantId
Receipt.tenantId
Payments.totalDue
---
Total Due for payment is calculated by subtracting totalAmount from bill and amount from Receipt.instrument
Payments.totalAmountPaid
Receipt.instrument.amount
Payments.transactionNumber
Receipt.instrument.transactionNumber
Payments.transactionDate
Receipt.receiptDate
Payments.paymentMode
Receipt.instrument.instrumentType.name
Payments.instrumentDate
Receipt.instrument.instrumentDate
Payments.instrumentNumber
Receipt.instrument.instrumentNumber
Payments.instrumentStatus
Receipt.instrument.instrumentStatus
Payments.ifscCode
Receipt.instrument.ifscCode
Payments.additionalDetails
Receipt.Bill.additionalDetails
Payments.paidBy
Receipt.Bill.paidBy
Payments.mobileNumber
Receipt.Bill.mobileNumber
If mobileNumber from Receipt.bill is null, it has to be set to some value, e.g. "NA".
Note: Payments.mobileNumber should not be null.
Payments.payerName
Receipt.Bill.payerName
Payments.payerAddress
Receipt.Bill.payerAddress
Payments.payerEmail
Receipt.Bill.payerEmail
Payments.payerId
Receipt.Bill.payerId
Payments.paymentStatus
--
Based on paymentMode from Payment, the paymentStatus is set.
If paymentMode is ONLINE or CARD then paymentStatus is set to DEPOSITED otherwise it is set to NEW
Payments.auditDetails.createdBy
Receipt.auditDetails.createdBy
Payments.auditDetails.createdTime
Receipt.auditDetails.createdTime
Payments.auditDetails.lastModifiedBy
Receipt.auditDetails.lastModifiedBy
Payments.auditDetails.lastModifiedTime
Receipt.auditDetails.lastModifiedTime
Payments.paymentDetails.Id
---
Set as UUID
Payments.paymentDetails.tenantId
Receipt.tenantId
Payments.paymentDetails.totalDue
---
Total Due for paymentDetails is calculated by subtracting totalAmount from bill and amount from Receipt.instrument
Payments.paymentDetails.totalAmountPaid
Receipt.instrument.amount
Payments.paymentDetails.receiptNumber
Receipt.receiptNumber
Payments.paymentDetails.manualReceiptNumber
Receipt.Bill.billDetails.manualReceiptNumber
Payments.paymentDetails.manualReceiptDate
Receipt.Bill.billDetails.manualReceiptDate
Payments.paymentDetails.receiptDate
Receipt.receiptDate
Payments.paymentDetails.receiptType
Receipt.Bill.billDetails.receiptType
Payments.paymentDetails.businessService
Receipt.Bill.billDetails.businessService
Payments.paymentDetails.additionalDetail
Receipt.Bill.additionalDetail
Payments.paymentDetails.auditDetail
---
auditDetail for paymentDetail is same as payment auditDetail
Payments.paymentDetails.billId
---
The billId is extracted based on the id in the egbs_billdetail_v1 table, where the id in egbs_billdetail_v1 is Receipt.Bill.billDetails.billNumber
Payments.paymentDetails.bill
---
Based on the billId, tenantId and service, the bill is searched by calling the Billing Service API and set to Payments.paymentDetails.bill
Payments.paymentDetails.bill.billDetails.amountPaid
Receipt.instrument.amount
For each amountPaid in billDetails, its value is set from Receipt.instrument.amount
After the creation of the payment response with the receipt data, it is pushed to the Kafka topic "egov.collection.migration-batch", and with the persister, the payment data is inserted into the tables egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial, egcl_billaccountdetail.
Indexer configs for the legacy data index and the new payments index:
https://github.com/egovernments/configs/blob/master/egov-indexer/payment-indexer.yml
persister config -
Please get these configs promoted before initiating the migration process. Migration happens through an API call; add role-actions based on your requirement. Otherwise, port-forwarding should work.
Find the API details below:
Endpoint: /collection-services/payments/_migrate?batchSize=100&offset=
Body: { "RequestInfo": { "apiId": "Rainmaker", "action": "", "did": 1, "key": "", "msgId": "20170310130900|en_IN", "ts": 0, "ver": ".01", "authToken": "a6ad2a1b-821c-4688-a70e-4322f6c34e54" } }
When restarting the migration due to any failure, take the values of offset and tenantId printed in the logs and resume the migration process from where it stopped:
/collection-services/payments/_migrate?batchSize=100&offset=200&tenantId='pb.tenantId'
Collection-service build:- collection-services-db:9-COLLECTION_MIGRATION-e9701c4
The Reap Benefit system is one of the vendors that provide chatbot services, using Turn as the backend service to communicate with citizens through the chatbot. As part of the requirement, a complaint needs to be created in the DIGIT platform whenever a citizen raises a complaint through the Reap Benefit chatbot.
The turn-io-adapter service is a wrapper that transforms the Reap Benefit request format into the DIGIT PGR request format. This service exposes a _transform API and constructs the required PGR request from the request message sent by the Reap Benefit system. The Reap Benefit system consumes the _transform API to communicate with the DIGIT PGR module.
In this process, once a complaint is created, a WhatsApp message with a tracking link is sent to the citizen. Whenever an action is taken on the complaint by ULB employees, a WhatsApp message is sent to the citizen.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Rainmaker-PGR service is running
A complaint can be created in the DIGIT platform from the Reap Benefit chatbot.
A WhatsApp message is sent to the citizen when an employee performs some action on the complaint.
Please deploy the following builds
rainmaker-pgr-db:v1.1.3-bb2961cf-13
turn-io-adapter:v1.1.3-bb2961cf-19
egov-searcher:v1.1.3-d43c421c-5
nlp-engine:v1.0.0-c3889d14-10
Note: Please refer to the following URL for the nlp-engine technical documentation.
Frontend commits
1) turn-io-adapter: "http://turn-io-adapter.egov:8080/" (in the service host configuration)
2) Add /turn-io-adapter/_transform to the egov-mixed-mode-endpoints-whitelist configuration
3) Once you are done with the 2nd step, restart the zuul pod
We need to add the name field in the complaint category master in PGR. Please find the link for the data below.
Push the localization data for all the locality data with module as rainmaker-chatbot. Please find the below sample localization object.
{ "code": "SC1", "message": "Azad Nagar - WARD_1", "module": "rainmaker-chatbot", "locale": "en_IN" }
NA
This is the sample request for the _transform API to create a complaint.
The turn-io-adapter is integrated with the Rainmaker-PGR application; the turn-io-adapter application internally invokes the rainmaker-pgr service to generate the complaint.
The Turn-io application calls turn-io-adapter/_transform to generate the complaint and takes the data from PGR.
This section contains docs that walk you through the various steps required to configure DIGIT services.
This section walks you through the steps for adding a new language or setting up the default language on the DIGIT system.
Always define the YAML for your APIs as the first thing, using the Open API 3 standard.
API paths should be standardised as follows:
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity which is already existing
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints.
Take most search parameters in the POST body only.
If query params for search need to be supported, make sure the same parameters are also present in the POST body; the POST body should take priority over query params.
Provide additionalDetails objects for _create and _update APIs so that custom requirements can use these fields.
Each API should have a RequestInfo object in the request body at the top level.
Each API should have a ResponseInfo object in the response body at the top level.
Mandatory fields should be minimum for the APIs.
minLength and maxLength should be defined for each attribute
Read-only fields should be called out
Use common models already available in the platform in your APIs, e.g. -
User (Citizen or Employee or Owner)
Error (response sent in case of errors)
TODO: Add all the models here
For receiving files in an API, don't use binary file data; instead, accept file store ids.
If there is only one file to be uploaded, no persistence is needed, and no additional JSON data is to be posted, you can consider using direct file upload instead of a filestore id.
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do's and don'ts for following these principles.
Necessary information and updates about transactions on DIGIT applications are communicated to users through SMS and Email. For example, when a Trade License application is initiated, forwarded, approved, or a payment is made in the DIGIT system, the applicant and the payer (if the payer is other than the applicant) are informed about the status of the Trade License application through SMS/Email. The language for SMS and Email can be set as per requirement/choice.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
User can receive Emails and SMS of necessary information/updates in the decided language.
The language can be decided by the end-users (either Citizen or Employee). End-users can select the language before logging in, or after logging in from the inbox page.
If the language is not chosen by the end-user, then SMS/Email is received in the state-level configured default language.
SMS and Email localization should be pushed to the database through the endpoints for all the languages added in the system. The localization format for SMS/Email is shown below.
Sample SMS localization for Trade License application initiation - English localization
Hindi localization
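Indicative samples (the code and module names are hypothetical; the English message template is reconstructed from the example that follows):

{ "code": "TL_APPLICATION_INITIATED_SMS", "message": "Dear <1>, Your Trade License application number for <2> has been generated. Your application no. is <3>", "module": "rainmaker-tl", "locale": "en_IN" }
{ "code": "TL_APPLICATION_INITIATED_SMS", "message": "प्रिय <1>, <2> के लिए आपका ट्रेड लाइसेंस आवेदन क्रमांक जनरेट हो गया है। आपका आवेदन क्रमांक <3> है", "module": "rainmaker-tl", "locale": "hi_IN" }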
The placeholders <1>, <2>, <3> are replaced by the actual required values, which give important information to the applicant. For example, the message received by the applicant reads: Dear Kamal, Your Trade License application number for Ramjhula Provisional Store has been generated. Your application no. is UK-TL-2020-07-10-002058 You can use this application number….
The default language for SMS and Email can be set by:
Clicking on the preferred language from the available language buttons on the language selection page, which opens before the login page.
On the Citizen or Employee inbox page, selecting the language from the dropdown in the right corner of the inbox title bar.
If the language is not chosen by the Citizen or Employee, then SMS/Email is received in the default configured language. For example, if in a state Hindi, English and Kannada are added as three languages in the system, and the state decides that Kannada should be the default, then Kannada is set as the default language in MDMS. When the end-user does not choose any language, SMS/Email is sent in Kannada.
The selected language key is sent as a parameter along with the other required transaction parameters to the backend code.
In the backend, the SMS/Email logic checks the language key; based on the language key and the SMS unique key, the message is fetched from the database.
The table chart visualization has the standard Material UI data table features, such as search and sort.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
```sh
curl -X POST \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: d83fc136-116d-265f-3b83-ea41e3d5bb57' \
  -d '{"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}}'
```
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
| Title | Link |
| --- | --- |
| StateInfo.json | |
Departments are defined as the different sections within a ULB, based on which the functions performed by the ULB and its employees are grouped. The budget details of a ULB are also defined by department. It is suggested that ULBs across the state adopt the same department naming terminology. This document will help you fill in the department details in the template provided.
| Sr. No. | Department Code | Department Name (In English) | Department Name (In Local Language) |
| --- | --- | --- | --- |
| 1 | ACC | Accounts | लेखा |
| 2 | PHS | Public Health And Sanitation | सार्वजनिक स्वास्थ्य और स्वच्छता |
| 3 | REV | Revenue | राजस्व |
| 4 | TP | Town Planning | नगर नियोजन |
The data given in the table above is sample data.
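For reference, once finalised, this kind of department data is typically loaded as an MDMS master. A sketch of how it could look as JSON, assuming a common-masters Department master; the tenant ID and the active flag are illustrative assumptions:

```json
{
  "tenantId": "uk",
  "moduleName": "common-masters",
  "Department": [
    { "code": "ACC", "name": "Accounts", "active": true },
    { "code": "PHS", "name": "Public Health And Sanitation", "active": true },
    { "code": "REV", "name": "Revenue", "active": true },
    { "code": "TP",  "name": "Town Planning", "active": true }
  ]
}
```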
| Sr. No. | Column Name | Data Type | Max Size | Mandatory | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Department Code* | Alphanumeric | 64 | Yes | Unique code used to identify the department |
| 2 | Department Name (In English)* | Text | 256 | Yes | The name of the department in the ULB, in English |
| 3 | Department Name (In Local Language)* | Text | 256 | Yes | The name of the department in the ULB in the local language, e.g. Telugu, Hindi, etc., whichever is applicable |
Download the data template attached to this page.
Open it and go through all the headers, understanding the meaning of each as given under the 'Data Definition' section of this document.
Make sure all the headers, their data types, field sizes, and definitions/descriptions are understood properly.
In case of any doubt, reach out to the person who shared this document with you to discuss and clarify.
Identify all the departments in the ULB before starting to fill them into the template.
Start filling in the data from the first serial number and complete one record at a time. Repeat this exercise until the entire data is filled into the template.
Verify the data once again by going through the checklist and taking care of every point mentioned in it.
The checklist is a set of activities to be performed after the data is filled into the template, to ensure that the data type, size, and format are as expected. These activities are divided into the 2 groups given below.
This checklist covers all the activities which are common across entities.
To see the common checklist, refer to the Checklist page, which lists all the activities to be followed to ensure complete and quality data.
This checklist covers the activities which are specific to the entity. No entity-specific checklist is applicable for this entity.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.