Indexer uses a config file per module to store all the configurations pertaining to that module. Indexer reads multiple such files at start-up to support indexing for all the configured modules. In the config we define the source, the destination Elasticsearch index name, custom mappings for data transformation and mappings for data enrichment. Below is the sample configuration for indexing TL application creation data into Elasticsearch.
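The original sample is not reproduced here; the following is a minimal illustrative sketch that only uses the keys described in the table below. The topic name, index name and JSON paths are assumptions, and the top-level ServiceMaps wrapper follows the usual egov-indexer convention and should be verified against an existing indexer config.

```yaml
ServiceMaps:
  serviceName: Trade License
  version: 1.0.0
  mappings:
    - topic: save-tl-tradelicense            # assumed topic name
      configKey: INDEX
      indexes:
        - name: tlindex-v1                   # index is created if it does not exist
          type: general
          id: $.tradeLicenses.*.id           # JsonPath(s) appended to form the document id
          isBulk: true                       # input JSON carries a list at the top level
          jsonPath: $.tradeLicenses
          timeStampField: $.tradeLicenses.*.auditDetails.createdTime
```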
Variable Name
Description
serviceName
Name of the module to which this configuration belongs.
summary
Summary of the module.
version
Version of the configuration.
mappings
List of definitions within the module. Every definition corresponds to one index requirement, which means every object received on the Kafka queue can be used to create multiple indexes; each of these indexes needs a configuration, and all such configurations belonging to one topic form one entry in the mappings list. The keys listed henceforth together form one definition, and multiple such definitions are part of this mappings key.
topic
The topic on which the data is to be received to activate this particular configuration.
configKey
Key to identify the type of job this config is for. Values: INDEX, REINDEX, LEGACYINDEX. INDEX: LiveIndex, REINDEX: Reindex, LEGACYINDEX: LegacyIndex.
indexes
Key to configure multiple index configurations for the data received on a particular topic. Multiple indexes based on a different requirement can be created using the same object.
name
Index name on the elastic search. (Index will be created if it doesn't exist with this name.)
type
Document type within that index to which the index json has to go. (Elasticsearch uses the structure of index/type/docId to locate any file within index/type with id = docId)
id
Takes comma-separated JsonPaths. The JSONPath is applied on the record received on the queue, the values hence obtained are appended and used as ID for the record.
isBulk
Boolean key to identify whether the JSON received on the Queue is from a Bulk API. In simple words, whether the JSON contains a list at the top level.
jsonPath
Key to be used in case of indexing a part of the input JSON and in case of indexing a custom json where the values for custom json are to be fetched from this part of the input.
timeStampField
JSONPath of the field in the input which can be used to obtain the timestamp of the input.
fieldsToBeMasked
A list of JSONPaths of the fields of the input to be masked in the index.
customJsonMapping
Key to be used while building an entirely different object using the input JSON on the queue
indexMapping
A skeleton/mapping of the JSON that is to be indexed. Note that, this JSON must always contain a key called "Data" at the top-level and the custom mapping begins within this key. This is only a convention to smoothen dashboarding on Kibana when data from multiple indexes have to be fetched for a single dashboard.
fieldMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that has to be mapped to the fields of the index json which is mentioned in the key 'indexMapping' in the config.
inJsonPath
JSONPath of the field from the input.
outJsonPath
JSONPath of the field of the index json.
externalUriMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be enriched using APIs from the external services. The configuration for those APIs also is a part of this.
path
URI of the API to be used. (it should be POST/_search API.)
queryParam
Configuration of the query params to be used for the API call. It is a comma-separated key-value pair, where the key is the parameter name as per the API contract and value is the JSONPath of the field to be equated against this parameter.
apiRequest
Request Body of the API. (Since we only use _search APIs, it should be only RequestInfo.)
uriResponseMapping
Contains a list of configurations. Each configuration contains two keys: one is a JSONPath to identify the field from the response, and the second is a JSONPath to map the response field to a field of the index json mentioned in the key 'indexMapping'.
mdmsMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be denormalized using APIs from the MDMS service. The configuration for those MDMS APIs also is a part of this.
path
URI of the API to be used. (it should be POST/_search API.)
moduleName
Module Name from MDMS.
masterName
Master Name from MDMS.
tenantId
Tenant id to be used.
filter
Filter to be applied to the data to be fetched.
filterMapping
Maps the field of input json to variables in the filter
variable
Variable in the filter
valueJsonpath
JSONPath of the input to be mapped to the variable.
Digit system supports multiple languages. To add a new language, it should be configured in MDMS.
Before proceeding with the configuration, following are the pre-requisites -
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
Users can view the web page of digit application in the language of their choice by selecting it from the available languages.
SMS and Emails of information about the transactions on digit application, can be received in languages based on the selection.
After adding the new language, the MDMS service needs to be restarted to read the newly added data.
A new language is added in StateInfo.json. In MDMS, the file StateInfo.json under the common-masters folder holds the details of the languages to be added.
The label text is displayed in the UI for language selection. The value text is used as the key to refer to the language.
A language is added as an array element under the array named “languages”. Each language element is a label and value pair. By default, the English language is added. Other languages can be added as additional languages which the system will support. For the system to support more than one language, those languages are added in StateInfo.json as below.
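A minimal illustrative sketch of such an entry; the tenantId, the locale codes and the default-language value are assumptions (defaultLanguage is discussed further below):

```json
{
  "tenantId": "pb",
  "moduleName": "common-masters",
  "StateInfo": [
    {
      "defaultLanguage": "en_IN",
      "languages": [
        { "label": "ENGLISH", "value": "en_IN" },
        { "label": "हिंदी", "value": "hi_IN" },
        { "label": "ಕನ್ನಡ", "value": "kn_IN" }
      ]
    }
  ]
}
```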
"हिंदी" and "ಕನ್ನಡ",”language3” are more than one languages(Hindi,Kannada,somelangauge) added other than "ENGLISH".
In the UI, the labels and master values that populate dropdowns or textboxes are added as keys for localization. For example, when a user logs in, a welcome message at the top of the inbox page shows in English as “Welcome User name“. The text “Welcome” is the English localization for the key “CS_LANDING_PAGE_WELCOME_TEXT”.
For all the label or master value keys, localization should be pushed to the database through the endpoints for all the languages added in the system. The SMS/Email messages are also added as keys for which values are pushed in all the languages to the database.
Localization format for keys
Sample of localization
In Hindi language
In English language
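The original samples are not reproduced here; the following is a minimal illustrative sketch of the same key pushed in English and in Hindi (the module name and locale codes are assumptions):

```json
[
  {
    "code": "CS_LANDING_PAGE_WELCOME_TEXT",
    "message": "Welcome",
    "module": "rainmaker-common",
    "locale": "en_IN"
  },
  {
    "code": "CS_LANDING_PAGE_WELCOME_TEXT",
    "message": "स्वागत",
    "module": "rainmaker-common",
    "locale": "hi_IN"
  }
]
```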
For the languages added in the system, if values are not pushed to the database then the key itself will appear in the UI for the labels or master data. If values for SMS/Email are not pushed, the SMS/Email cannot be received in that language.
Any one language from the multiple added languages can be set as default. For example, if English, Hindi and Kannada are the three languages added in StateInfo.json and Kannada is required to be the default language, then in StateInfo.json the language key of Kannada needs to be set as the value of "defaultLanguage".
Title
Link
StateInfo.json
To use the generic GET/POST SMS gateway, first, configure the service application properties
sms.provider.class=Generic
This will set the generic interface to be used. This is the default implementation, which can work with most SMS providers. The generic implementation supports the following:
GET or POST based API
Supports query params, form data, JSON Body
To configure the URL of the SMS provider use sms.provider.url property.
To configure the http method used configure the sms.provider.requestType property to either GET or POST.
To configure form data or json api set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively
To configure which data needs to be sent to the API below property can be configured:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementation which requires data that doesn't need to be directly passed to the REST API call
sms.config.map is a map of parameters and their values
Special variables that are mapped
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from kafka fetched message
$message maps to the message from the kafka fetched message
$<name> any variable that is not from above list, is first checked in sms.category.map and then in application.properties and then in environment variable with full upper case and _ replacing -, space or .
So if you use sms.config.map={'u':'$username', 'p':'password'}. Then the API call will be passed <url>?u=<$username>&p=password
Message success delivery can be controlled using below properties
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a , separated list of numbers or number patterns. To use patterns use X for any digit match and * for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, or the exact number 9999999999 and all numbers starting from 8888888800 to 8888888899
Few 3rd parties require a prefix of 0 or 91 or +91 with the mobile number. In such a case you can use sms.mobile.prefix to automatically add the prefix to the mobile number coming in the message queue.
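Putting the above together, a minimal illustrative application.properties sketch for a hypothetical GET-based provider; the URL, credentials and parameter names are assumptions:

```properties
sms.provider.class=Generic
sms.provider.url=https://smsprovider.example.com/send
sms.provider.requestType=GET
sms.provider.contentType=application/x-www-form-urlencoded
sms.provider.username=demo-user
sms.provider.password=demo-password
sms.senderid=EGOVS
sms.config.map={'uname':'$username', 'pwd':'$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message'}
sms.mobile.prefix=91
sms.blacklist.numbers=5*,9999999999
```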
Persister Service persists data in the database in a sync manner providing very low latency. The queries which have to be used to insert/update data in the database are written in a yaml file. The values which have to be inserted are extracted from the json using jsonPaths defined in the same yaml configuration. Below is a sample configuration which inserts data in a couple of tables.
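The original sample is not reproduced here; the following is a minimal illustrative sketch. Only the topic and table names come from this section, while the column lists, JSON paths and the serviceMaps/mappings wrapper keys are assumptions to be verified against an existing persister config.

```yaml
serviceMaps:
  serviceName: rainmaker-pgr
  mappings:
    - version: 1.0
      description: Persists complaint and address details
      fromTopic: save-pgr-request
      isTransaction: true
      queryMaps:
        - query: INSERT INTO eg_pgr_service_v2(id, tenantid, servicecode, description, createdtime) VALUES (?,?,?,?,?);
          basePath: $.service
          jsonMaps:
            - jsonPath: $.service.id
            - jsonPath: $.service.tenantId
            - jsonPath: $.service.serviceCode
            - jsonPath: $.service.description
            - jsonPath: $.service.auditDetails.createdTime
        - query: INSERT INTO eg_pgr_address_v2(id, tenantid, city, locality) VALUES (?,?,?,?);
          basePath: $.service.address
          jsonMaps:
            - jsonPath: $.service.address.id
            - jsonPath: $.service.address.tenantId
            - jsonPath: $.service.address.city
            - jsonPath: $.service.address.locality
```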
The above configuration is used to insert data published on the kafka topic save-pgr-request in the tables eg_pgr_service_v2 and eg_pgr_address_v2. Similarly, the configuration can be written to update data. Following is a sample configuration:
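Again, a minimal illustrative sketch of an update mapping rather than the original sample; the topic, columns and paths are assumptions:

```yaml
    - version: 1.0
      description: Updates complaint details
      fromTopic: update-pgr-request
      isTransaction: true
      queryMaps:
        - query: UPDATE eg_pgr_service_v2 SET description = ?, lastmodifiedtime = ? WHERE id = ?;
          basePath: $.service
          jsonMaps:
            - jsonPath: $.service.description
            - jsonPath: $.service.auditDetails.lastModifiedTime
            - jsonPath: $.service.id
```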
The above configuration is used to update the data in tables. Similarly, the upsert operation can be done using the ON CONFLICT() clause in psql. The following table describes each field in the configuration.
Variable Name
Description
serviceName
The module name to which the configuration belongs
version
Version of the config
description
Detailed description of the operations performed by the config
fromTopic
Kafka topic from which data has to be persisted in DB
isTransaction
Flag to enable/disable perform operations in Transaction fashion
query
Prepared Statements to insert/update data in DB
basePath
JsonPath of the object that has to be inserted/updated.
jsonPath
JsonPath of the fields that has to be inserted in table columns
type
Type of field
dbType
DB Type of the column in which field is to be inserted
Every service integrated with the egov-workflow-v2 service needs to first define the workflow configuration, which describes the states in the workflow, the actions that can be taken on these states, who can perform those actions, SLA, etc. This configuration is created using APIs and is stored in the DB. The configuration can be created at either the state level or the tenant level based on the requirements.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role Action mapping is added for the BusinessService API’s
Create and modify workflow configuration
Configure State level as well BusinessService level SLA
Control access to workflow actions from the configuration
Validates if the flow defined in the configuration is complete during the creation
Deploy the latest version of egov-workflow-v2 service
Add Role-Action mapping for the BusinessService APIs (preferably add _create and _update only for SUPERUSER; _search can be added for CITIZEN and required employee roles like TL_CEMP etc.)
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
Add businessService persister yaml path in persister configuration
Create the businessService JSON based on product requirement. Following is a sample json of a simple 2 step workflow where an application can be applied by citizen or counter employee and then can be either rejected or approved by the approver.
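The original sample is not reproduced here; the following is a minimal illustrative sketch of such a businessService JSON. The business service name, role codes, status names and SLA values are assumptions, and the exact field set should be verified against the egov-workflow-v2 API contract.

```json
{
  "RequestInfo": {},
  "BusinessServices": [
    {
      "tenantId": "pb",
      "businessService": "NewTL",
      "business": "tl-services",
      "businessServiceSla": 432000000,
      "states": [
        {
          "state": null,
          "applicationStatus": null,
          "isStartState": true,
          "isTerminateState": false,
          "actions": [
            { "action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN", "TL_CEMP"] }
          ]
        },
        {
          "sla": 86400000,
          "state": "APPLIED",
          "applicationStatus": "APPLIED",
          "isStartState": false,
          "isTerminateState": false,
          "actions": [
            { "action": "APPROVE", "nextState": "APPROVED", "roles": ["TL_APPROVER"] },
            { "action": "REJECT", "nextState": "REJECTED", "roles": ["TL_APPROVER"] }
          ]
        },
        { "state": "APPROVED", "applicationStatus": "APPROVED", "isStartState": false, "isTerminateState": true, "actions": [] },
        { "state": "REJECTED", "applicationStatus": "REJECTED", "isStartState": false, "isTerminateState": true, "actions": [] }
      ]
    }
  ]
}
```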
Once the businessService json is created add it in the request body of _create API of workflow and call the API to create the workflow.
To update the workflow, first search the workflow object using the _search API, then make changes in the businessService object, and then call _update with the modified search result. (States cannot be removed using the _update API as that would leave applications in that state in an invalid state. In such cases, first move all the applications in that state to a forward or backward state, and then disable the state directly through the DB.)
The workflow configuration can be used by any module which performs a sequence of operations on an application/Entity. It can be used to simulate and track processes in organisations to make it more efficient and increase accountability.
Integrating with the workflow service provides a way to have a dynamic workflow configuration which can be easily modified according to changing requirements. The modules don’t have to deal with any workflow validations, such as whether the user is authorised to take an action or whether documents are required to be uploaded at a certain stage, as these are automatically handled by the egov-workflow-v2 service based on the configuration defined. It also automatically keeps updating the SLA for all applications, which provides a way to track the time taken by an application to get processed.
To integrate, the host of egov-workflow-v2 should be overwritten in the helm chart.
/egov-workflow-v2/egov-wf/businessservice/_search should be added as the endpoint for searching workflow configuration. (Other endpoints are not required once workflow configuration is created)
The configuration can be fetched by calling _search API
Title
Link
Workflow Service Documentation
Link
_create
_update
_search
(Note: All the API’s are in the same postman collection therefore the same link is added in each row)
This documentation talks about building a new dashboard in the DSS and also defines the configurations required for the analytics service. The analytics microservice is responsible for building, fetching, aggregating, and computing the data on Elasticsearch into a consumable data response, which is later used for visualizations and graphical representations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge on JSON
Prior Knowledge on Elasticsearch Query Language
Prior Knowledge on Kibana
DSS setup
Adding new Roles for Dashboards
Adding a new Dashboard
Adding new Visualizations in existing Dashboard
Adding new charts for visualizations :
To add a new role, the RoleDashboardMappingsConf.json configuration file (roles node) has to be modified as below. In the roles array, every JSON object is unique based on the id. The name of the role is defined in the roleName attribute.
To assign a dashboard to a particular role, add the id and name of the dashboard in the dashboards array. This dashboard id is unique and is referenced in the MasterDashboardConfig.json file configuration.
Any number of roles & dashboards can be added
Below is a sample to add a new role object
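The original sample is not reproduced here; a minimal illustrative sketch, with the role name and dashboard id as assumptions:

```json
{
  "roles": [
    {
      "id": 10,
      "roleName": "STADMIN",
      "dashboards": [
        {
          "name": "DSS_STATE_DASHBOARD",
          "id": "state-dashboard"
        }
      ]
    }
  ]
}
```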
2. Adding a new Dashboard
To add a new dashboard, the MasterDashboardConfig.json (dashboards node) has to be modified as below.
Add the new JSON object in the dashboards array. Add the dashboard name in the name attribute; the id should be unique, as it is used for assigning a role to the dashboard. Visualizations are described below.
In the dashboards array, add a new dashboard as given below.
To add new visualizations, the MasterDashboardConfig.json (vizArray node) has to be modified again as below. Add the visualization name to the name attribute. All the visualizations are added in the vizArray array. vizArray contains the name of the visualization, vizType as the visual type, noUnit, and charts.
The charts array contains chart API configuration query details. The id refers to the key in the chartApiConf.json file, used to fetch the required data from the Elasticsearch index, and the name attribute refers to the name of the chart in localization.
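A minimal illustrative sketch of a dashboard entry with one visualization; all names, ids and the vizType value are assumptions:

```json
{
  "name": "DSS_TRADE_LICENSE_DASHBOARD",
  "id": "tradelicense-dashboard",
  "vizArray": [
    {
      "name": "DSS_TL_TOTAL_APPLICATIONS",
      "vizType": "metric-collection",
      "noUnit": true,
      "charts": [
        {
          "id": "totalTLApplications",
          "name": "DSS_TL_TOTAL_APPLICATIONS"
        }
      ]
    }
  ]
}
```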
To add a new chart, chartApiConf.json has to be modified as shown below. A new chartId (the key of the JSON) has to be added with the chart node object. The chartId JSON contains the chart name, chart type, valueType, documentType, aggregationPaths and queries attributes.
Types of the chart: Metric, Pie, Line, Table, and xtable
aggregationPaths: the query result is taken from this path.
valueType: based on the value type, the result is shown in the UI. The different types of valueType are amount, percentage, and number.
The queries array contains the information of the module, requestQueryMap (request params of the API), dateRefField (the date field on which data is filtered), indexName, and aggrQuery. Queries for multiple modules can be added in a single chart.
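A minimal illustrative sketch of a chartApiConf.json entry; the chart key, index name, field names and the Elasticsearch aggregation query are assumptions:

```json
{
  "totalTLApplications": {
    "chartName": "DSS_TL_TOTAL_APPLICATIONS",
    "chartType": "metric",
    "valueType": "number",
    "documentType": "_doc",
    "aggregationPaths": ["Total Applications"],
    "queries": [
      {
        "module": "TL",
        "indexName": "tlindex-v1",
        "requestQueryMap": "{\"tenantId\":\"Data.tenantId\"}",
        "dateRefField": "Data.auditDetails.createdTime",
        "aggrQuery": "{\"aggs\":{\"Total Applications\":{\"value_count\":{\"field\":\"Data.applicationNumber.keyword\"}}}}"
      }
    ]
  }
}
```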
For more information please refer the reference documents listed below.
Through the report service, useful data is shown for a specific module based on given criteria like date, locality, financial year, etc.
For example, in the PT dump report of the property tax service, you select a from date, a to date, a financial year, etc., and based on the criteria you can see all the data fulfilling it. The response shows all the details of properties paid between the given from date and to date; if a financial year is selected, it shows the properties paid for that specific financial year.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository where Reports are configured and knowledge on YAML.
Prior Knowledge of YAML.
Prior Knowledge of SQL queries.
Prior Knowledge of the relation between the tables for which module you are going to write a report.
User can write queries (like SQL queries) for fetching the real-time data to display in a UI application.
User can apply filters like from date, to date, financial year, etc based on the report configuration.
User can download the result in PDF and XLS format.
User can select or deselect the columns user wants to see.
User can choose the number of records he/she wants to see on a page.
Once the changes have been done in the report configuration file we have to restart the report service so the report service will read the new configuration.
<Module Name>=file:///work-dir/configs/reports/config/<report file name>.yml
ex: pgr=file:///work-dir/configs/reports/config/pgr-reports.yml
Write the report configuration. Once it is done commit those changes.
Add the role and actions for the new report.
Restart the MDMS and report service.
Rainmaker has a report framework to configure new reports. As part of the report configuration, we have to write a native SQL query to get the required data for the report. So if the query takes a long time to execute or the query result contains a lot of data, then it impacts the overall application performance.
The following are cases where application performance issues can be seen because of heavy reports:
Filtering on a long date range or applying too few filters, which in turn returns huge data
Joining multiple tables to get the required data while missing indexes on the join columns
Implementing conditional logic inside the queries itself
Writing multiple sub-queries inside a single query for getting required data
Because of heavy reports, the following things impact the platform:
When a complex query is executed on the database, a thread from the connection pool is blocked to execute the query
When threads from the connection pool are completely blocked, the application becomes very slow for incoming requests
When the max request timeout is crossed, the API gateway returns a timeout error, but the connection thread on the database remains active; all such idle threads occupy database resources like memory and CPU, which in turn increases the load on the database
Sometimes, the time taken by huge queries leads to broken pipe issues, which cause memory leaks and out-of-heap-memory issues; because of this, the service frequently restarts automatically
If a query returns huge data, the browser and the application become unresponsive
A decision support system (DSS) is a composite tool that collects, organizes, and analyzes business data to facilitate quality decision-making for management, operations, and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives, and business models. DSS analysis helps organizations to identify and solve problems, and make decisions
This document explains the steps on how to define the configurations & set up for the new dashboard in the DSS.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot
Prior Knowledge of Kafka
Prior Knowledge of Elastic Search
Prior Knowledge of Kibana
Prior Knowledge of EQL (Elastic Query Language)
Prior Knowledge of JSON
Creating a DSS dashboard schema
DSS ingest service APIs
Ingest service configurations
Creating Kafka sync connector to push the data to Elastic search
Before starting to index the DSS collection v2 index, we should create the schema in ES using the Kibana query given in the below file.
2. DSS ingest service API
3. Ingest service configurations
Transform collection schema for V2
This transform collection v1 configuration file is used to map with the incoming data. This mapped data will go inside the data object in the DSS collection v2 index.
Here: $i, the variable value that gets incremented for the number of records of paymentDetails.
$j, the variable value that gets incremented for the number of records of billDetails.
Enrichment Domain Configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, then the Collection domain config is picked. And based on the business type specified in the data, the right config is picked.
In order to enhance the data of Collection, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
Topic Context Configuration
Topic Context Configuration is an outline to define which data is received on which Kafka Topic.
Indexer Service and many other services are sending out data on different Kafka topics. If the Ingest Service is asked to receive that data and pass it through the pipeline, the context and the version of the data being received have to be set. This configuration is used to identify which Kafka topic the data was consumed from and what the mapping for it is.
JOLT Domain Transformation Schema
JOLT is a JSON to JSON Transformation Library. In order to change the structure of the data and transform it in a generic way, JOLT has been used.
While the transformation schemas are written for each Data Context, the data is transformed against the schema to obtain transformed data.
Validator Schema
Validator Schema is a configuration schema library from Everit. By passing the data against this schema, it ensures that the data abides by the rules and requirements of the schema which has been defined.
Enhance Domain configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, then the Collection domain config is picked, and based on the business type specified in the data, the right config is picked and the final data is placed inside the domain object.
In order to enhance the data of Collection, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
For Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties or in the environment files.
es.push.direct=false
If DSS collection index data is indexed directly (without the Kafka connector) to ES through the ingest pipeline, then direct push must be enabled in the application properties or in the environment files.
es.push.direct=true
4. Creating a Kafka sync connector to push the data to the Elasticsearch
Configure the Kafka topics in the environments or Ingest pipeline application properties as shown below.
To start the indexing, we create a connector that takes data from the topic and pushes it to the index mentioned in "transforms.TopicNameRouter.replacement". The Elasticsearch host has to be mentioned in the connector's "connection.url".
To create the Kafka connector run the below curl command inside the playground pod:
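The original command is not reproduced here; the following is a minimal illustrative sketch, assuming a hypothetical Kafka Connect host, topic and index names. It uses the standard Confluent Elasticsearch sink connector and the RegexRouter transform; verify the property keys against the connector version actually deployed.

```bash
curl -X POST http://kafka-connect.kafka-cluster:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "egov-dss-es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "egov-dss-ingest-enriched",
      "connection.url": "http://elasticsearch-data-v1.es-cluster:9200",
      "key.ignore": "true",
      "schema.ignore": "true",
      "transforms": "TopicNameRouter",
      "transforms.TopicNameRouter.type": "org.apache.kafka.connect.transforms.RegexRouter",
      "transforms.TopicNameRouter.regex": ".*",
      "transforms.TopicNameRouter.replacement": "dss-collection_v2"
    }
  }'
```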
The objective is to avoid unnecessary repetition of code for generating ids and to have centralized control over the logic, so that the maintenance burden on developers is reduced, by providing a config-based application that can be used without writing a single line of code.
Prior Knowledge of Java/J2EE
Prior Knowledge of Spring Boot, Flyway
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
The application exposes a REST API to take in requests and provide the ids in response in the requested format. The requested format can be composed of current date information, tenantId*, a random number, and a sequence-generated number. An id can be generated by providing a request with any of the above-mentioned information.
*TenantId - is the string representing the individual units called tenants in the DIGIT system which can be a city, town or village.
For instance: An Id Amritsar-PT-2019/09/12-000454-99 contains
Amritsar - which is the name of the city
PT - fixed string representing the module code(PROPERTY TAX)
2019/09/12 - date
000454 - sequence generated number
99 - random number
The id generated in the above-mentioned example needs the following format
[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]
Everything in the square brackets is replaced with the appropriate values by the app.
ID-FORMAT CONFIGURATION
By default, the IDGen service reads the configuration from MDMS. DB Configuration requires access to the DB, so the new preferred method for the configuration is MDMS. The configuration needs to be stored in common-masters/IdFormat.json in MDMS
It is recommended to have the IdFormat as a state-level master. To read the configuration from the DB instead, set the environment variable IDFORMAT_FROM_MDMS to false.
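A minimal illustrative sketch of an MDMS IdFormat.json entry; the tenant, idname and format values are assumptions, and the exact field names should be checked against the MDMS master in use:

```json
{
  "tenantId": "pb",
  "moduleName": "common-masters",
  "IdFormat": [
    {
      "idname": "pt.assessmentnumber",
      "format": "PT-AS-[cy:yyyy/mm/dd]-[SEQ_ASSESSMENT_NUM]-[d{4}]"
    }
  ]
}
```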
ID-FORMAT-REPLACEABLE
[FY:] - represents the financial year, the string is replaced by the value of starting year and the last two numbers of the ending year separated by a hyphen. For eg: 2018-19 in the case of the financial year 2018 to 2019.
[cy:] - any string that starts with cy is considered as the date format. The values after the cy: is the format using which output is generated.
[d{5}] - d represents the random number generator, the length of the random number can be specified in flower brackets next to d. If the value is not provided then the random length of 2 is assigned by default.
[city] - The string city is replaced by the city code provided by the respective ulb in the location services.
[SEQ_*] - Strings starting with SEQ are considered sequence names, and the sequence gets queried to get the next sequence number. If the string doesn't start with the namespace containing "SEQ" then it is not considered a sequence. If the sequence is absent in the DB, the system throws a DB error.
[tenantid] - replaces the placeholder with the tenantid passed in the request object
[tenant_id] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`
[TENANT_ID] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`, and changes the case to upper case
idName v/s format
When you use both idName and format in a request, IDGen first checks whether the format for the given idName exists. If it does not exist, then the default format is used.
STATE v/s ULB LEVEL SEQUENCES
If you want a state-level sequence, then you need to use a fixed sequence name:
{
"format": "PT/[CITY.CODE]/[fy:yyyy-yy]/[SEQ_RCPT_PT_RECEIPT]",
}
But if you want a ULB level sequence, the sequence name should be dynamic based on the tenantid as given in the below example
{
"format": "PT/[CITY.CODE]/[fy:yyyy-yy]/[SEQ_RCPT_PT_[TENANT_ID]]",
}
SEQUENCES AND THEIR CREATION
The SEQ_* replaceable used in id generation is by default expected to use the sequence that already exists in the DB. But this behaviour can be changed and controlled using two environment variables while deploying the service.
AUTOCREATE_NEW_SEQ: Default is set to false. When set to true, this auto-creates sequences when the format has been derived using provided idName. Since the idName format comes from DB or MDMS, it is a trusted value and this value is set to true. This makes sure that DB configuration is not required as long as MDMS has been configured. It is recommended that each service using idgen should generate id using idName instead of just using passing the format directly. This makes sure that DB configuration is not required for creating sequences.
AUTOCREATE_REQUEST_SEQ: Default is set to false. When set to true, this auto-creates sequences when the format has been derived using the format parameter from the request. It is recommended to keep this setting to false since anyone with access to idgen can create any number of sequences in DB and overload the DB. Though during the initial setup of an environment this variable can be set to true to create all the sequences when the initial flows are run from the UI and to generate the sequences. And afterwards, the flags should be disabled.
Add MDMS configs required for ID Gen Service and restart the MDMS service
Deploy the latest version of ID Generation Service
Add Role-Action mapping for APIs
The ID Gen service is used to generate unique ID numbers for all miscellaneous / adhoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of generating the unique identifier of the entity calling ID Gen service.
To integrate, the host of the idgen service should be overwritten in the helm chart.
/egov-idgen/id/_generate should be added as the endpoint for generating ID numbers in the system.
Roles define the permissions of a user to perform a group of tasks. The tasks are created as API calls that perform certain actions when a request for those calls is sent by the system. For example, the key tasks for a Trade License application include initiate/apply, forward, approve or payment. For Trade License initiation, two API calls, “create” and “update”, are required. The create API creates and saves the application in the database and returns an application number. The update API saves the required attached documents in the file store and returns the success acknowledgement message of the application created. Access permission for these create and update APIs is granted to the roles named Citizen and TL Counter Employee. Access permission is granted by mapping roles with APIs. A user assigned the role Citizen or TL Counter Employee can initiate/apply the Trade License application.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
By mapping roles with APIs, permission to perform a certain task can be restricted based on the requirement. For example, only a user with the role TL Counter Employee or Citizen can initiate the Trade License applications.
After mapping Roles with APIs, the MDMS service needs to be restarted to read the newly added data.
APIs are added in actions-test.json and are referred to as actions. In MDMS, APIs are added in the file actions-test.json under the ACCESSCONTROL-ACTIONS-TEST folder. API Sample:
APIs are added as elements of the "actions-test" array, with the request url and other required details.
Each action is defined as a key-value pair:
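A minimal illustrative sketch of one such action entry; the id, url and other field values are assumptions, and the full set of keys should be verified against the actual actions-test.json in the MDMS repository:

```json
{
  "id": 1754,
  "name": "Create TL Application",
  "url": "/tl-services/v1/_create",
  "displayName": "Create TL Application",
  "enabled": true
}
```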
Mapping of Roles and APIs/action is added in roleactions.json, under the folder ACCESSCONTROL-ROLEACTIONS. Sample mapping:
Role and API/action mapping is added as an array element under array roleactions. Each mapping is defined with key-value pairs. keys are rolecode, actionid, actioncode and tenantId.
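A minimal illustrative sketch of one such mapping, using the keys listed above; the values are assumptions:

```json
{
  "rolecode": "TL_CEMP",
  "actionid": 1754,
  "actioncode": "",
  "tenantId": "pb"
}
```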
An eGov core application that handles uploading different kinds of files to servers including images and different document types.
Prior Knowledge of Java/J2EE
Prior Knowledge of Spring Boot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior knowledge of AWS and Azure
The filestore application takes in a request object which contains an image/document or any kind of file and stores them in a disk/AWS/azure. Depending upon the configurations, additional implementations can be written for the app interface to interact with any other remote storage.
The requested file to be uploaded is taken in form of a multi-part file then saved to the storage and a uuid is returned as a unique identifier for that resource. This is used to fetch the documents later.
In the case of images, the application creates three more additional copies of the file in the likes of large, medium and small for the usage of thumbnails or low-quality images in the case of mobile applications.
The search API takes the uuid, tenantid as mandatory url params and a few optional parameters and returns the presigned url of the files from the server. In the case of images, a single string containing multiple urls separated by commas is returned representing different sizes of images stored.
The application is present among the core group of applications available in the eGov-services git repository. The Spring Boot application needs the lombok extension added in your IDE to load it. Once the application is up and running, API requests can be posted to the URL and the returned file store ids can be used to retrieve the files.
In case of IntelliJ, the plugin can be installed directly; for Eclipse, the lombok jar location has to be added in the eclipse.ini file in this format: -javaagent:lombok.jar.
The application needs at least one type of storage available for it to store the files. It can be either file storage, AWS S3 or azure. More storage types can be added by extending the application interface too.
For any of the file storage options to work, there are some application properties that need to be configured.
DiskStorage
The mount path of the disk should be provided in the following variable to save files on the disk: file.storage.mount.path=path.
The following are the variables that need to be populated based on the AWS/Azure account you are integrating with.
How to enable Minio SDC
isS3Enabled=true(Should be true)
aws.secretkey={minio_secretkey}
aws.key={minio_accesskey}
fixed.bucketname=egov-rainmaker(Minio bucket name)
minio.source=minio
How to enable AWS S3
isS3Enabled=true(Should be true)
aws.secretkey={s3_secretkey}
aws.key={s3_accesskey}
fixed.bucketname=egov-rainmaker(S3 bucket name)
minio.source=minio
AZURE
isAzureStorageEnabled - informing the application whether Azure is available or not
azure.defaultEndpointsProtocol - type of protocol https
azure.accountName - name of the user account
azure.accountKey - secret key of the user account
NFS
isnfsstorageenabled-informing the application whether NFS is available or not <True/False>
file.storage.mount.path - <NFS location, example /filestore>
source.disk - diskStorage - name of storage
disk.storage.host.url=<Main Domain URL>
Allowed formats to be uploaded
# the default format of the allowed file formats goes in a set bracket with string inside it - {"jpg","png"} - please follow the same.
allowed.formats.map={jpg:{'image/jpg','image/jpeg'},jpeg:{'image/jpeg','image/jpg'},png:{'image/png'},pdf:{'application/pdf'},odt:{'application/vnd.oasis.opendocument.text'},ods:{'application/vnd.oasis.opendocument.spreadsheet'},docx:{'application/x-tika-msoffice','application/x-tika-ooxml','application/vnd.oasis.opendocument.text'},doc:{'application/x-tika-msoffice','application/x-tika-ooxml','application/vnd.oasis.opendocument.text'},dxf:{'text/plain'},csv:{'text/plain'},txt:{'text/plain'},xlsx:{'application/x-tika-ooxml','application/x-tika-msoffice'},xls:{'application/x-tika-ooxml','application/x-tika-msoffice'}}
The key in the map is the visible extension of the file types, the values on the right in curly braces are the respective tika types of the file. these values can be found on the tika website or by passing the file through the tika functions.
Upload POST API to save the files on the server
Search Files GET API to retrieve file based only on id and tenantid
Search URLs GET API to retrieve pre-signed urls for a given array of ids
Deploy the latest version of Filestore Service
Add Role-Action mapping for APIs
The filestore service is used to upload and store documents that citizens add while availing services from ULBs.
Can perform file upload independently without having to add file upload specific logic in each module.
To integrate, the host of the filestore module should be overwritten in the helm chart.
/filestore/v1/files should be added as the endpoint for uploading files in the system.
/filestore/v1/files/url should be added as the search endpoint. This method handles all requests to search existing files depending on different search criteria.
Reporting Service is a service running independently on a separate server. The main objective of this service is to provide a common framework for generating reports. This service loads the report configuration from a yaml file at the run time and provides the report details by using a couple of APIs.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Advanced Knowledge of PostgreSQL.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
JSONPath for filtering required data from json objects.
Provides an easy way to add reports on the fly just by adding configurations without any coding effort.
Provides flexibility to customise result column names in the config.
Provides flexibility to fetch data from DB and also from some other service returning required json objects when it is not possible to get all required data from DB.
Provides functionality to add filters as per requirements before actually fetching data for the report.
Config file
A YAML (xyz.yml) file contains configuration for report requirements.
API
A REST endpoint to fetch data based on the configuration.
Inline-table
If we also want to show data from some external service with data coming from DB in reports we use inline-tables. The data from external service is stored in inline-table and then used as any normal DB table to get data. This table is short lived and stays only for the time when a query is being executed. It is never stored in DB. We provide JSON paths in an ordered manner corresponding to each column in the table. These JSON paths will be used to extract required data from the external service’s response. For configs please see ‘How to Use’ section.
Configuration
As mentioned above, report service uses a config file per module to store all the configurations of reports pertaining to that module. Report service reads multiple such files at start-up to support reports of all the configured modules. The file contains the following keys:
reportName: name of report, to be used with module name to identify any report config
summary: summary of report
version: version of the report
moduleName: name of the module to which the report belongs to
externalService: To be used when some of the report data needs to be fetched from external service through inline-tables. It contains the following fields
entity: JSON Path to filter json arrays(result to be turned into tables) from returned json
apiURL: API URL of the external service
keyOrder: order of JSON object keys to form table columns from JSON object arrays
tableName: name to be given to represent this transformed data which will be used as a table in the SQL query
sourceColumns : These represent the final data sent by service on GET_DATA API call. The order of sourceColumns in the Config is the same as that of columns in the result. Each sourceColumns represent one column in the result. For each column, data is picked after executing the final SQL query formed after appending groupby, orderby, search params into base query
name: name of the column to fetch data from query results, must be there in query results
label: custom column label
type: data type of column
source: module name
total: whether column total required on the front end
searchParams:
name: name of search param. Must match variable used in search clause
label: a custom label for viewing on the front end
type: type of the search param, e.g. number, string, singlevaluelist etc. If the type is ‘singlevaluelist’, then a pattern is used to populate the possible values from which the user can select.
source: module name
isMandatory: If the user must fill this searchparam before requesting report data
searchClause: SQL search clause for corresponding search params to filter results, to be appended in base query Ex:- AND fnoc.tenantId IN ($ulb) Here $ulb will be replaced by user inputs
Query: Main/base query clause for fetching report data from DB and custom tables formed after fetching data from external service
Orderby: order by clause to be appended into base query
Groupby: group by clause to be appended into base query
additionalConfig: to provide additional custom configs which are not present above
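A minimal illustrative sketch of a report configuration using the keys above. The report name, table, columns and the top-level ReportDefinitions wrapper are assumptions and should be checked against an existing report config:

```yaml
ReportDefinitions:
  - reportName: TLApplicationsReport
    summary: Trade License applications report
    version: 1.0.0
    moduleName: tl
    sourceColumns:
      - name: applicationnumber
        label: reports.tl.applicationNumber
        type: string
        source: tl
        total: false
      - name: tradename
        label: reports.tl.tradeName
        type: string
        source: tl
        total: false
    searchParams:
      - name: fromDate
        label: reports.tl.fromDate
        type: epoch
        source: tl
        isMandatory: false
        searchClause: AND tl.applicationdate >= $fromDate
    query: SELECT applicationnumber, tradename FROM eg_tl_tradelicense tl WHERE tl.tenantid = $tenantid
    orderby: ORDER BY tl.applicationdate DESC
```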
Call the MDMS or any other API with the post method
Configuring the post object in the yaml itself, like below:
externalService:
  entity: $.MdmsRes.egf-master.FinancialYear
  keyOrder: finYearRange,startingDate,endingDate,tenantId
  tableName: tbl_financialyear
  stateData: true
  postObject:
    tenantId: $tenantid
    moduleDetails:
      - moduleName: egf-master
        masterDetails:
          - name: FinancialYear
            filter: "[?(@.id IN [2,3] && @.active == true)]"
Keep the post object in a separate json file externally and call at runtime.
There are two API calls to report service ‘GET METADATA’ and ‘GET DATA’.
GET METADATA
This request to report service is made to get metadata for any report. The metadata contains information about search filters to be used in the report before actually sending request to get actual data. The user selected values are then used in GET_DATA request to filter data.
endpoint: /report/{moduleName}/{report name}/metadata/_get
moduleName: used to define the name of the module which contains the current report
Body: The Body consists of the following:
RequestInfo: Header details as used on the egov platform
tenantId: tenantId of ULB
reportName: name of the report to be used
Instance
Body
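A minimal illustrative sketch of the metadata request body; the RequestInfo contents, tenantId and report name are assumptions:

```json
{
  "RequestInfo": {
    "apiId": "emp",
    "ver": "1.0",
    "ts": 0,
    "authToken": "<auth-token>"
  },
  "tenantId": "pb.amritsar",
  "reportName": "TLApplicationsReport"
}
```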
GET DATA
This request to report service is used to get data for the report. Inputs given by the user for filters are sent in the request body. These filter values are used while querying data from DB.
endpoint: report/{moduleName}/{report name}/_get
moduleName: used to define the name of the module which contains the current report
Body: The Body consists of the following:
RequestInfo: Header details as used on the egov platform
tenantId: tenantId of ULB
reportName: name of the report to be used
Array of search params corresponding to each of the filled filters by the user. Each searchparam contains:-
Name: name of the filter
Input: user-selected value
Instance
Body
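A minimal illustrative sketch of the data request body, with one filled search param; the values are assumptions:

```json
{
  "RequestInfo": {
    "apiId": "emp",
    "ver": "1.0",
    "ts": 0,
    "authToken": "<auth-token>"
  },
  "tenantId": "pb.amritsar",
  "reportName": "TLApplicationsReport",
  "searchParams": [
    { "name": "fromDate", "input": 1609459200000 }
  ]
}
```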
Write configuration as per your requirement. The structure of the config file is explained in the Configuration Details section.
Provide the absolute path of the file mentioned in Point 3 to DevOps, to add it to the file-read path of the report service. The file is added to the environment manifest file for it to be read at the start-up of the application.
Deploy the latest version of the report service app.
Add Role-Action mapping for APIs.
Use the module-name as path parameters in the URL of the requests for report service with the required request body.
The report service provides a common framework to generate reports and show the report details based on the search criteria.
Provide a common framework for generating reports
Provide functionality to create new ad-hoc reports with minimal efforts
Avoid writing code again in case of new report requirements
Makes it possible to create reports with only knowledge of SQL and JSONPath
Provides metadata about the report.
Provides the data for the report.
Reload the configuration at runtime
To integrate, the host of the report service should be overwritten in the helm chart.
The API should be mentioned in ACCESSCONTROL-ACTIONS-TEST. Refer example below.
Add Role-Action mapping for APIs.
An API Gateway provides a unified interface for a set of microservices so that clients do not need to know about all the details of microservices internals.
Digit uses Zuul as an edge service that proxies requests to multiple back-end services. It provides a unified “front door” to our ecosystem. This allows any browser, mobile app or other user interfaces to consume underlying services.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
egov-user service is running
egov-accesscontrol service is running
Provides easier API interface to clients
Can be used to prevent exposing the internal micro-services structure to the outside world.
Allows to refactor microservices without forcing the clients to refactor consuming logic
Can centralize cross-cutting concerns like security, monitoring, rate limiting etc
Zuul has mainly four types of filters that enable us to intercept the traffic in different timelines of the request processing for any particular transaction. We can add any number of filters for a particular url pattern.
pre filters – are invoked before the request is routed.
post filters – are invoked after the request has been routed.
route filters – are used to route the request.
error filters – are invoked when an error occurs while handling the request.
Microservice authentication and security
Authorization
API Routing
Open APIs using Whitelisting
RBAC filter
Logout filter for finance module
Property module tax calculation filter for fire cess
Request enrichment filter:
Addition of co-relation id
Addition of authenticated user’s userinfo to requestInfo.
Error filter:
Error response formatting
Validation Filter to check if a tenant of a particular module is enabled or not.
Multitenancy Validation Filter. Take the tenant id from Req body or Query Param and validate against additional tenant role or primary tenant role.
DevOps efficiency: API response time logging and sending a notification if an API takes more time.
Rate Throttling
Routing Property
For each service, the below-mentioned properties have to be added in routes.properties:
zuul.routes.{serviceName}.path = /{context path of service}/**
zuul.routes.{serviceName}.stripPrefix = {true/false}
zuul.routes.{serviceName}.url = {service host name}
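An illustrative example for a hypothetical property tax service; the service name, context path and host are assumptions:

```properties
zuul.routes.pt-services-v2.path = /pt-services-v2/**
zuul.routes.pt-services-v2.stripPrefix = false
zuul.routes.pt-services-v2.url = http://pt-services-v2.egov:8080
```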
Rate Limiting Property
For endpoints which require rate throttling, the below-mentioned property has to be added in limiter.properties.
Deploy the latest version of zuul service.
Add zuul routing context paths and service hostname in the configuration.
The zuul service is used to act as an API gateway for services that citizens avail of from the ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of routing and authorizing users for accessing resources.
To integrate, the host of the zuul module should be overwritten in the helm chart.
The objective of this service is to create a common point to manage all the SMS notifications being sent out of the platform. Notification SMS service consumes SMS from the Kafka notification topic and processes them to send it to a third party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of third party API integration
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send SMS notifications to users
Support localised SMS
Easily configurable with different SMS service providers
The implementation of the consumer is present in the directory src/main/java/org/egov/web/notification/sms/service/impl.
These are the current providers available:
Generic
Console
MSDG
The implementation to be used can be configured by setting sms.provider.class.
The Console implementation just prints the mobile number and message to the console.
The Generic implementation is the default, and can work with most SMS providers. It supports the following:
GET or POST based API
Supports query params, form data, JSON Body
To configure the URL of the SMS provider, use the sms.provider.url property.
To configure the HTTP method used, configure the sms.provider.requestType property to either GET or POST.
To configure form data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively.
To configure which data needs to be sent to the API, the below properties must be configured:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementation which requires data that doesn't need to be directly passed to the REST API call.
sms.config.map is a map of parameters and their values.
Special variables that are mapped:
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from the Kafka fetched message
$message maps to the message from the Kafka fetched message
$<name>: any variable that is not from the above list is first checked in sms.category.map, then in application.properties, and then in the environment variables with full upper case and _ replacing -, space or .
So if you use sms.config.map={'u':'$username', 'p':'password'}, then the API call will be passed <url>?u=<$username>&p=password
Message Success or Failure
Message success delivery can be controlled using the below properties:
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response, set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response.
Blacklisting or Whitelisting numbers
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a comma-separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, the exact number 9999999999, and all numbers from 8888888800 to 8888888899.
Prefixing
Few 3rd parties require a prefix of 0 or 91 or +91 with the mobile number. In such a case you can use sms.mobile.prefix to automatically add the prefix to the mobile number coming in the message queue.
Error Handling
There are different topics to which the service will send messages. Below is a list of the same:
In an event of a failure to send SMS, if kafka.topics.backup.sms is specified, then the message will be pushed on to that topic.
Any SMS which expires due to Kafka lags, or some other internal issues, will be passed to the topic configured in kafka.topics.expiry.sms.
If a backup topic is not configured, then in an event of an error the message is delivered to kafka.topics.error.sms.
Following are the properties in the application.properties file in notification SMS service which are configurable.
Add the variables present in the above table in a particular environment file
Deploy the latest version of egov-notification-sms service.
Notification SMS service consumes SMS from the Kafka notification topic and processes them to send it to a third party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Provide an interface to send notification SMS on the user mobile number
Support SMS in various languages
To integrate, create the SMS request body given in the example below. Provide the correct mobile number and message in the request body and send it to the kafka topic:- egov.core.notification.sms
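A minimal illustrative sketch of such a message pushed to the topic; mobileNumber and message are the fields described in this document, while the category field is an assumption and may not be required by all versions of the service:

```json
{
  "mobileNumber": "9999999999",
  "message": "Dear citizen, your application has been approved.",
  "category": "NOTIFICATION"
}
```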
The notification-sms service reads from the queue and sends the sms to the mentioned phone number using one of the SMS providers configured.
Persister service provides a framework to persist data in a transactional fashion with low latency based on a config file. Removes repetitive and time-consuming persistence code from other services.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior Knowledge of PostgreSQL.
Prior Knowledge of JSONQuery in Postgres (similar to PostgreSQL with a few aggregate functions).
Kafka server is up and running.
Persist data asynchronously using Kafka providing very low latency
Data is persisted in batch
All operations are transactional
Values in prepared statement placeholder are fetched using JsonPath
Easy reference to the parent object using ‘{x}’ in jsonPath, which substitutes the value of the variable x in the JsonPath with the value of x for the child object (explained in detail below in the doc).
Supported data types ARRAY("ARRAY"), STRING("STRING"), INT("INT"),DOUBLE("DOUBLE"), FLOAT("FLOAT"), DATE("DATE"), LONG("LONG"),BOOLEAN("BOOLEAN"),JSONB("JSONB")
Persister uses the configuration file to persist data. The key variables are described below:
serviceName: Name of the service to which this configuration belongs.
description: Description of the service.
version: The version of the configuration.
fromTopic: The Kafka topic from which data is fetched
queryMaps: Contains the list of queries to be executed for the given data.
query: The query to be executed, in the form of a prepared statement.
basePath: Base of the JSON object from which data is extracted.
jsonMaps: Contains the list of jsonPaths for the values in placeholders.
jsonPath: The jsonPath to fetch the variable value.
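Putting these keys together, a minimal sketch of a persister configuration is shown below. The service, topic, table and JSON paths are hypothetical, and the real config format may nest these keys differently; this only strings together the keys described above.
serviceName: demo-service
description: Persists demo entities
version: 1.0.0
fromTopic: save-demo-entity
queryMaps:
  - query: INSERT INTO eg_demo_entity (id, tenantid, createdtime) VALUES (?, ?, ?);
    basePath: $.Entity
    # one jsonPath per placeholder, in order
    jsonMaps:
      - jsonPath: $.Entity.id
      - jsonPath: $.Entity.tenantId
      - jsonPath: $.Entity.auditDetails.createdTime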
Bulk Persister
To persist large quantities of data, the bulk setting in the persister can be used. It is mainly used when migrating data from one system to another. The bulk persister has the following two settings:
Any Kafka topic containing data that has to be bulk persisted should have '-batch' appended at the end of the topic name, for example: save-pt-assessment-batch.
Every incoming request [via Kafka] is expected to have a version attribute set (jsonPath: $.RequestInfo.ver) if versioning is to be applied.
If the request version is absent or invalid (not semver) in the incoming request, then the default version defined by the property default.version=1.0.0 in application.properties is used.
The request version is then matched against the loaded persister configs and applied appropriately.
Write configuration as per the requirement. Refer to the example given earlier.
In the environment file, mention the file path of the configuration under the variable egov.persist.yml.repo.path. While mentioning the file path, add file:///work-dir/ as a prefix, for example: egov.persist.yml.repo.path = file:///work-dir/configs/egov-persister/abc-persister.yml. If there are multiple files, separate them with a comma (,).
Deploy the latest version of egov-persister service and push data on Kafka topic specified in config to persist it in DB.
The persister configuration can be used by any module to store records in a particular table of the database.
Insert/Update Incoming Kafka messages to Database.
Add or modify the Kafka message before putting it into the database.
Persist data asynchronously.
Data is persisted in batch.
Write configuration as per your requirement. The structure of the config file is explained above in the same document.
Check-in the config file to a remote location preferably Github.
Provide the absolute path of the checked-in file to DevOps, to add it to the file-read path of egov-persister. The file will be added to egov-persister's environment manifest file for it to be read on start-up of the application.
Run the egov-persister app and push data on Kafka topic specified in config to persist it in DB
An eGov core application that provides locale-specific components and translation of text for the eGov group of applications.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Redis and Postgres.
The localization application stores the locale data in the format of key and value along with the module, tenantId and locale. Module defines which eGov application owns the locale data, and tenantId does the same for the tenant. Locale refers to the specific location where the data is being added.
The request can be posted through the post API with the above-mentioned variables in the request body.
Once posted the same data can be searched based on the module, locale and tenantId as keys.
The Data posted to the localization service is permanently stored in the database and is loaded into the Redis cache for easy access. Each time the new data is added to the application the Redis cache is refreshed.
Deploy the latest version of Localization Service.
Add Role-Action mapping for APIs.
The Localization service is used to store key-value pairs of metadata in different languages for all miscellaneous / adhoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of having multiple languages in modules.
To integrate, the host of the localization-service module should be overwritten in the helm chart.
/localization/messages/v1/_upsert should be added as the create endpoint for creating localization key-value pairs in the system.
/localization/messages/v1/_search should be added as the search endpoint. This method handles all requests to search existing records depending on different search criteria.
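As an illustration, a request to the _upsert endpoint might look roughly like the sketch below; the field names are assumptions based on common usage and should be verified against the API contract:
{
  "RequestInfo": { "authToken": "..." },
  "tenantId": "pb",
  "messages": [
    {
      "code": "TL_APPROVED_SMS",
      "message": "Your Trade License has been approved.",
      "module": "rainmaker-tl",
      "locale": "en_IN"
    }
  ]
}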
The objective of this service is to create a common point to manage all the email notifications being sent out of the platform. Notification email service consumes email requests from the Kafka notification topic and processes them to send it to a third party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of third party API integration
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send email notifications to the users
Support localised email.
egov-notification-mail is a consumer that listens to the egov.core.notification.email topic, reads the message and generates the email using the SMTP protocol. The service needs the sender's email to be configured. If the sender's email is not configured, the service gets the email id by internally calling the egov-user service. Once the email is generated, the content is localized by the egov-localization service, after which it is sent to the email id.
Deploy the latest version of the notification email service.
Make sure the consumer topic name for email service is added in deployment configs.
The email notification service is used to send out email notifications for all miscellaneous / adhoc services that citizens avail of from the ULBs.
Can perform service-specific business logic without impacting the other modules.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the client service should send email requests to the email notification consumer topic.
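A rough sketch of the kind of JSON a client service might push onto the email consumer topic; the field names here are assumptions, so check the email request contract before integrating:
{
  "email": {
    "emailTo": ["citizen@example.com"],
    "subject": "Trade License application update",
    "body": "Your application TL-2020-001 has been approved."
  }
}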
Configuring Report for a module requires adding the required report configuration as per the standard format and with the minimum development time.
The UI can have different types of filters such as date, dropdown etc., and even the sum of a column can be easily displayed in the UI. Pagination and downloading the report in PDF and XLS formats are already present in the report UI.
Type of Reports which can be configured :
Count of applications
Statewide collections
Application status
Cancelled receipts
Migrated records / Data entry records
The limitation of this framework is reports that require complex queries with multiple joins: since the report uses a query to fetch data from the database, it is resource-intensive and the response might be slow in those scenarios.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository to add the report configuration
User with permissions to add action and role-action mappings in MDMS
Showcase the data in the required and cleaner format.
The UI is rendered with the help of configuration in the report and there is no extra effort in building UI for different reports.
For implementation-specific report requirements, customization is easy and the turnaround time is less.
After adding a new report or editing an existing report configuration in the respective module, the report service needs to be restarted.
Create a reports.yml file and add report configuration as per standard format.
Add the action and role action in the mdms.
Add the github raw path of the report.yml file in the report.config file
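A trimmed-down sketch of what a report definition in reports.yml might look like; the exact keys should be taken from the standard format referenced above, and the names and query below are hypothetical:
ReportDefinitions:
  - reportName: ApplicationCountReport
    summary: Count of applications per ULB
    version: 1.0.0
    sourceColumns:
      - name: tenantid
        label: reports.tl.tenantid
        type: string
      - name: count
        label: reports.tl.count
        type: string
    searchParams:
      - name: fromDate
        label: reports.tl.fromDate
        type: epoch
    # the query that fetches the report data
    query: SELECT tenantid, count(*) FROM eg_tl_tradelicense GROUP BY tenantid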
User-OTP service handles the OTP for user registration, user login and password reset for a particular user.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
egov-user service is running
egov-localization service is running
egov-otp service is running
The user-otp service sends the OTP to the user on login requests, on password change requests and during new user registration.
Deploy the latest version of user-otp.
Make sure egov-otp is running.
Add Role-Action mapping for APIs.
User-OTP service handles the OTP for user registration, user login and password reset for a particular user.
Can perform user registration, login, password reset.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the host of the user-otp module should be overwritten in the helm chart.
/user-otp/v1/_send should be added as the endpoint for sending OTP to the user via SMS or email.
BasePath
/user-otp/v1/[API endpoint]
a) POST /_send
This method sends the OTP to the user via SMS or email based on the below parameters:
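A rough sketch of a /user-otp/v1/_send request body; parameter names such as type and userType are assumptions and should be confirmed against the service contract:
{
  "RequestInfo": {},
  "otp": {
    "mobileNumber": "9999999999",
    "tenantId": "pb",
    "type": "login",
    "userType": "citizen"
  }
}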
Following are the producer topics:
egov.core.notification.sms.otp: This topic is used to send the OTP to the user's mobile number.
org.egov.core.notification.email: This topic is used to send the OTP to the user's email id.
OTP Service is a core service that is available on the DIGIT platform. The service is used to authenticate the users on the platform. The functionality is exposed via REST API.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
egov-otp is called internally by the user-otp service which fetches the mobileNumber and feeds it to egov-otp to generate 'n' digit OTP.
Deploy the latest version of egov-otp service.
Add Role-Action mapping for APIs.
Below properties define the OTP configurations:
a) egov.otp.length: Number of digits in the OTP.
b) egov.otp.ttl: Controls the validity time frame of the OTP. The default value is 900 seconds. Another OTP generated within this time frame is also allowed.
c) egov.otp.encrypt: Controls whether the OTP is encrypted and stored in the table.
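For example, the three properties might be set as follows (the length and encrypt values are illustrative; 900 seconds is the stated default TTL):
# number of digits in the generated OTP
egov.otp.length=6
# validity window of the OTP in seconds
egov.otp.ttl=900
# whether the OTP is encrypted before being stored
egov.otp.encrypt=true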
The egov-otp service is used to authenticate the user in the platform.
Can perform user authentication without impacting the other module.
In the future, this application can be used in a standalone manner in any other platform that requires a user authentication system.
To integrate, the host of the egov-otp module should be overwritten in the helm chart.
/otp/v1/_create should be added as the create endpoint. This API creates the OTP and is an internal call from the v1/_send endpoint of the user-otp service, hence explicit calls are not needed.
/otp/v1/_validate should be added as the validate endpoint. This endpoint validates the OTP with respect to the mobile number.
/otp/v1/_search should be added as the search endpoint. This API searches the mobile number and OTP using the uuid. The uuid is nothing but the OTP reference number.
BasePath
/egov-otp/v1
Egov-otp service APIs contain the create, validate and search endpoints:
a) POST /otp/v1/_create
Creates the OTP. This API is an internal call from the v1/_send endpoint of the user-otp service, hence there is no need for any explicit calls.
b) POST /otp/v1/_validate
Validates the OTP with respect to the mobile number.
c) POST /otp/v1/_search
Searches the mobile number and OTP using the uuid. The uuid is the OTP reference number.
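As an illustration of the _validate call described above, a request body might look roughly like this; the field names (identity in particular) are assumptions to be verified against the API contract:
{
  "RequestInfo": {},
  "otp": {
    "otp": "123456",
    "identity": "9999999999",
    "tenantId": "pb"
  }
}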
The notification service can notify the user through SMS and email for their actions on DIGIT, as an acknowledgement that the action has been successfully completed.
For example: actions like property create, TL create, etc.
To send an SMS we need the help of two services: the one on which the user is taking action, and the SMS service.
To send an email we need the help of two services: the one on which the user is taking action, and the email service.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot.
Prior Knowledge of Kafka.
Prior Knowledge of localization service.
For a specific action of the user, he/she will get an SMS and email as an acknowledgment.
Users can get SMS and email based on the localization language.
If you want to trigger a notification for a specific action, the service has to listen to a particular topic, so that each time any record comes to that topic the consumer knows the action has been taken and can trigger a notification for it.
For example: if you want to trigger a notification for Property create, then the property service's NotificationConsumer class should listen to the topic egov.pt.assessment.create.topic, so that each time any record comes to that topic the NotificationConsumer knows the Property create action has been taken and can trigger a notification for it.
When any record comes into the topic, the service first fetches all the required data like user name, property id, mobile number, tenant id, etc. from the record fetched from the topic.
Then the message content is fetched from localization and the service replaces the placeholders with the actual data.
Then the record is put into the SMS topic on which the SMS service is listening.
The email service is also listening to the same topic as the SMS service.
The Encryption Service is used to secure sensitive data that is being stored in the database.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8.
Kafka server is up and running.
Encryption Service offers the following features :
Encrypt - The service encrypts the data based on given input parameters and data to be encrypted. The encrypted data is mandatory for a string data type.
Decrypt - The decryption will happen solely based on the input data (any extra parameters are not required). The encrypted data has the identity of the key used at the time of encryption, the same key will be used for decryption.
Sign - Encryption Service can hash and sign the data which can be used as the unique identifier of the data. This can also be used for searching the given value from a datastore.
Verify - Based on the input sign and the claim, it can verify if the given sign is correct for the provided claim.
Rotate Key - Encryption Service supports changing the key used for encryption. The old key will still remain with the service which will be used to decrypt old data. All the new data will be encrypted by the new key.
Following are the properties in the application.properties file in egov-enc-service which are configurable.
Deploy the latest version of the Encryption Service.
Add Role-Action mapping for APIs.
The Encryption service is used to encrypt sensitive data that needs to be stored in the database.
Can perform encryption without having to re-write encryption logic every time in every service.
To integrate, the host of the encryption service module should be overwritten in the helm chart.
/crypto/v1/_encrypt should be added as the endpoint for encrypting input data in the system.
/crypto/v1/_decrypt should be added as the decryption endpoint.
/crypto/v1/_sign should be added as the endpoint for providing a signature for a given value.
/crypto/v1/_verify should be added as the endpoint for verifying whether the signature for the provided value is correct.
/crypto/v1/_rotatekey should be added as the endpoint to deactivate the keys and generate new keys for a given tenant.
a) POST /crypto/v1/_encrypt
Encrypts the given input value/s OR values of the object.
b) POST /crypto/v1/_decrypt
Decrypts the given input value/s OR values of the object.
c) POST /crypto/v1/_sign
Provides a signature for a given value.
d) POST /crypto/v1/_verify
Check if the signature is correct for the provided value.
e) POST /crypto/v1/_rotatekey
Deactivate the keys for the given tenant and generate new keys. It will deactivate both symmetric and asymmetric keys for the provided tenant.
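For the _encrypt endpoint (item a above), a request might look roughly like the sketch below; the field names are assumptions based on common usage and should be checked against the service's API contract:
{
  "encryptionRequests": [
    {
      "tenantId": "pb",
      "type": "Normal",
      "value": { "mobileNumber": "9999999999", "name": "John Doe" }
    }
  ]
}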
The objective of PDF generation service is to bulk generate pdf as per requirement.
Before you proceed with the documentation, make sure the following pre-requisites are met -
All required data and format config file paths are added in the environment yml file
pdf-service is up and running
Provide functionality to download and print PDFs
Provide functionality to download and print bulk PDFs
Create data config and format config for a PDF according to product requirement.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment yml file
Deploy the latest version of pdf-service in a particular environment.
For Configuration details please refer to the Customizing PDF Receipts & Certificates document in Reference Docs
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk
Avoid regeneration
Support QR codes and Images
Functionality to specify a maximum number of records to be written in one PDF
Uploading generated PDF to filestore and return filestore id for easy access
The following are the steps for integrating TL certificate in UI.
In the footer.js file, which is present in /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/tradelicence/applyResource, create two objects (a download object and a print object) in the footerReview function.
Example
In tlCertificateDownloadObject give the proper label name and key for the PDF. In the link function get the object whose mapping is required for the PDF; in this case we want the license object. Call the function downloadCertificateForm (details about this function are described in the next step). Add the icon details which we want to use in the UI to represent that option. Do the same for tlcertificatePrintObject, the only difference being that we have to call the generateReceipt function. Again create the same two objects with similar content in the downloadPrintContainer function.
Mention the function names “downloadCertificateForm“ and “generateReceipt“ in the imports, because the functions are defined in /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/utils/index.js and /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/utils/receiptPDF.js
In index.js define the function which is responsible for calling the create API of the PDF service to create the respective PDF. In that function, mention the tenant ID and the proper key value, which is the same as the key mentioned in the data and format config. Also mention the URL /pdf-service/v1/_create with the action as get, and call the function downloadReceiptFromFilestoreID, which is responsible for calling the filestore service with the filestoreid and returning the URL for the PDF.
Example of function downloadCertificateForm
Example of function generateReceipt
(Note: All the API’s are in the same postman collection therefore the same link is added in each row)
This section walks you through the steps for adding a new language or setting up the default language on the DIGIT system.
This section provides a step by step guide to setting up workflows and configuring the workflows for DIGIT entities.
Always define the YAML for your APIs as the first thing, using the Open API 3 Standard
APIs path should be standardised as follows:
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity which is already existing
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints
Take most search parameters in the POST body only
If query params for search need to be supported then make sure to have the same parameters in POST body also and POST body should take priority over query params
Provide additional Details objects for _create and _update APIs so that the custom requirements can use these fields
Each API should have a RequestInfo object in the request body at the top level
Each API should have a ResponseInfo object in the response body at the top level
Mandatory fields should be minimum for the APIs.
minLength and maxLength should be defined for each attribute
Read-only fields should be called out
Use common models already available in the platform in your APIs. Ex -
(Citizen or Employee or Owner)
(Response sent in case of errors)
TODO: Add all the models here
For receiving files in an API, don’t use binary file data. Instead, accept the file store ids
If there is only one file to be uploaded and no persistence is needed, and no additional json data is to be posted, you can consider using direct file upload instead of using filestore id
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do’s and don’ts while following these principles.
This section contains docs that walk you through the various steps required to configure DIGIT services.
Through SMS and Emails necessary information/updates are communicated to the users on their various transactions on DIGIT applications. For example, when a Trade License application is initiated or forwarded or approved or payment is done in DIGIT system, the applicant and payer (if the payer is other than the applicant) will be informed about the status of Trade License application through SMS/Email. The language for SMS and Email can be set as per requirement/choice.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
User can receive Emails and SMS of necessary information/updates in the decided language.
The language can be decided by the end-users (either Citizen or Employee). End-users can select the language before logging in or after logging, from inbox page.
If the language is not chosen by the end-user, then the SMS/Email is received in the state-level configured default language, based on the state's requirement.
SMS and Email localization should be pushed to the database through the endpoints for all the languages added in the system. Localization format for SMS/Email:
Sample of SMS localisation for Trade License application initiation English localization
Hindi localization
The placeholders <1>, <2>, <3> will be replaced by the actual required values, which give important information to the applicant. For example, the message will be received by the applicant as: Dear Kamal, Your Trade License application number for Ramjhula Provisional Store has been generated. Your application no. is UK-TL-2020-07-10-002058 You can use this application number….
The default language for SMS and Email can be set by
Clicking on the preferred language from the available language buttons on the language selection page, which opens before the login page.
On the Citizen or Employee inbox page, the language can be selected from the drop-down in the right corner of the inbox title bar.
If the language is not chosen by the Citizen or Employee, then the SMS/Email is received in the default configured language. For example, if Hindi, English and Kannada are added as the three languages in the system for a state, and out of these three the state decides that Kannada should be the default, then Kannada is set as the default language in MDMS. So when the end-user does not choose any language, the SMS/Email is sent in Kannada.
The selected language key is sent as a parameter along with other required transaction parameters to the back end code.
In the back-end logic that sends the SMS/Email, the language key is checked, and based on the language key and the SMS unique key the message is fetched from the database.
The objective of this functionality is to provide a mechanism to trigger action on applications that satisfy certain predefined criteria. Looking at sample use cases provided by the product team, the majority of use cases can be summarised as performing action ‘X’ on applications in state ‘Y’ and have exceeded the state SLA by ‘Z’ days. We can write one query builder which takes this state ‘Y’ and SLA exceeded by ‘Z’ as search params and then we can perform action X on the search response. This has been achieved by defining an MDMS config like below:
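A hypothetical sketch of what such an escalation configuration could look like is given below; the key names are assumptions, and only the values RESOLVED, 1.0 and CLOSERESOLVEDCOMPLAIN come from the description that follows:
{
  "businessService": "PGR",
  "state": "RESOLVED",
  "stateSlaExceededBy": 1.0,
  "action": "CLOSERESOLVEDCOMPLAIN",
  "active": true
}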
In the above configuration, we define the condition for triggering the escalation of applications. The configuration triggers escalation for applications in the RESOLVED state which have exceeded the stateSLA by more than 1.0 day, and it performs the CLOSERESOLVEDCOMPLAIN action on those applications. Once the applications are escalated, the processInstances are pushed onto the pgr-auto-escalation topic. We have done a sample implementation for pgr-services, where we have updated the persister configuration to listen on this topic and update the complaint status accordingly.
The auto-escalation for businessService PGR will be triggered when the following API is called:
Note that the businessService is a path param. (For example, if the escalation has to be done for the tl-services NewTL workflow, the URL will be 'http://egov-workflow-v2.egov:8080/egov-workflow-v2/egov-wf/auto/NewTL/_escalate').
These APIs have to be configured in the cron job config so that they can be triggered periodically according to the requirements. Only users with the role AUTO_ESCALATE can trigger auto-escalation, so first create users with the state-level AUTO_ESCALATE role and then add that user in the userInfo of the requestInfo. This step has to be done because the cron job makes internal API calls and zuul won't enrich the userInfo.
For setting up the auto-escalation trigger, the workflow also needs to be updated. For example, to add an auto-escalate trigger on the RESOLVED state with the action CLOSERESOLVEDCOMPLAIN in the PGR businessService, we have to search the businessService, add the following action in the actions array of the RESOLVED state, and call the update API.
Suppose an application gets auto-escalated from state ‘X' to state 'Y’, employees can look at these escalated applications through the escalate search API. The following sample cURL can be used to search auto-escalated applications of PGR module belonging to Amritsar tenant -
Workflow is defined as a sequence of tasks that has to be performed on an application/Entity to process it. The egov-workflow-v2 is a workflow engine which helps in performing these operations seamlessly using a predefined configuration. We will discuss how to create this configuration for a new product in this document.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role-Action mappings are added for the BusinessService APIs
Create and modify workflow configuration according to the product requirements
Configure state-level as well as BusinessService-level SLA to efficiently track the progress of the application
Control access to perform actions through configuration
Deploy the latest version of egov-workflow-v2 service
Add businessService persister yaml path in persister configuration
Add Role-Action mapping for the BusinessService APIs
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level)
The Workflow configuration has 3 levels of hierarchy: a. BusinessService b. State c. Action The top-level object is BusinessService, it contains fields describing the workflow and list of States that are part of the workflow. The businessService can be defined at tenant level like pb.amritsar or at the state level like pb. All objects maintain an audit sub-object which keeps track of who is creating and updating and the time of it.
Each State object is a valid status for the application. The State object contains the information of the state and what actions can be performed on it.
The action object is the last object in the hierarchy, it defines the name of the action and the roles that can perform the action.
The workflow should always start from the null state as the service treats new applications as having null as the initial state. eg:
Whatever nextState is defined in the action object, the application will be sent to that state. It can be another forward state or even some backward state from where the application has already passed (generally, such actions are named SENDBACK).
SENDBACKTOCITIZEN is a special keyword for the action name. This action sends the application back to the citizen’s inbox for the citizen to take action on it. A new state should be created on which the citizen can take action, and it should be the nextState of this action. While calling this action from a module, the assignees should be enriched by the module with the uuids of the owners of the application.
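To make the hierarchy concrete, below is a heavily trimmed, illustrative sketch of a BusinessService definition (all values are hypothetical; the attribute meanings are listed in the table at the end of this document):
{
  "BusinessServices": [
    {
      "tenantId": "pb",
      "businessService": "NewTL",
      "business": "tl-services",
      "businessServiceSla": 432000000,
      "states": [
        {
          "state": null,
          "applicationStatus": null,
          "docUploadRequired": false,
          "actions": [
            { "action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN"] }
          ]
        },
        {
          "state": "APPLIED",
          "applicationStatus": "APPLIED",
          "docUploadRequired": false,
          "actions": [
            { "action": "APPROVE", "nextState": "APPROVED", "roles": ["TL_APPROVER"] }
          ]
        }
      ]
    }
  ]
}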
(Note: All the API’s are in the same postman collection therefore same link is added in each row)
Roles define the permissions of a user to perform a group of tasks. The tasks are created as API calls that perform certain actions when a request for those calls is sent by the system. Access permission is granted by mapping roles with APIs; users assigned those roles get access to the corresponding APIs.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
Knowledge on how to create a new API.
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do’s and don’ts while following those principles.
APIs path should be standardised as follows:
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity which is already existing
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints
Take most search parameters in POST body only
Further information about how a new API is developed can be found in this link
By adding new APIs (actions) and mapping roles with those APIs, permission to perform a certain task can be granted or restricted based on the requirement.
After mapping Roles with APIs, the MDMS service needs to be restarted to read the newly added data.
APIs are added in actions-test.json and are called actions. In MDMS, APIs are added in the file actions-test.json under the ACCESSCONTROL-ACTIONS-TEST folder.
API Sample -
APIs are added as action array elements, with the request url and other required details, under the array "actions-test"
Each action is defined as a key-value pair:
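An illustrative (hypothetical) entry in the actions-test array might look like this; only the url and the key-value structure are described above, and the other keys shown are assumptions:
{
  "id": 2096,
  "name": "Create Trade License",
  "url": "/tl-services/v1/_create",
  "displayName": "Create TL",
  "enabled": true
}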
Mapping of roles and APIs/actions is added in roleactions.json, under the folder ACCESSCONTROL-ROLEACTIONS. Sample mapping:
Role and API/action mapping is added as an array element under the array roleactions. Each mapping is defined with key-value pairs; the keys are rolecode, actionid, actioncode and tenantId.
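Using the keys listed above, a single mapping entry in the roleactions array might look roughly like this (the values are illustrative):
{
  "rolecode": "TL_APPROVER",
  "actionid": 2096,
  "actioncode": "",
  "tenantId": "pb"
}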
In this document, we will learn how to upload an APK to the Play Store and make it available for end users to download and use from the Play Store.
Before starting the process of uploading the APK to the Play Store, the following requirements are a must.
Make sure that a signed APK (a signed APK has a key generated, which is used to release different versions of the APK) is generated for the application that you want to upload to the Play Store.
Make sure that you have an account for the Google Play Console by agreeing to the terms and conditions; payment should also have been made for the account so that it is ready for uploading an APK to the Play Store.
Two screenshots of your app are required, and they must be at least 320 pixels wide and in PNG or JPEG format.
You must also add your high resolution app icon. It must be 512 by 512 pixels and it must be in 32-bit PNG format. This icon will be visible on the Google Play app’s page and in search results.
Next, a feature graphic image, which will be visible at the top of the Google Play app’s page. This image must be 1024 by 500 pixels, and may be in JPEG or 24-bit PNG format.
Also, prepare a small description of the app in four to five lines.
Deploying the APK to the Play Store enables users to download the APK from the Play Store and use it whenever needed. By uploading the APK to the Play Store, our app will be available to all end users around the world at their fingertips.
Now, we are going to learn step by step procedure of uploading apk to play store.
Open google play console by entering the url () and log in with the user credentials.
After login in the following screen can be seen.
Now, on the top-right, click on the Create Application button and you get a popup to enter the title of the APK. Refer to the screenshot below and click on Create.
Under the product details section, enter the description that we prepared in the beginning.
Under the assets section, we need to attach at least two screenshots of the application, a high-resolution thumbnail icon and a feature graphic image.
Under Categorization, select the application type and category.
Coming to the contact section, add the website URL, email and also a phone number if you wish to add one.
Next comes the privacy policy section, where you can enter the link of the privacy policy page and save it as a draft.
After saving as a draft, in the right side menu select the option “App Release”. On the App Release page, under the “Production Track”, click on Manage, then click on Create a Release in the next screen. Then on the next page, under “App signing by Google Play”, click Continue.
On the next page, under the Android App Bundles and APK section, add the APK you generated, enter the release name, add the description related to that APK inside the <en-US> tag, and save the entered data.
After that, click on “Calculate Rating” and then click the “Apply Rating” button.
Next is “Pricing and Distribution” in the right side menu. On this page we have the option to make the APK paid or free to download, select the countries in which the app needs to be available, answer the questionnaires that are asked, and click on Save Draft.
Finally, go to “App Release” again in the right side menu, click on the “Edit Release” button in the Production Track section, save the details and at the end click “Start rollout to production” and confirm.
The following screen acknowledges that your process has ended.
That is all about uploading an APK to the Play Store. You can check the status of the application in the right side menu under “All Applications”. It takes some hours for the APK to appear in the Play Store; wait for it to appear. We can also check the details of the APK in the Dashboard.
Roles define the permissions of a user to perform a group of tasks. For example, for a Trade License application, initiate, forward, approve and payment are tasks which require permission. A user assigned the role Citizen or Counter Employee can perform initiation and payment. A TL Document Verifier can forward the application, and only a user assigned the role named TLApprover can approve the application.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
With Roles, permission to perform a certain task can be restricted based on the requirement. For example, only user with Role TLApprover can approve the Trade License initiated application.
While creating an employee in the system from HRMS Admin, the roles can be assigned to the employees based on the requirement. The roles added in MDMS appear in the roles drop-down on the employee create screen.
In the DIGIT system, the workflow for a module can be implemented based on roles. For example, for the Trade License module, a Trade License application workflow as per the roles is: CounterEmployee/Citizen > TLDocVerifier > TLApprover > CounterEmployee/Citizen. Trade License application workflow based on roles:
After adding the new role, the MDMS service needs to be restarted to read the newly added data.
Roles are added in roles.json. In MDMS, roles are added in the file roles.json under the ACCESSCONTROL-ROLES folder. Sample roles:
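An illustrative element of the roles array is sketched below; the code TL_CEMP appears later in this section, while the name and description values here are made up:
{
  "code": "TL_CEMP",
  "name": "TL Counter Employee",
  "description": "Trade License counter employee"
}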
A role is added as an array element under the array named “roles”.
Each role is defined with three key-value pairs. keys are “code”, ”name” and “description”.
Localization needs to be pushed for all the roles added in roles.json
Sample Localization for roles In English:
In Hindi:
code "code": "ACCESSCONTROL_ROLES_ROLES_TL_CEMP", is the localization key for role. The key has three parts: a) ACCESSCONTROL_ROLES : It is folder and module name of MDMS, file roles.json in which roles are added. Hypen (- ) in name "ACCESSCONTROL-ROLES" is replaced with underscore ( _ ). b) ROLES : It is the role.json file name and array name under which roles as array elements are added. c)TL_CEMP : It is the unique role code.
If localization is not pushed for the roles then the key will appear in UI.
Our telemetry service is built upon the Sunbird telemetry SDK. mSeva’s frontend React app pushes the telemetry JS file along with every response that it sends. Now, whenever a user interacts with any of the mSeva pages in any way, e.g. entering values in a text box or when the window is loaded, the event gets triggered and is recorded.
Telemetry API collection can be found here -
The telemetry payload consists of an events array. In DIGIT, only 3 event types are used namely START, SUMMARY and END event types. START signifies that events have now started getting collected for a particular page. SUMMARY signifies a collection of all the data that is required to get collected for that particular page for e.g. time spent by the user on that page, times at which the user came into the page, left the page for another tab etc. are all recorded as part of the SUMMARY event. END event signifies the end of collecting events. All these events keep getting collected and are bundled and sent when either the URL changes or the END event occurs.
Now, this event data is captured and pushed onto a Kafka topic and goes into the processing pipeline where we make a topic to topic transfer of the data. So, the format for events payload is checked, then it is pushed to another topic for de-duplicating. Similarly, the messages are unbundled and enriched via topic to topic transfer i.e. pick from one topic and push to another. In this case, there are two sinks, namely the Amazon S3 bucket and ES bucket. To perform this topic to topic transfer of data across the various components of the processing pipeline, Kafka streams (KStreams) are used which are nothing but a consumer and producer coupled together. To push data to S3 bucket, secor service is being used which is a service developed by Pinterest to pick up JSON data and to push it onto configured S3 buckets. Secor does not always create a new JSON file for any new data that it gets. There are two triggers for it, namely, reaching a particular threshold size or reaching a particular time threshold. To push data to the ES sink, Kafka connect is being used. Now, instead of making single API calls every time a message is received, the messages are again combined and persisted onto the ES index via bulk insert.
According to TRAI’s regulation on unsolicited commercial communication, all telecoms must verify every SMS content before delivering it (Template scrubbing). For this, all the businesses using SMS need to register Entities, SenderIDs, SMS templates in a centralised DLT portal. Below are the steps to register the SMS template in a centralised DLT portal and to add the template in the SMS country portal (Service provider).
Step 1: Visit the Airtel DLT portal( ) and select your area of operation as Enterprise then click on next.
Step 2: Login into the portal by entering the proper credentials and OTP.
Please contact the HR manager for the credentials and the OTP.
Step 3: Now select the Template from option and then click on content templates.
Now click on Add button to go to the next section.
Step 4: Select the option mentioned in the image below.
Note: a) For placeholder text (dynamic text in the message), mention {#var#} in the message. Each {#var#} can contain 0-30 characters. If the dynamic text is supposed to be more than 30 characters long, then two {#var#} have to be mentioned side by side; the dynamic text can then be up to 60 characters. Example: Hi Citizen, Click on this link to pay the bill {#var#}{#var#} EGOVS. b) It is mandatory to mention EGOVS at the end of every message. c) Select the template message type as Regional if the message is in a language other than English.
After clicking on the Save button, the template is added to the portal. Now wait for the approval of the template. Once the template gets approved, save the template id and the message.
Step 5: Repeat process from Step 3 to Step 4 to register template in DLT portal.
Note: The below steps are to add approved templates in the SMS Country web portal. These steps might be different for other service providers but the data required for any service provider would be the same.
Please contact the HR manager for the credentials.
Step 7: Select option Features, then click on Manage button under Template section.
Then click on Add DLT Template button.
Step 8: Mention the template id and message of the approved template which we have saved earlier in step 4. And select sender id as EGOVFS.
After adding all the above details, click on the Add Template button. Now the DLT-approved template gets added into the SMS Country portal and is ready to use.
Select the ISLanguage check box if the message is in any language other than English.
Step 9: Repeat process from Step 7 to Step 8 to add approved template in SMS Country portal.
The objective of egov-searcher service is listed below.
To provide a one-stop framework for searching data from multiple data-source based on configuration (Postgres, Elasticsearch etc).
To create provision for implementing features based on ad-hoc requirements which directly or indirectly require a search functionality.
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior Knowledge of PostgreSQL.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of JSONQuery in Postgres (similar to PostgreSQL with a few aggregate functions).
Setup
Step 1: Write configuration as per your requirement. The structure of the config file is explained later in the same doc.
Step 2: Check-in the config file to a remote location preferably github, currently we check the files into this folder - for DEV and QA and this folder - for UAT.
Step 3: Provide the absolute path of the checked-in file to DevOps, to add it to the file-read path of egov-searcher. The file will be added to egov-searcher's environment manifest file for it to be read at the start-up of the application.
Step 4: Run the egov-searcher app, use the module name and definition name parameters from the configuration as path parameters in the URL of the search API to fetch the required data.
Definitions
Config file - A YAML (xyz.yml) file which contains configuration for search requirements.
API - A REST endpoint to fetch data based on the configuration.
Functionality
Uses Postgres JSONQuery instead of SQL Queries to fetch data from the Postgres DB.
JSONQuery:
JSONQuery is one of the exclusive features of Postgres. It provides a way of fetching data from the DB as JSON instead of in ResultSet format. This saves the time spent in mapping the ResultSet into the required JSON format on the functionality side.
JSONQueries are similar to SQL queries with certain functions to internally map the ResultSet to JSON. SQL queries (SELECT queries to be precise) are passed as parameters to these functions, the SQL Query returns the ResultSet which is transformed to the JSON by these functions.
Some of the functions extensively used are:
1) row_to_json: This function takes a query as a parameter and converts the result into JSON. However, the query must return only one row in the response. Note that JSONQuery functions operate on aliases, so the query must be mapped to an alias and the alias is passed to the function as a parameter.
Eg:
{"name": "egov", "age": "20"
}
2) array_agg: This function takes the output of row_to_json and aggregates it into an array of JSON. This is required when the query returns multiple rows in the response. The query is passed to row_to_json through an alias, and this is further wrapped within array_agg to ensure all the rows returned by the query are converted to a JSON array.
Eg:
[{"name": "egov", "age": "20"},{"name": "egov", "age": "20"},{"name": "egov", "age": "20"}]
3) array_to_json: This transforms the result of array_agg into a single JSON and returns it. This way, the response of a JSONQuery will always be a single JSON with the JSONArray of results attached to a key. This function is more for the final transformation of the result. The result so obtained can be easily cast to any other format or operated on using the PGObject instance exposed by Postgres.
Eg:
{"alias": [{"name": "egov", "age": "20"},{"name": "egov", "age": "20"},{"name": "egov", "age": "20"}]}
Provides an easy way to set up search APIs on the fly just by adding configurations without any coding effort.
Provides flexibility to build where clause as per requirement, with config keys for operators, conditional blocks and other query clauses.
Designed to use a specific URI for every search request thereby making it easy for role-based access control.
Fetches data in the form of JSON the format of which can be configured. This saves considerable effort in writing row mappers for every search result.
Add configs for different modules required for Searcher Service.
Deploy the latest version of Searcher Service.
Add Role-Action mapping for APIs.
The searcher service is used to search for data present in databases by running PSQL queries in the background.
Can perform service-specific business logic without impacting the other module.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the host of the searcher-service module should be overwritten in the helm chart.
searcher/{moduleName}/{searchName}/_get should be added as the search endpoint for the config added.
URI: The format of the search API to be used to fetch data using egov-searcher is as follows: /egov-searcher/{moduleName}/{searchName}/_get
Every search call is identified by a combination of moduleName and searchName. Here, 'moduleName' is the name of the module as mentioned in the configuration file and 'searchName' is the name of the definition within the same module that needs to be used for our search requirement.
For instance, If I want to search all complaints of PGR I will use the URI -
/egov-searcher/rainmaker-pgr-V2/serviceSearchWithDetails/_get
Body: The Body consists of 2 parts: RequestInfo and searchCriteria. searchCriteria is where the search params are provided as key-value pairs. The keys given here are the ones to be mentioned in the 'jsonPath' configuration within the 'searchParams' key of the config file.
For instance, If I want to search complaints of PGR where serviceRequestId is 'ABC1234' and tenantId is 'pb.amritsar' the API body will be:
"RequestInfo":{"apiId":"emp","ver":"1.0","ts":1234,"action":"create","did":"1","key":"abcdkey","msgId":"20170310130900","authToken":"57e2c455-934b-45f6-b85d-413fe0950870","correlationId":"fdc1523d-9d9c-4b89-b1c0-6a58345ab26d"},"searchCriteria":{"serviceRequestId":"ABC1234","tenantId":"pb.amritsar"}}
Configuration of the notification messages for a business service based on the channel for the same event.
For a specific action of the user, they get an SMS and email as an acknowledgement.
Users can get SMS, Event, and email-based on different channels.
The application allows sending different messages across different channels based on the user's actions.
To have this functionality for different business services, a channel names file was created and added to the MDMS data.
It contains information about the combination of different actions and channels for a particular business service. Example -
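A hypothetical sketch of one such entry, combining a business service, an action and the channels it should trigger (key names other than channelNames are assumptions):
{
  "service": "PGR",
  "action": "CREATE",
  "channelNames": ["SMS", "EVENT", "EMAIL"]
}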
The Different channels are
SMS: ID (Mobile Number)
Event
Email: ID (Email ID)
This feature enabled the functionality which would first check for the channels present in the file and send the notification accordingly.
For the SMS channel, it sends the SMS notification and logs “Sending SMS Notification”; for Event, it logs “Event Notification Sent”; and for Email, it logs “Email Notification Sent”.
To add/delete any particular channel for a business service -
Restart egov-mdms-service to apply the changes.
Configure your business service with the steps mentioned below to configure the business service.
For any record that comes into the topic, the service should first fetch all the required data like user name, property id, mobile number, tenant id, etc. from the record fetched from the topic.
Fetch the message content from localization; the service then replaces the placeholders with the actual data.
Place the record in whichever channel's topic the SMS/email service is listening to.
The main reason to set up base product localization is that the DIGIT system supports multiple languages. By setting up localization, we can have multiple language support in the UI, so that users can easily understand DIGIT operations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Before starting the localization setup, one should have knowledge of React and the eGov framework.
Before setting up localization, make sure that all the keys are pushed to the create API, and be prepared with the values that need to be added to the localization keys for the particular languages being added in the product.
Make sure where to add the localization in the code.
Once the localization is done, the user can view the DIGIT screens in their own language, which makes it easier to complete the whole application process, since DIGIT allows the user to select the language of their choice.
Once the key is added to the code as per the requirement, deployment can be done in the same way the code is normally deployed.
Select a label which needs to be localized from the product code. Here is the example code for a header before setting up localization.
As we see, the above supports only the English language. To set up localization for that header we need to change the code in the following manner.
We can see that the below code is added when we compare with the code before the localization setup.
{
labelName: "Trade Unit ",
labelKey: "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER"
},
Here the values for the key can be added in two ways: either by using the recently developed localization screen, or by updating the values for the keys via the create API using the Postman application.
Document uploader is used by ULB employees to upload documents that will then be visible to the citizens. In an effort to increase the engagement of citizens with mSeva platform, mSeva is providing this service to enable the citizens to view important documents related to their ULB such as acts, circulars, citizen charters etc.
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior Knowledge of PostgreSQL.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of JSONQuery in Postgres (similar to PostgreSQL with a few aggregate functions).
Employees can perform all four operations i.e. creating, searching, updating and deleting the documents whereas the citizens can only search for the created documents. For creating documents in a particular ULB, the document category that needs to be provided in the create API cURL has to be present in the document category MDMS file for the tenantId for which the document is getting uploaded.
A sample MDMS document category configuration file can be viewed here -
In this MDMS configuration file, ULB keys can be added and the allowed category types can be added in categoryList key.
Once a document is created in any ULB, the following attributes can be updated for that document -
ULB
Document name
Document category
Links
Attachments
Upon deleting any document, that document is soft-deleted from the records i.e. that document’s active field is set to false.
/egov-document-uploader/egov-du/document/_create - Takes RequestInfo and DocumentEntity in request body. Document entity has all the parameters related to the document being inserted.
/egov-document-uploader/egov-du/document/_update - Allows editing of attributes related to an already existing document. Searches document based on its uuid and updates attributes.
/egov-document-uploader/egov-du/document/_search - Allows searching existing documents in the database. Takes search parameters in the url and RequestInfo in request body.
/egov-document-uploader/egov-du/document/_delete - Soft deletes an existing document from the database i.e. it makes the document inactive. It takes the DocumentEntity that needs to be deleted in the request body along with RequestInfo object.
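A rough sketch of a _create request body; apart from the updatable attributes listed above, the exact field names of DocumentEntity are assumptions and should be verified against the API collection linked below:
{
  "RequestInfo": {},
  "DocumentEntity": {
    "tenantId": "pb.amritsar",
    "documentName": "Citizen charter 2021",
    "category": "circular",
    "documentLink": "https://example.org/citizen-charter.pdf"
  }
}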
Detailed API payloads for interacting with document service for all the four endpoints can be found in the following collection -
Link to the swagger documentation can be found below -
vizArray is to hold multiple visualizations
To add a new report, first add the file path in reportFileLocationsv1 (in this file, the paths of the report configuration files are stored).
Once the file path is added in the file reportFileLocationsv1, go to the folder /configs/reports/config. Create a new file and name it as given in the file reportFileLocationsv1.
A microservice which runs as a pipeline and validates, transforms and enriches the incoming data, and pushes the same to the Elasticsearch index. The ingest service fetches the data from the index (paymentsindex-v1) which is specified in the indexing service API as below. The ingest service reads the configuration files which are there with v1. All the configuration files will be there.
"idname": ""
"idname": ""
Roles are added in roles.json. In MDMS, roles are added in the file roles.json under the ACCESSCONTROL-ROLES folder. More about roles can be checked in the below link:
For the API information, please refer to the swagger yaml. Go to the link, click on File -> Import URL, and then add the raw URL of the API doc in the pop-up.
In case the URL is unavailable, please go to the egov-services git repo and find the yaml for egov-filestore.
minio.url=.backbone:9000(Minio server end point)
minio.url=
Pattern: This field is used only when ‘type’ is set to ‘singlevaluelist’. It is the external service URL combined with JSON paths separated by ‘|’. The first JSON path is for codes and the second for values. Values are shown to the user in the drop-down, and the code corresponding to the user-selected value is sent to the report service and used in the searchClauses mentioned in the last point. Ex:-
apiURL:
URL:
URL:
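As an illustration of the pattern format described above, a value could look like the line below; the host, endpoint and JSONPaths are purely illustrative placeholders and must be replaced with the actual external service URL and response paths.

```
http://<external-service-host>/<search-endpoint>|$.items.*.code|$.items.*.name
```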
Check in the config file to a remote location, preferably GitHub. Currently, the files are checked into this folder - for the dev and QA environments.
Add the module name and corresponding report path in the same format as used in the existing entries (see the sketch below).
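A minimal sketch of such an entry is shown below; it assumes the file maps a module key to the path of its report config, and both the module key and the path are placeholders.

```
tradelicense=file:///work-dir/configs/reports/config/tradelicense-reports.yml
```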
This service is a consumer: it reads from the Kafka queue and does not provide any API to be called directly, so there is no REST layer here. Producers willing to integrate with this consumer post a JSON onto the topic configured at ‘’. The notification-sms service reads from the queue and sends the SMS to the mentioned phone number using one of the configured SMS providers. A sketch of such a payload is shown below.
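A minimal sketch of the JSON a producer might post on that topic, assuming fields for the mobile number and the message text (the exact field names should be verified against the consumer's code and the SMS request contract):

```json
{
  "mobileNumber": "9999999999",
  "message": "Dear citizen, your application TL-2023-001 has been approved."
}
```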
Each persister config has a version attribute which signifies the service version. This version can contain a custom DSL, defined here:
For integration-related steps please refer to the document .
Roles are added in roles.json. In MDMS, roles are added in the file roles.json under the ACCESSCONTROL-ROLES folder. More about roles can be found in the link below:
After clicking Create you will be redirected to the page where the product details, graphic assets, categorization, etc. need to be entered.
Now go to “Content rating” in the right-side menu and click the Continue button, which redirects to the “Welcome to the Content Rating Questionnaire” page. Enter the email id, select your app category from the provided categories, fill in the questionnaire in the form that appears after selecting the app category, and click “Save Questionnaire”. You will receive an email after clicking “Save Questionnaire”.
Step 6: Now, log in to the SMS Country portal ( ) by entering the proper credentials.
For more details about JSONQuery, please check:
Update the channelNames array in the file and add/delete the channels you want a business service’s action to have.
Add the details about the particular action and the channels you want that action to trigger in the file in the egov-mdms-data repository.
Title
Link
DSS Backend Configuration Manual
DSS Dashboard - Technical Document for UI
DSS Technical Documentation
Attribute Name
Description
tenantId
The tenantId (ULB code) for which the workflow configuration is defined
businessService
The name of the workflow
business
The name of the module which uses this workflow configuration
businessServiceSla
The overall SLA to process the application (in milliseconds)
state
Name of the state
applicationStatus
Status of the application when in the given state
docUploadRequired
Boolean flag representing whether documents are required to enter the state
isStartState
Boolean flag representing if the state can be used as starting state in workflow
isTerminateState
Boolean flag representing if the state is the leaf node or end state in the workflow configuration. (No Actions can be taken on states with this flag as true)
isStateUpdatable
Boolean flag representing whether data can be updated in the application when taking action on the state
currentState
The current state on which action can be performed
nextState
The resultant state after action is performed
roles
A list containing the roles which can perform the actions
auditDetails
Contains fields to audit edits on the data (createdTime, createdBy, lastModifiedTime, lastModifiedBy)
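Putting these attributes together, a trimmed workflow configuration could look like the sketch below. The values are illustrative and only one state with one action is shown; the "states", "actions" and "action" keys are assumptions about the nesting, so confirm them against the workflow service documentation linked next.

```json
{
  "tenantId": "pb",
  "businessService": "NewTL",
  "business": "tl-services",
  "businessServiceSla": 432000000,
  "states": [
    {
      "state": "APPLIED",
      "applicationStatus": "APPLIED",
      "docUploadRequired": true,
      "isStartState": true,
      "isTerminateState": false,
      "isStateUpdatable": true,
      "actions": [
        {
          "currentState": "APPLIED",
          "action": "FORWARD",
          "nextState": "DOCVERIFIED",
          "roles": ["TL_DOC_VERIFIER"]
        }
      ]
    }
  ]
}
```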
Title
Link
Workflow Service Documentation
Setting Up Workflows
Link
_create
_update
_search
1
id
Numeric
Yes
A unique id that identifies action.
2
name
Text
No
A short narration provided to the action.
3
url
Text
Yes
It is the endpoint of API or type like url or card.
4
displayName
Text
No
It is the display name.
5
orderNumber
Numeric
Yes
A number to represent order to display in UI
6
parentModule
Text
No
Code of the service referred to as parent
7
enabled
boolean
Yes
To enable or disable display in UI.
8
serviceCode
Text
No
Code of the service to which API belongs.
9
code
Text
No
10
path
Text
No
11
navigationUrl
Text
Yes
Url to navigate in UI
12
leftIcon
Icon
No
13
rightIcon
Icon
No
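A single entry in actions-test.json built from the fields above might look like the sketch below; the id, URL, names and codes are illustrative values only.

```json
{
  "id": 1,
  "name": "Create TL Application",
  "url": "/tl-services/v1/_create",
  "displayName": "Create Trade License",
  "orderNumber": 1,
  "parentModule": "",
  "enabled": false,
  "serviceCode": "tl-services",
  "code": "",
  "path": "",
  "navigationUrl": "",
  "leftIcon": "",
  "rightIcon": ""
}
```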
1
rolecode
Yes
The unique code of the role, defined in roles.json, which requires mapping for the API.
2
actionid
Yes
The unique id of the API/action which is defined in actions-test.json and which is required to be mapped with the role.
3
actioncode
No
The code of the API/action which is defined in actions-test.json and which is required to be mapped with the role.
4
tenantid
Yes
tenant id of state.
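A corresponding role-action mapping entry in roleactions.json could then be sketched as below; the role code, action id and tenant id are illustrative values.

```json
{
  "rolecode": "TL_CEMP",
  "actionid": 1,
  "actioncode": "",
  "tenantid": "pb"
}
```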
Title
Link
Sample actions-test.json
Sample roles.json
Sample roleactions.json Roles APIs mapping
Title
Link
DSS Backend Configuration Manual
DSS Dashboard - Technical Document for UI
DSS Technical Documentation
1
CounterEmployee
Initiates the TL application for Citizen from counter. Initiated TL application goes to TLDocVerifier inbox.
2
Citizen
Initiates the TL application. Initiated TL application goes to TLDocVerifier inbox.
3
TLDocVerifier
User with role TLDocVerifier can forward or reject the TL application after verifying the initiated application. The rejected application shows for re-submission in initiator inbox. The forwarded application goes to TLApprover inbox.
4
TLApprover
TLApprover can approve or reject based on the requirement. The rejected application goes back to TLDocVerifer for re-verification. The approved application shows for payment pending in initiator inbox.
5
CounterEmployee
Once the initiated application is approved by the user with role TLApprover, CounterEmployee can do the payment and download the receipt.
6
Citizen
Once the initiated application is approved by the user with role TLApprover, Citizen can do the payment and download the receipt.
1
code
Alphanumeric
64
Yes
A unique code that identifies the user role name.
2
name
Text
256
Yes
The name of the user role. While creating an employee, a role can be assigned to the individual employee.
3
description
Text
256
No
A short narration provided to the user role name.
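An entry in roles.json using the fields above might look like the sketch below; the code, name and description are illustrative values.

```json
{
  "code": "TL_APPROVER",
  "name": "TL Approver",
  "description": "Approves or rejects trade license applications"
}
```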
Title
Link
Sample roles.json
Reference link
1
id
Numeric
Yes
A unique id that identifies an action.
2
name
Text
No
A short narration provided to the action.
3
url
Text
Yes
It is the request URL of API call.
4
displayName
Text
No
It is the display name.
5
enabled
boolean
Yes
To enable or disable display in UI.
6
servicecode
Text
No
Code of the service to which API belongs.
1
rolecode
Yes
The unique code of the role, defined in roles.json, which requires mapping for the API.
2
actionid
Yes
The unique id of the API/action which is defined in actions-test.json and which is required to be mapped with the role.
3
actioncode
No
The code of the API/action which is defined in actions-test.json and which is required to be mapped with the role.
4
tenantid
Yes
tenant id of state.
Title
Link
Sample actions-test.json
Sample roles.json
Sample roleactions.json Roles APIs mapping
Title
Link
API Swagger Documentation
Local Setup
Property
Value
Remarks
egov.core.notification.sms
It is the topic name to which the notification sms consumer would subscribe. Any module wanting to integrate with this consumer should post data to this topic only.
sms.provider.class
Generic
This property decides which SMS provider is to be used by the service to send messages. Currently, Console, MSDG and Generic have been implemented.
sms.provider.contentType
application/x-www-form-urlencoded
To configure a form-data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively
sms.provider.requestType
POST
Property to configure the http method used to call provider
sms.provider.url
URL of the provider. This will be given by the SMS provider only.
sms.provider.username
egovsms
Username as provided by the provider which is passed during the API call to the provider.
sms.provider.password
abc123
Password as provided by the provider which is passed during the API call to the provider. This has to be encrypted and stored
sms.senderid
EGOV
SMS sender id provided by the provider, this will show up as the sender on receiver’s phone.
sms.config.map
{'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
Map of parameters to be passed to the provider API. This is provider-specific. $username maps to sms.provider.username, $password maps to sms.provider.password, $senderid maps to sms.senderid, $mobileno maps to the mobile number from the Kafka-fetched message, and $message maps to the message from the Kafka-fetched message. Any other $<name> variable is first looked up in sms.category.map, then in application.properties, and then in environment variables (with the name in full upper case and '_' replacing '-', space, etc.).
sms.category.map
{'mtype': {'*': 'abc', 'OTP': 'def'}}
Used to replace values in sms.config.map based on the message category; in the example, 'mtype' resolves to 'def' for OTP messages and to 'abc' otherwise.
sms.blacklist.numbers
5*,9999999999,88888888XX
For blacklisting, a comma-separated list of numbers or number patterns. In patterns, use X to match any single digit and * to match any number of digits.
sms.whitelist.numbers
5*,9999999999,88888888XX
For whitelisting, a comma-separated list of numbers or number patterns. In patterns, use X to match any single digit and * to match any number of digits.
sms.mobile.prefix
91
add the prefix to the mobile number coming in the message queue
Title
Link
SMS Template Approval Process
Variable Name
Default Value
Description
persister.bulk.enabled
false
Switch to turn on or off the bulk kafka consumer
persister.batch.size
100
The batch size for bulk update
Title
Link
API Postman Collection
Input Field
Description
Mandatory
Data Type
tenantId
Unique id for a tenant.
Yes
String
mobileNumber
Mobile number of the user
Yes
String
type
OTP type ex: login/register/password reset
Yes
String
userType
Type of user ex: Citizen/Employee
No
String
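Based on the input fields above, a request payload could be sketched as below. The wrapping "otp" key and the RequestInfo envelope are assumptions, so check the swagger documentation linked next for the exact contract.

```json
{
  "RequestInfo": {},
  "otp": {
    "tenantId": "pb.amritsar",
    "mobileNumber": "9999999999",
    "type": "login",
    "userType": "citizen"
  }
}
```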
Title
Link
API Swagger Documentation
Title
Link
NotificationConsumer
Title
Link
Adding a New Language to the DIGIT System. You can refer to the link provided for how languages are added in DIGIT.
Property
Default Value
Remarks
master-password
asd@#$@$!132123
Master password for encryption/ decryption.
master.salt
qweasdzx
A salt is random data that is used as an additional input to a one-way function that hashes data, a password or passphrase.
master.initialvector
qweasdzxqwea
An initialization vector is a fixed-size input to a cryptographic primitive.
size.key.symmetric
256
Default size of Symmetric key.
size.key.asymmetric
1024
Default size of Asymmetric key.
size.initialvector
12
Default size of Initial vector.
Title
Link
API Swagger Documentation
Title
Link
PDF Generation service technical documentation
Customizing PDF Receipts & Certificates
API Swagger Documentation
Link
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
Details coming soon...
Title
Link
report config folder
Title
Link
Sample report.yml file
Sample report.config file
The objective of the PDF generation service is to generate PDFs in bulk as per requirement. This document contains details about how to create the config files required to generate a new PDF.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of JavaScript.
Prior knowledge of Node.js platform.
JSONPath for filtering required data from json objects.
Provides flexibility to customise the PDF as per the requirement.
Supports localisation.
Provides functionality to add an image or QR code in the PDF.
Provides functionality to call an external service and use its response while creating the PDF.
Create data config and format config for a PDF according to product requirement.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment yml file
Deploy the latest version of pdf-service in a particular environment.
Config file: A JSON config file which contains the configuration for a PDF requirement. For any PDF requirement, two config files have to be added to the service.
The PDF generation service reads these files at start-up to support PDF generation for all configured modules.
Attribute
Description
key
The key for the PDF; it is used as a path parameter in the URL to identify which PDF has to be generated.
baseKeyPath
The json path for the array object that we need to process.
entityIdPath
The JSONPath for the unique field which is stored in the DB. That unique field value is mapped to the file-store id, so the PDF created earlier can be looked up directly by the unique field value and there is no need to create the PDF again.
Direct Mapping
In direct mapping, we define the variable whose value can be fetched from the array object which we extracted using baseKeyPath.
ExternalApi Mapping
The externalApi mapping is used only if values are needed from another service's response. In the externalApi mapping, the API endpoint has to be set properly with the correct query parameters.
Derived mapping
In derived mapping, the value of the variable defined here is obtained from an arithmetic operation between a direct mapping variable and an externalApi mapping variable.
Qr code Config mapping
This mapping is used to draw QR codes in the PDFs. The text to be shown after scan can be a combination of static text and variables from direct and externalApi mappings.
Sample structure of variable definition in data config
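A minimal sketch of how the mapping sections described above could be laid out is shown below. The exact nesting of keys varies between configs, so treat this as an assumption and use the data-config folder linked at the end of this page as the source of truth.

```json
{
  "key": "tlcertificate",
  "config": {
    "baseKeyPath": "$.Licenses.*",
    "entityIdPath": "$.id",
    "mappings": [
      {
        "direct": [
          { "variable": "applicationNumber", "value": { "path": "$.applicationNumber" } }
        ],
        "external": [],
        "derived": [],
        "qrcodeConfig": []
      }
    ]
  }
}
```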
Example to show date in PDF
If the format field is not specified in the date variable declaration, the date is shown in the PDF with the default format DD/MM/YYYY. For more details refer to this page: Unix-Timestamp
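A date variable declaration could then look like the sketch below, where the format field is optional and defaults to DD/MM/YYYY; the "type" field name is an assumption for illustration.

```json
{
  "variable": "issuedDate",
  "value": { "path": "$.issuedDate" },
  "type": "date",
  "format": "DD/MM/YYYY"
}
```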
Example of external API calling to MDMS service
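An externalApi mapping that pulls a value from the MDMS search API might be sketched as follows; the queryParam and responseMapping key names and the JSONPath into the MDMS response are illustrative assumptions.

```json
{
  "path": "http://egov-mdms-service:8080/egov-mdms-service/v1/_search",
  "queryParam": "tenantId=$.Licenses[0].tenantId",
  "responseMapping": [
    { "variable": "ulbGrade", "value": "$.MdmsRes.tenant.tenants[0].city.ulbGrade" }
  ]
}
```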
Example of adding Qr Code
For adding a QR code there is a separate mapping named “qrcodeConfig” in the data config. This mapping can use variables defined in the “direct” and “external” mappings along with static text. The information shown on QR code scan is defined as the value. The variable defined in this mapping can be used directly in the format config as an image. Ex:-
Data Config for Qr Code:
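The qrcodeConfig mapping itself could be sketched as below, combining static text with variables from the direct/external mappings; the resulting variable (qrcodeimage here) is then referenced as an image in the format config. Field names are illustrative.

```json
{
  "qrcodeConfig": [
    {
      "variable": "qrcodeimage",
      "value": "Application No: {{applicationNumber}}, Issued Date: {{issuedDate}}"
    }
  ]
}
```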
Attribute
Description
key
The key for the PDF; it is used as a path parameter in the URL to identify the PDF that has to be generated.
Content
In this section, the view of the PDF is set. What has to appear on the PDF is declared here; it is just like creating a static HTML page. The variables defined in the data config are declared here and placed in position as per the requirement. We can also create a table here and set the variables as per the requirement.
Style
This section is used to style the components, set the alignment and more. Basically, it is like CSS for styling the HTML page.
Example of adding footer in PDF (adding page number in the footer)
The position of the page number in the footer is configurable. For more detail refer to this document: Header and Footer
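Since the format config follows pdfmake's document-definition structure, a footer with a page number could be sketched roughly as below. The placeholders for the current page and the page count are assumptions, so confirm the exact tokens against the Header and Footer document referenced above.

```json
{
  "footer": {
    "columns": [
      { "text": "", "width": "*" },
      { "text": "Page [currentPage] of [pageCount]", "alignment": "right", "margin": [0, 10, 20, 0] }
    ]
  }
}
```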
Example of adding Qr Code
Format Config for Qr Code
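In the format config, the QR code variable defined in qrcodeConfig can then be placed like any other image element; a minimal sketch (the width and alignment values are illustrative) is:

```json
{
  "image": "{{qrcodeimage}}",
  "width": 100,
  "alignment": "right"
}
```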
For Integration with UI, please refer to the links in Reference Docs
Title
Link
PDF Generation service technical documentation
Steps for Integration of PDF in UI for download and print PDF
API Swagger Documentation
Link
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
Steps for setting up the environment and running the script file to get a fresh copy of the required Datamart CSV file.
(One Time Setup)
Install kubectl. Step 1: Go through the Kubernetes documentation page to install and configure kubectl. The following links are useful: Kubernetes Installation Doc, Kubernetes Ubuntu Installation.
After installing, type the below command to check the version installed in your system:
kubectl version
Step 2: Install aws-iam-authenticator (see Installing aws-iam-authenticator - Amazon EKS).
Step 3: After installing, you need access to a particular environment cluster.
Go to the $HOME/.kube folder:
cd
cd .kube
Open the config file and replace its content with the environment cluster config file. (Config file will be attached)
gedit config
Copy-paste the content from the config file provided to this config file opened and save the file.
2. Exec into the pod:
kubectl exec --stdin --tty playground-584d866dcc-cr5zf -n playground -- /bin/bash
(Replace the pod name depending on what data you want.
Refer to Table 1.2 for more information)
3. Install Python and check to see if it installed correctly:
apt install python3.8
python --version
4. Install pip and check to see if it installed correctly:
apt install python3-pip
pip3 --version
5. Install psycopg2 and Pandas:
pip3 install psycopg2-binary pandas
Note: If this doesn’t work, try this command:
pip3 install --upgrade pip
and then run the step 5 command again.
(Every time you want a datamart with the latest data available in the pods)
1. Sending the python script to the pod:
tar cf - /home/priyanka/Desktop/mcollect.py | kubectl exec -i -n playground playground-584d866dcc-cr5zf -- tar xf - -C /tmp
Note: Replace the file path (/home/priyanka/Desktop/mcollect.py) with your own file path (/home/user_name/Desktop/script_name.py)
Note: Replace the pod name depending on what data you want.
(Refer to Table 1.2 for more information on pod names)
2. Exec into the pod:
kubectl exec --stdin --tty playground-584d866dcc-cr5zf -n playground -- /bin/bash
(Note: Replace the pod name depending on what data you want.
kubectl exec --stdin --tty <your_pod_name> -n playground -- /bin/bash
Refer to Table 1.2 for more information)
3. Move into the tmp directory and then move into the directory your script was in:
cd tmp
cd home/priyanka/Desktop
For example:
cd home/<your_username>/Desktop
4. List the files there:
ls
(Python script file should be present here)
(Refer Table 1.1 for the list of script file names for each module)
5. Run the python script file:
python3 ws.py
(name of the python script file will change depending on the module)
(Refer Table 1.1 for the list of script file names for each module)
6. Outside the pod shell, in your home directory, run this command to copy the CSV file/files to your desired location:
kubectl cp playground/playground-584d866dcc-cr5zf:/tmp/mcollectDatamart.csv /home/priyanka/Desktop/mcollectDatamart.csv
(The list of CSV file names for each module will be mentioned below)
7. The resulting CSV file is ready to use.
Jupyter
Excel
Using Jupyter is command-based.
It takes some time to get used to.
Easy to use thanks to the graphical user interface (GUI); learning formulas is fairly easy.
Jupyter requires the Python language for data analysis, hence a steeper learning curve.
Negligible previous knowledge is required.
Equipped to handle lots of data quickly, with the bonus of easy access to databases like Postgres and MySQL where the actual data is stored.
Excel can only handle so much data. Scalability becomes difficult and messy.
More Data = Slower Results
Summary:
Python is harder to learn because you have to download many packages and set up the correct development environment on your computer. However, it provides a big leg up when working with big data and creating repeatable, automatable analyses and in-depth visualizations.
Summary:
Excel is best when doing small, one-time analyses or creating basic visualizations quickly. It is relatively easy to become an intermediate user without much experience due to its GUI.
Watch this video
OR
Follow these steps ->
(One Time Setup)
Install Python and check to see if it installed correctly
apt install python3.8
python --version
Install pip and check to see if it installed correctly:
apt install python3-pip
pip3 --version
3. Install jupyter:
pip3 install notebook
(Whenever you want to run Jupyter lab)
To run jupyter lab
jupyter notebook
2. To open a new notebook
New -> Python3 notebook
3. To open an existing notebook
Select File -> Open
Go to the directory where your sample notebook is.
Select that notebook (Ex: sample.pynb)
Opening an existing notebook
After opening
Module Name
Script File Name (With Links)
Datamart CSV File Name
Datamart CSV File Name
PT
ptDatamart.csv
W&S
waterDatamart.csv
sewerageDatamart.csv
PGR
pgrDatamart.csv
mCollect
mcollectDatamart.csv
TL
tlDatamart.csv
tlrenewDatamart.csv
Fire Noc
fnDatamart.csv
OBPS (Bpa)
bpaDatamart.csv
Module Name
Pod Name
Description
PT
playground-865db67c64-tfdrk
Punjab Prod Data in UAT Environment
W&S
playground-584d866dcc-cr5zf
QA Data
PGR
Local Data
Data Dump
mCollect
playground-584d866dcc-cr5zf
QA Data
TL
playground-584d866dcc-cr5zf
QA Data
Fire Noc
playground-584d866dcc-cr5zf
QA Data
OBPS (Bpa)
playground-584d866dcc-cr5zf
QA Data
Format Config file: This config file defines the format of the PDF. In the format config, we define the UI structure (e.g. CSS, layout, etc.) for the PDF as per pdfmake syntax. In the PDF UI, the places where values are to be picked from the request body are written as “{{variableName}}” as per the ‘mustache.js’ standard and are replaced by this templating engine. Ex: https://github.com/egovernments/configs/tree/master/pdf-service/format-config
Data Config file: This file contains the mappings to pick data from the request body (and from an external service call response, if there is any) and the variables which define where these values are to be substituted in the format by the templating engine (mustache.js). Every variable declared in the format config file must be defined in the data config file. Ex: https://github.com/egovernments/configs/tree/master/pdf-service/data-config