The Persister Service persists data in the database synchronously, providing very low latency. The queries used to insert/update data in the database are written in a YAML file. The values to be inserted are extracted from the incoming JSON using JSONPaths defined in the same YAML configuration. Below is a sample configuration which inserts data into a couple of tables.
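The original sample is not reproduced on this page; the sketch below is a minimal illustration of the persister YAML format, assuming hypothetical column names for the PGR tables mentioned in the text.

```yaml
serviceMaps:
  serviceName: rainmaker-pgr
  mappings:
    - version: 1.0
      description: Persists PGR service requests
      fromTopic: save-pgr-request
      isTransaction: true
      queryMaps:
        - query: INSERT INTO eg_pgr_service_v2(id, tenantid, servicecode, additionaldetails, createdtime) VALUES (?, ?, ?, ?, ?);
          basePath: $.service
          jsonMaps:
            - jsonPath: $.service.id
            - jsonPath: $.service.tenantId
            - jsonPath: $.service.serviceCode
            - jsonPath: $.service.additionalDetail
              type: JSON
              dbType: JSONB
            - jsonPath: $.service.auditDetails.createdTime
        - query: INSERT INTO eg_pgr_address_v2(id, tenantid, serviceid, city) VALUES (?, ?, ?, ?);
          basePath: $.service.address
          jsonMaps:
            - jsonPath: $.service.address.id
            - jsonPath: $.service.address.tenantId
            - jsonPath: $.service.id
            - jsonPath: $.service.address.city
```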
The above configuration inserts data published on the Kafka topic save-pgr-request into the tables eg_pgr_service_v2 and eg_pgr_address_v2. Similarly, a configuration can be written to update data. Following is a sample configuration:
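A minimal sketch of an update configuration, assuming a hypothetical topic name (update-pgr-request) and illustrative columns; the actual topic and columns depend on the module.

```yaml
serviceMaps:
  serviceName: rainmaker-pgr
  mappings:
    - version: 1.0
      description: Updates PGR service requests
      fromTopic: update-pgr-request        # hypothetical topic name
      isTransaction: true
      queryMaps:
        - query: UPDATE eg_pgr_service_v2 SET applicationstatus = ?, lastmodifiedtime = ? WHERE id = ?;
          basePath: $.service
          jsonMaps:
            - jsonPath: $.service.applicationStatus
            - jsonPath: $.service.auditDetails.lastModifiedTime
            - jsonPath: $.service.id
```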
The above configuration is used to update the data in the tables. Similarly, an upsert operation can be performed using the ON CONFLICT clause in PostgreSQL. The following table describes each field in the configuration.
serviceName - The module name to which the configuration belongs.
version - Version of the config.
description - Detailed description of the operations performed by the config.
fromTopic - Kafka topic from which data has to be persisted in the DB.
isTransaction - Flag to enable/disable performing the operations as a single transaction.
query - Prepared statement used to insert/update data in the DB.
basePath - JsonPath of the object that has to be inserted/updated.
jsonPath - JsonPath of the fields that have to be inserted into the table columns.
type - Type of the field.
dbType - DB type of the column into which the field is to be inserted.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Details coming soon...
Necessary information/updates are communicated to users through SMS and Email for their various transactions on DIGIT applications. For example, when a Trade License application is initiated, forwarded, approved, or paid for in the DIGIT system, the applicant and the payer (if the payer is not the applicant) are informed about the status of the Trade License application through SMS/Email. The language for SMS and Email can be set as per requirement/choice.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Users can receive Emails and SMS with the necessary information/updates in the chosen language.
The language can be decided by the end-users (either Citizen or Employee). End-users can select the language before logging in, or after logging in from the inbox page.
If the end-user does not choose a language, SMS/Email is received in the state-level default language configured as per the state's requirement.
SMS and Email localization should be pushed to the database through the endpoints for all the languages added in the system. Localization format for SMS/Email:
Sample SMS localisation for Trade License application initiation (English and Hindi):
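The original samples are not included on this page; below is an illustrative localisation entry following the placeholder format described next (the code, module and locale values are indicative only, not the actual keys used by the product).

```json
{
  "messages": [
    {
      "code": "tl.counter.initiate",
      "message": "Dear <1>, Your Trade License application number for <2> has been generated. Your application no. is <3>",
      "module": "rainmaker-tl",
      "locale": "en_IN"
    },
    {
      "code": "tl.counter.initiate",
      "message": "<Hindi translation of the same message, using the same <1>, <2>, <3> placeholders>",
      "module": "rainmaker-tl",
      "locale": "hi_IN"
    }
  ]
}
```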
The placeholders <1>, <2>, <3> will be replaced by the actual required values, which give important information to the applicant. For example, the message will be received by the applicant as: Dear Kamal, Your Trade License application number for Ramjhula Provisional Store has been generated. Your application no. is UK-TL-2020-07-10-002058 You can use this application number….
The default language for SMS and Email can be set by
Clicking on the preferred language from the available language buttons on the language selection page, which opens before the login page.
On the Citizen or Employee inbox page, selecting the language from the drop-down in the right corner of the inbox title bar.
If the language is not chosen by the Citizen or Employee, then SMS/Email is received in the default configured language. For example, if a state has added Hindi, English and Kannada in the system and decides that Kannada should be the default, then Kannada is set as the default language in MDMS. So when the end-user does not choose any language, SMS/Email is sent in Kannada.
The selected language key is sent as a parameter along with the other required transaction parameters to the back-end code.
In the back end, the SMS/Email logic checks the language key, and based on the language key and the SMS unique key, the message is fetched from the database.
Always define the YAML for your APIs as the first step, using the OpenAPI 3 standard (https://swagger.io/specification/)
APIs path should be standardised as follows:
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity which is already existing
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints
Take most search parameters in the POST body only
If query params for search need to be supported, make sure to have the same parameters in the POST body as well; the POST body should take priority over query params
Provide additional Details objects for _create and _update APIs so that the custom requirements can use these fields
Each API should have a RequestInfo object in request body at the top level
Each API should have a ResponseInfo object in response body at the top level
Keep the number of mandatory fields to a minimum for the APIs.
minLength and maxLength should be defined for each attribute
Read-only fields should be called out
Use common models already available in the platform in your APIs. Ex -
For receiving files in an API, don’t use binary file data. Instead, accept the file store ids
If there is only one file to be uploaded and no persistence is needed, and no additional json data is to be posted, you can consider using direct file upload instead of using filestore id
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do's and don'ts while following these principles. A minimal OpenAPI sketch illustrating the path conventions above follows.
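The sketch below is an illustration only, assuming a hypothetical service called demo with an entity called entity; the schemas are left as placeholder objects rather than an actual DIGIT contract.

```yaml
openapi: 3.0.0
info:
  title: Demo entity API        # placeholder service and entity names
  version: 1.0.0
paths:
  /demo/entity/v1/_create:
    post:
      summary: Create the entity
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                RequestInfo:
                  type: object   # common RequestInfo model at the top level
                Entity:
                  type: object
      responses:
        '202':
          description: Accepted; the response carries ResponseInfo at the top level
  /demo/entity/v1/_search:
    post:
      summary: Search entities; search criteria go in the POST body
      responses:
        '200':
          description: OK
```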
The DIGIT system supports multiple languages. To add a new language, configure it in MDMS.
Before proceeding with the configuration, following are the pre-requisites -
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
Users can view the web pages of the DIGIT application in the language of their choice by selecting it from the available languages.
SMS and Emails with information about transactions on the DIGIT application can be received in a language based on the selection.
After adding the new language, the MDMS service needs to be restarted to read the newly added data.
A new language is added in StateInfo.json. In MDMS, the file StateInfo.json under the common-masters folder holds the details of the languages to be added.
The label text is displayed in the UI for language selection. The value text is used as the key to refer to the language.
A language is added as an array element under the array named "languages". Each language element is a label and value pair. By default, the English language is added. Other languages can be added as additional languages which the system will support. For the system to support more than one language, those languages are added in StateInfo.json as shown below.
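An illustrative snippet of the languages array in StateInfo.json; the label/value pairs and the locale-style values are indicative, and the defaultLanguage key is explained further below.

```json
{
  "languages": [
    { "label": "ENGLISH", "value": "en_IN" },
    { "label": "हिंदी", "value": "hi_IN" },
    { "label": "ಕನ್ನಡ", "value": "kn_IN" }
  ],
  "defaultLanguage": "en_IN"
}
```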
"हिंदी" and "ಕನ್ನಡ",”language3” are more than one languages(Hindi,Kannada,somelangauge) added other than "ENGLISH".
In the UI, the labels and master values that populate dropdowns or textboxes are added as keys for localization. For example, when a user logs in, at the top of the inbox page a welcome message in English shows as "Welcome User name". The text "Welcome" is the English localization for the key "CS_LANDING_PAGE_WELCOME_TEXT".
For all the label or master value keys, localization should be pushed to the database through the endpoints for all the languages added in the system. The SMS/Email messages are also added as keys for which values are pushed in all the languages to the database.
Localization format for keys - sample localization (Hindi and English):
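An illustrative pair of localisation entries for the key mentioned above; the module and locale values, and the Hindi text, are indicative only.

```json
{
  "messages": [
    {
      "code": "CS_LANDING_PAGE_WELCOME_TEXT",
      "message": "Welcome",
      "module": "rainmaker-common",
      "locale": "en_IN"
    },
    {
      "code": "CS_LANDING_PAGE_WELCOME_TEXT",
      "message": "स्वागत",
      "module": "rainmaker-common",
      "locale": "hi_IN"
    }
  ]
}
```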
For the languages added in the system, if values are not pushed to the database, then the key itself will appear in the UI for labels or master data. If the values for SMS/Email are not pushed, the SMS/Email cannot be received.
Any one language from the multiple added languages can be set as the default. For example, if English, Hindi and Kannada are the three languages added in StateInfo.json and Kannada is required as the default language, then in StateInfo.json the language key for Kannada needs to be set as the value of the text "defaultLanguage".
Title
Link
StateInfo.json
This section walks you through the steps to add a new language or set up the default language in the DIGIT system.
This section contains docs that walk you through the various steps required to configure DIGIT services.
The indexer uses one config file per module to store all the configurations pertaining to that module. The indexer reads multiple such files at start-up to support indexing for all the configured modules. In the config we define the source, the destination Elasticsearch index name, custom mappings for data transformation and mappings for data enrichment. Below is a sample configuration for indexing TL application creation data into Elasticsearch.
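The original sample is not reproduced on this page; the sketch below only illustrates the shape of an indexer configuration using the fields described next, with placeholder topic, index and JSONPath values.

```yaml
ServiceMaps:
  serviceName: Trade License
  version: 1.0.0
  mappings:
    - topic: save-tl-tradelicense        # placeholder topic
      configKey: INDEX
      indexes:
        - name: tlindex-v1               # placeholder index name
          type: licenses
          id: $.Licenses.*.id
          isBulk: true
          timeStampField: $.Licenses.*.auditDetails.createdTime
          jsonPath: $.Licenses
          customJsonMapping:
            indexMapping: {"Data": {"tradelicense": {}, "ward": {}}}
            fieldMapping:
              - inJsonPath: $.tradeLicenseDetail
                outJsonPath: $.Data.tradelicense
```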
The fields in the configuration are described below:
serviceName - Name of the module to which this configuration belongs.
summary - Summary of the module.
version - Version of the configuration.
mappings - List of definitions within the module. Every definition corresponds to one index requirement; every object received on the Kafka queue can be used to create multiple indexes, and each of these indexes needs its own configuration. All such configurations belonging to one topic form one entry in the mappings list. The keys listed below together form one definition, and multiple such definitions make up the mappings list.
topic - The topic on which the data has to be received to activate this particular configuration.
configKey - Key to identify the type of job this config is for. Values: INDEX (live index), REINDEX (reindex), LEGACYINDEX (legacy index).
indexes - Key to configure multiple index configurations for the data received on a particular topic. Multiple indexes based on different requirements can be created from the same object.
name - Index name on Elasticsearch. (The index is created with this name if it does not already exist.)
type - Document type within that index to which the index JSON has to go. (Elasticsearch uses the structure index/type/docId to locate any document within index/type with id = docId.)
id - Takes comma-separated JSONPaths. The JSONPaths are applied to the record received on the queue, and the values obtained are appended and used as the ID for the record.
isBulk - Boolean key to identify whether the JSON received on the queue is from a bulk API, in other words whether the JSON contains a list at the top level.
jsonPath - Key to be used when indexing only a part of the input JSON, or when indexing a custom JSON whose values are to be fetched from this part of the input.
timeStampField - JSONPath of the field in the input which can be used to obtain the timestamp of the input.
fieldsToBeMasked - A list of JSONPaths of the fields of the input to be masked in the index.
customJsonMapping - Key to be used while building an entirely different object using the input JSON on the queue.
indexMapping - A skeleton/mapping of the JSON that is to be indexed. Note that this JSON must always contain a key called "Data" at the top level, and the custom mapping begins within this key. This is only a convention to simplify dashboarding on Kibana when data from multiple indexes has to be fetched for a single dashboard.
fieldMapping - Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that has to be mapped to a field of the index JSON mentioned in the key 'indexMapping' in the config.
inJsonPath - JSONPath of the field from the input.
outJsonPath - JSONPath of the field of the index JSON.
externalUriMapping - Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be enriched using APIs from external services. The configuration for those APIs is also part of this.
path - URI of the API to be used. (It should be a POST /_search API.)
queryParam - Configuration of the query params to be used for the API call. It is a comma-separated list of key-value pairs, where the key is the parameter name as per the API contract and the value is the JSONPath of the field to be equated against this parameter.
apiRequest - Request body of the API. (Since only _search APIs are used, it should contain only RequestInfo.)
uriResponseMapping - Contains a list of configurations. Each configuration contains two keys: a JSONPath to identify the field from the response, and a JSONPath to map the response field to a field of the index JSON mentioned in the key 'indexMapping'.
mdmsMapping - Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be denormalised using APIs from the MDMS service. The configuration for those MDMS APIs is also part of this.
path - URI of the API to be used. (It should be a POST /_search API.)
moduleName - Module name from MDMS.
masterName - Master name from MDMS.
tenantId - Tenant id to be used.
filter - Filter to be applied to the data to be fetched.
filterMapping - Maps the fields of the input JSON to variables in the filter.
variable - Variable in the filter.
valueJsonpath - JSONPath of the input to be mapped to the variable.
To use the generic GET/POST SMS gateway, first configure the service application properties:
sms.provider.class=Generic
This sets the generic interface to be used. This is the default implementation, which works with most SMS providers. The generic implementation supports the following:
GET or POST based API
Supports query params, form data, JSON Body
To configure the URL of the SMS provider use sms.provider.url property.
To configure the http method used configure the sms.provider.requestType property to either GET or POST.
To configure form data or json api set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively
To configure which data needs to be sent to the API, the below properties can be configured:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementation which requires data that doesn't need to be directly passed to the REST API call
sms.config.map is a map of parameters and their values
Special variables that are mapped
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from kafka fetched message
$message maps to the message from the kafka fetched message
$<name> - any variable that is not in the above list is first looked up in sms.category.map, then in application.properties, and then in the environment variables (with the name in full upper case and _ replacing -, space or .)
So if you use sms.config.map={'u':'$username', 'p':'password'}, then the API call will be passed <url>?u=<$username>&p=password
Message delivery success can be verified using the below properties:
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a comma-separated list of numbers or number patterns. To use patterns, use X for a single-digit match and * to match any number of digits.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, the exact number 9999999999, and all numbers from 8888888800 to 8888888899.
A few third-party providers require a prefix of 0, 91 or +91 with the mobile number. In such a case you can use sms.mobile.prefix to automatically add the prefix to the mobile number coming in the message queue. A consolidated sample of these properties is shown below.
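Putting the properties described above together, a typical configuration might look like the sketch below; the URL and credential values are placeholders, not those of an actual provider.

```properties
# Placeholder provider URL and credentials
sms.provider.class=Generic
sms.provider.url=https://smsprovider.example.com/send
sms.provider.requestType=POST
sms.provider.contentType=application/x-www-form-urlencoded
sms.provider.username=demo-user
sms.provider.password=demo-password
sms.senderid=EGOVS
sms.config.map={'uname':'$username', 'pwd':'$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message'}
sms.success.codes=200,201,202
sms.mobile.prefix=91
```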
A workflow is defined as a sequence of tasks that have to be performed on an application/entity to process it. The egov-workflow-v2 service is a workflow engine which helps in performing these operations seamlessly using a predefined configuration. This document discusses how to create this configuration for a new product.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role-Action mappings are added for the BusinessService APIs
Create and modify workflow configuration according to the product requirements
Configure state-level as well as BusinessService-level SLA to efficiently track the progress of the application
Control access to perform actions through configuration
The attributes of the workflow configuration are described below:
tenantId - The tenantId (ULB code) for which the workflow configuration is defined.
businessService - The name of the workflow.
business - The name of the module which uses this workflow configuration.
businessServiceSla - The overall SLA to process the application (in milliseconds).
state - Name of the state.
applicationStatus - Status of the application when in the given state.
docUploadRequired - Boolean flag indicating if documents are required to enter the state.
isStartState - Boolean flag indicating if the state can be used as the starting state of the workflow.
isTerminateState - Boolean flag indicating if the state is a leaf node or end state in the workflow configuration. (No actions can be taken on states with this flag set to true.)
isStateUpdatable - Boolean flag indicating whether the application data can be updated while taking action on the state.
currentState - The current state on which the action can be performed.
nextState - The resultant state after the action is performed.
roles - A list containing the roles which can perform the actions.
auditDetails - Contains fields to audit edits on the data (createdTime, createdBy, lastModifiedTime, lastModifiedBy).
Deploy the latest version of egov-workflow-v2 service
Add businessService persister yaml path in persister configuration
Add Role-Action mapping for BusinessService API’s
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
The workflow configuration has three levels of hierarchy: BusinessService, State and Action. The top-level object is BusinessService; it contains fields describing the workflow and the list of States that are part of the workflow. The businessService can be defined at the tenant level, like pb.amritsar, or at the state level, like pb. All objects maintain an audit sub-object which keeps track of who created and updated them and when.
Each State object is a valid status for the application. The State object contains the information of the state and what actions can be performed on it.
The action object is the last object in the hierarchy, it defines the name of the action and the roles that can perform the action.
The workflow should always start from the null state as the service treats new applications as having null as the initial state. eg:
In the action object, the application will be sent to whatever nextState is defined. It can be another forward state or even a backward state the application has already passed through (generally, such actions are named SENDBACK).
SENDBACKTOCITIZEN is a special keyword for an action name. This action sends the application back to the citizen's inbox for the citizen to take action. A new state should be created on which the citizen can take action, and it should be the nextState of this action. While calling this action from a module, the assignees should be enriched by the module with the uuids of the owners of the application.
For integration-related steps please refer to the document Setting Up Workflows.
Title
Link
Workflow Service Documentation
Setting Up Workflows
Link
_create
_update
_search
(Note: All the APIs are in the same Postman collection, therefore the same link is added in each row)
Every service integrated with the egov-workflow-v2 service needs to first define the workflow configuration, which describes the states in the workflow, the actions that can be taken on those states, who can perform those actions, the SLA, etc. This configuration is created using APIs and is stored in the DB. The configuration can be created either at the state level or the tenant level based on the requirements.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role Action mapping is added for the BusinessService API’s
Create and modify workflow configuration
Configure State level as well BusinessService level SLA
Control access to workflow actions from the configuration
Validates if the flow defined in the configuration is complete during the creation
Deploy the latest version of egov-workflow-v2 service
Add Role-Action mapping for the BusinessService APIs (preferably add _create and _update only for SUPERUSER; _search can be added for CITIZEN and the required employee roles like TL_CEMP etc.)
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
Add businessService persister yaml path in persister configuration
Create the businessService JSON based on the product requirement. Following is a sample JSON of a simple two-step workflow where an application can be applied by a citizen or counter employee and then either rejected or approved by the approver.
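The original sample JSON is not shown on this page; the sketch below is an indicative two-step configuration built from the attributes described earlier. The tenantId, state names, role names and SLA values are placeholders, and the start of the workflow is modelled as a state whose state/applicationStatus are null, as described above.

```json
{
  "BusinessServices": [
    {
      "tenantId": "pb",
      "businessService": "NewSampleService",
      "business": "sample-module",
      "businessServiceSla": 432000000,
      "states": [
        {
          "state": null,
          "applicationStatus": null,
          "isStartState": true,
          "isTerminateState": false,
          "isStateUpdatable": true,
          "actions": [
            { "action": "APPLY", "nextState": "PENDINGAPPROVAL", "roles": ["CITIZEN", "COUNTER_EMPLOYEE"] }
          ]
        },
        {
          "state": "PENDINGAPPROVAL",
          "applicationStatus": "PENDINGAPPROVAL",
          "docUploadRequired": false,
          "isStartState": false,
          "isTerminateState": false,
          "actions": [
            { "action": "APPROVE", "nextState": "APPROVED", "roles": ["APPROVER"] },
            { "action": "REJECT", "nextState": "REJECTED", "roles": ["APPROVER"] }
          ]
        },
        { "state": "APPROVED", "applicationStatus": "APPROVED", "isTerminateState": true },
        { "state": "REJECTED", "applicationStatus": "REJECTED", "isTerminateState": true }
      ]
    }
  ]
}
```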
Once the businessService json is created add it in the request body of _create API of workflow and call the API to create the workflow.
To update the workflow, first search the workflow object using the _search API, then make changes to the businessService object and call _update with the modified search result. (States cannot be removed using the _update API as that would leave applications in that state in an invalid state. In such cases, first move all the applications in that state to a forward or backward state, and then disable the state directly through the DB.)
The workflow configuration can be used by any module which performs a sequence of operations on an application/Entity. It can be used to simulate and track processes in organisations to make it more efficient and increase accountability.
Integrating with the workflow service provides a dynamic workflow configuration which can be easily modified according to changing requirements. The modules don't have to deal with any workflow validations, such as whether the user is authorised to take an action or whether documents are required to be uploaded at a certain stage, as these are automatically handled by the egov-workflow-v2 service based on the configuration defined. It also automatically keeps the SLA updated for all applications, which provides a way to track the time taken for an application to get processed.
To integrate, host of egov-workflow-v2 should be overwritten in helm chart
/egov-workflow-v2/egov-wf/businessservice/_search should be added as the endpoint for searching workflow configuration. (Other endpoints are not required once workflow configuration is created)
The configuration can be fetched by calling _search API
Title
Link
Workflow Service Documentation
Link
_create
_update
_search
(Note: All the APIs are in the same Postman collection, therefore the same link is added in each row)
Through the report service, useful data is shown for a specific module based on given criteria like date, locality, financial year, etc.
For example, for the PT dump report of the property tax service, you have to select a from date, a to date, a financial year, etc., and based on the criteria all the data fulfilling them is shown. In the response we see all the details of the properties paid for between the given from date and to date; if a financial year is selected, we see the properties paid for in that specific financial year.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository where Reports are configured and knowledge on YAML.
Prior Knowledge of YAML.
Prior Knowledge of SQL queries.
Prior Knowledge of the relation between the tables for which module you are going to write a report.
User can write queries (like SQL queries) for fetching the real-time data to display in a UI application.
User can apply filters like from date, to date, financial year, etc based on the report configuration.
User can download the result in PDF and XLS format.
User can select or deselect the columns user wants to see.
User can choose the number of records he/she wants to see on a page.
Once the changes have been made in the report configuration file, the report service has to be restarted so that it reads the new configuration.
<Module Name>=file:///work-dir/configs/reports/config/<report file name>.yml
ex: pgr=file:///work-dir/configs/reports/config/pgr-reports.yml
Write the report configuration. Once it is done commit those changes.
Add the role and actions for the new report.
Restart the MDMS and report service.
A decision support system (DSS) is a composite tool that collects, organizes, and analyzes business data to facilitate quality decision-making for management, operations, and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives, and business models. DSS analysis helps organizations to identify and solve problems, and make decisions
This document explains the steps on how to define the configurations & set up for the new dashboard in the DSS.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot
Prior Knowledge of Kafka
Prior Knowledge of Elastic Search
Prior Knowledge of Kibana
Prior Knowledge of EQL (Elastic Query Language)
Prior Knowledge of JSON
Creating a DSS dashboard schema
DSS ingest service APIs
Ingest service configurations
Creating Kafka sync connector to push the data to Elastic search
Before indexing into the DSS collection v2 index, the schema should be created in ES using the Kibana query given in the file below.
2. DSS ingest service API
3. Ingest service configurations
Transform collection schema for V2
This transform collection v1 configuration file is used to map the incoming data. The mapped data will go inside the data object of the DSS collection v2 index.
Here: $i, the variable value that gets incremented for the number of records of paymentDetails.
$j, the variable value that gets incremented for the number of records of billDetails.
Enrichment Domain Configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, then the Collection domain config is picked, and based on the business type specified in the data, the right config is chosen.
In order to enhance the data of Collection, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
Topic Context Configuration
Topic Context Configuration is an outline to define which data is received on which Kafka Topic.
The Indexer service and many other services send out data on different Kafka topics. If the Ingest service is asked to receive that data and pass it through the pipeline, the context and the version of the data being received have to be set. This configuration is used to identify which Kafka topic the data was consumed from and what the mapping for it is.
JOLT Domain Transformation Schema
JOLT is a JSON to JSON Transformation Library. In order to change the structure of the data and transform it in a generic way, JOLT has been used.
While the transformation schemas are written for each Data Context, the data is transformed against the schema to obtain transformed data.
Validator Schema
Validator Schema is a configuration schema library from Everit. By validating the data against this schema, it ensures that the data abides by the rules and requirements defined in the schema.
Enhance Domain configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, then the Collection domain config is picked, and based on the business type specified in the data, the right config is chosen and the final data is placed inside the domain object.
In order to enhance the data of Collection, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
For Kafka Connect to work, direct push must be disabled in the Ingest pipeline application properties or in the environment variables.
es.push.direct=false
If DSS collection index data is indexed directly (without the Kafka connector) into ES through the ingest pipeline, then direct push must be enabled in the application properties or in the environment variables.
es.push.direct=true
4. Creating a Kafka sync connector to push the data to the Elasticsearch
Configure the Kafka topics in the environments or Ingest pipeline application properties as shown below.
To start the indexing we create a connector that takes data from the topic and pushes it to the index mentioned in "transforms.TopicNameRouter.replacement"; the ES host has to be specified in the Kafka connection via the "connection.url" property.
To create the Kafka connector run the below curl command inside the playground pod:
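The original command is not reproduced on this page; an indicative Kafka Connect request is sketched below. Only connection.url and transforms.TopicNameRouter.replacement are named in the text above; the host names, connector name, topic, regex and index values are placeholders.

```sh
curl -X POST http://kafka-connect.kafka-cluster:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "dss-collection-v2-es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "connection.url": "http://elasticsearch-data-v1.es-cluster:9200",
      "topics": "egov-dss-ingest-enriched",
      "key.ignore": "true",
      "schema.ignore": "true",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter.schemas.enable": "false",
      "transforms": "TopicNameRouter",
      "transforms.TopicNameRouter.type": "org.apache.kafka.connect.transforms.RegexRouter",
      "transforms.TopicNameRouter.regex": ".*",
      "transforms.TopicNameRouter.replacement": "dss-collection_v2"
    }
  }'
```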
Rainmaker has a report framework to configure new reports. As part of the report configuration, we have to write a native SQL query to get the required data for the report. So if the query takes a long time to execute, or the query result contains a huge amount of data, it will impact the performance of the whole application.
The following are cases where we can see application performance issues because of heavy reports:
Filtering with a long date range or applying too few filters, which in turn returns huge data
Joining multiple tables to get the required data while missing indexes on the join columns
Implementing conditional logic inside the queries themselves
Writing multiple sub-queries inside a single query to get the required data
Because of heavy reports, the following things impact the platform:
When we execute a complex query on the database, a thread from the connection pool is blocked to execute the query.
When the threads from the connection pool are completely blocked, the application becomes very slow for incoming requests.
When the max request timeout is crossed, the API gateway returns a timeout error, but the connection thread on the database is still active. All such idle threads occupy database resources like memory and CPU, which in turn increases the load on the database.
Sometimes, when running huge queries, the time taken by the query leads to broken-pipe issues, which cause memory leaks and out-of-heap-memory errors. Because of this, the service frequently restarts automatically.
If a query returns huge data, the browser and the application become unresponsive.
This documentation talks about building a new dashboard in the DSS and defines the configurations required for the analytics service. The analytics microservice is responsible for building, fetching, aggregating, and computing the data on Elasticsearch into a consumable data response, which is later used for visualizations and graphical representations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge on JSON
Prior Knowledge on Elasticsearch Query Language
Prior Knowledge on Kibana
DSS setup
Adding new Roles for Dashboards
Adding a new Dashboard
Adding new Visualizations in existing Dashboard
Adding new charts for visualizations :
To add a new role, the RoleDashboardMappingsConf.json configuration file (roles node) has to be modified as below. In the roles array, every JSON object is unique based on the id. The name of the role is defined in the roleName attribute.
To assign a dashboard to a particular role, add the id and name of the dashboard in the dashboards array. This dashboard id is unique and refers to the MasterDashboardConfig.json file configuration.
Any number of roles & dashboards can be added
Below is a sample to add a new role object
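An indicative role object; the id, role name and dashboard reference are placeholders, not values from an actual deployment.

```json
{
  "roles": [
    {
      "id": 10,
      "roleName": "STATE_ADMIN",
      "description": "State level administrator",
      "dashboards": [
        { "id": "state-revenue-dashboard", "name": "DSS_REVENUE_DASHBOARD" }
      ]
    }
  ]
}
```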
2. Adding a new Dashboard
To add a new dashboard, the MasterDashboardConfig.json file (dashboards node) has to be modified as below.
Add the new JSON object in the dashboards array. Add the dashboard name in the name attribute; the id should be unique, as it is used for assigning a role to the dashboard. Visualizations are discussed below.
In the dashboards array, add a new dashboard as given below.
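An indicative dashboard entry; the names, ids and extra attributes are placeholders, and the vizArray contents are described next.

```json
{
  "dashboards": [
    {
      "name": "DSS_REVENUE_DASHBOARD",
      "id": "state-revenue-dashboard",
      "isActive": true,
      "vizArray": [
        {
          "name": "DSS_TOTAL_COLLECTION",
          "vizType": "metric-collection",
          "noUnit": true,
          "charts": [
            { "id": "totalCollection", "name": "DSS_TOTAL_COLLECTION" }
          ]
        }
      ]
    }
  ]
}
```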
To add new visualizations, the MasterDashboardConfig.json file (vizArray node) has to be modified again as below. Add the visualization name in the name attribute. All the visualizations are added in the vizArray array; vizArray contains the name of the visualization, vizType as the visual type, noUnit, and charts.
The charts array contains the chart API configuration query details. The id refers to the chartApiConf.json file's key, used to fetch the required data from the Elasticsearch index, and the name attribute refers to the name of the chart in localization.
To add a new chart, chartApiConf.json has to be modified as shown below. A new chartid (the key of the JSON) has to be added with the chart node object. The chartid JSON contains the chart name, chart type, valueType, documentType, aggregationPaths and queries attributes.
Types of the chart: Metric, Pie, Line, Table, and xtable
aggregationPaths: the query result is taken from this path.
valueType: the result is shown in the UI based on the value type. The different value types are amount, percentage, and number.
The queries array contains the information of the module, requestQueryMap (request params of the API), dateRefField (the date data is filtered based on this field), indexName, and aggrQuery. Queries for multiple modules can be added to a single chart.
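An indicative chart definition in chartApiConf.json, built only from the attributes named above; the chart id, index name, field paths and aggregation query are placeholders.

```json
{
  "totalCollection": {
    "chartName": "DSS_TOTAL_COLLECTION",
    "chartType": "metric",
    "valueType": "amount",
    "documentType": "_doc",
    "aggregationPaths": ["Total Collection"],
    "queries": [
      {
        "module": "COMMON",
        "indexName": "dss-collection_v2",
        "dateRefField": "dataObject.paymentDetails.receiptDate",
        "requestQueryMap": "{\"tenantId\":\"dataObject.tenantId\"}",
        "aggrQuery": "{\"aggs\":{\"Total Collection\":{\"sum\":{\"field\":\"dataObject.paymentDetails.totalAmountPaid\"}}}}"
      }
    ]
  }
}
```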
For more information, please refer to the reference documents listed below.
Configuring a report for a module requires adding the report configuration as per the standard format, with minimal development time.
The UI can have different types of filters such as date, dropdown, etc., and even the sum of a column can easily be displayed in the UI. Pagination and downloading the report in PDF and XLS formats are already present in the report UI.
Type of Reports which can be configured :
Count of applications
Statewide collections
Application status
Cancelled receipts
Migrated records / Data entry records
The limitation of this framework is for reports that require complex queries with multiple joins: since the report uses the query to fetch the data from the database, it is resource-intensive and the response might be slow in those scenarios.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permissions to edit the git repository to add the report configuration
User with permissions to add action and role action in the mdms
Showcase the data in the required and cleaner format.
The UI is rendered with the help of configuration in the report and there is no extra effort in building UI for different reports.
For Implementation specific report requirements, customization is easy and turn around time is less.
After adding the new report/ editing existing report configuration in the respective module, the report service needs to be restarted.
Create a reports.yml file and add the report configuration as per the standard format (a minimal sketch is shown after these steps).
Add the action and role action in the mdms.
Add the github raw path of the report.yml file in the report.config file
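A minimal sketch of such a report configuration, under the assumption that the report framework's usual attributes (report name, source columns, search params and a native SQL query) are used; the names, labels, table and query below are placeholders, not an actual module's report.

```yaml
ReportDefinitions:
  - reportName: SampleApplicationCountReport     # placeholder name
    summary: Count of applications by status
    version: 1.0.0
    moduleName: rainmaker-sample
    sourceColumns:
      - name: status
        label: reports.sample.status
        type: string
        showColumn: true
      - name: count
        label: reports.sample.count
        type: string
        showColumn: true
    searchParams:
      - name: fromDate
        label: reports.sample.fromDate
        type: epoch
        source: eg_sample_application
        isMandatory: false
        searchClause: AND createdtime >= $fromDate
    query: SELECT status, COUNT(*) AS count FROM eg_sample_application GROUP BY status
```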
This section provides a step by step guide to setting up workflows and configuring the workflows for DIGIT entities.
Roles define the permissions of a user to perform a group of tasks. For example, for a Trade License application, initiate, forward, approve and payment are tasks which require permission. A user assigned the role Citizen or Counter Employee can perform initiation and payment, a TL Document Verifier can forward the application, and only a user assigned the role TLApprover can approve the application.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
With roles, permission to perform a certain task can be restricted based on the requirement. For example, only a user with the role TLApprover can approve an initiated Trade License application.
While creating an employee in the system from HRMS Admin, roles can be assigned to the employee based on the requirement. The roles added in MDMS appear in the roles drop-down on the employee create screen.
In the DIGIT system, the workflow for a module can be implemented based on roles. For example, the Trade License application workflow as per roles is: CounterEmployee/Citizen > TLDocVerifier > TLApprover > CounterEmployee/Citizen. Trade License application workflow based on roles:
After adding the new role, the MDMS service needs to be restarted to read the newly added data.
Roles are added in roles.json. In MDMS, roles are added in the file roles.json under the ACCESSCONTROL-ROLES folder. Sample roles:
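The original sample is not shown on this page; the snippet below is indicative only. The tenantId, module wrapper and role values are placeholders, except TL_CEMP, which is the role code used as an example later on this page.

```json
{
  "tenantId": "pb",
  "moduleName": "ACCESSCONTROL-ROLES",
  "roles": [
    {
      "code": "TL_CEMP",
      "name": "TL Counter Employee",
      "description": "Trade License counter employee"
    },
    {
      "code": "TL_APPROVER",
      "name": "TL Approver",
      "description": "Approves Trade License applications"
    }
  ]
}
```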
A role is added as an array element under the array named “roles”.
Each role is defined with three key-value pairs. keys are “code”, ”name” and “description”.
Localization needs to be pushed for all the roles added in roles.json
Sample localization for roles, in English and Hindi:
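Indicative localisation entries for the TL_CEMP role, using the key format explained below; the module and locale values, the English name and the Hindi text are indicative only.

```json
{
  "messages": [
    {
      "code": "ACCESSCONTROL_ROLES_ROLES_TL_CEMP",
      "message": "TL Counter Employee",
      "module": "rainmaker-common",
      "locale": "en_IN"
    },
    {
      "code": "ACCESSCONTROL_ROLES_ROLES_TL_CEMP",
      "message": "टीएल काउंटर कर्मचारी",
      "module": "rainmaker-common",
      "locale": "hi_IN"
    }
  ]
}
```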
code "code": "ACCESSCONTROL_ROLES_ROLES_TL_CEMP", is the localization key for role. The key has three parts: a) ACCESSCONTROL_ROLES : It is folder and module name of MDMS, file roles.json in which roles are added. Hypen (- ) in name "ACCESSCONTROL-ROLES" is replaced with underscore ( _ ). b) ROLES : It is the role.json file name and array name under which roles as array elements are added. c)TL_CEMP : It is the unique role code.
If localization is not pushed for the roles then the key will appear in UI.
The objective of this functionality is to provide a mechanism to trigger actions on applications that satisfy certain predefined criteria. Looking at the sample use cases provided by the product team, the majority of them can be summarised as: perform action 'X' on applications that are in state 'Y' and have exceeded the state SLA by 'Z' days. We can write one query builder which takes this state 'Y' and SLA exceeded by 'Z' as search params, and then perform action 'X' on the search response. This has been achieved by defining an MDMS config like the one below:
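The actual master schema is not shown on this page; the shape below is an assumption built only from the parameters described next (businessService, state, SLA exceeded and action), and the key names are illustrative rather than the real master definition.

```json
{
  "tenantId": "pb",
  "moduleName": "Workflow",
  "AutoEscalation": [
    {
      "businessService": "PGR",
      "state": "RESOLVED",
      "stateSLAExceededBy": 1.0,
      "action": "CLOSERESOLVEDCOMPLAIN",
      "active": true
    }
  ]
}
```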
The above configuration defines the condition for triggering the escalation of applications. It triggers escalation for applications in the RESOLVED state which have exceeded the stateSLA by more than 1.0 day, by performing the CLOSERESOLVEDCOMPLAIN action on them. Once the applications are escalated, the processInstances are pushed onto the pgr-auto-escalation topic. We have done a sample implementation for pgr-services, where the persister configuration has been updated to listen on this topic and update the complaint status accordingly.
The auto-escalation for the businessService PGR is triggered by calling the workflow auto-escalation API, where the businessService is a path param. (For example, if the escalation has to be done for the tl-services NewTL workflow, the URL will be 'http://egov-workflow-v2.egov:8080/egov-workflow-v2/egov-wf/auto/NewTL/_escalate'.)
These APIs have to be configured in the cron job config so that they are triggered periodically according to the requirements. Only users with the role AUTO_ESCALATE can trigger auto-escalation, so first create users with the state-level AUTO_ESCALATE role and then add that user in the userInfo of the requestInfo. This has to be done because the cron job makes internal API calls and zuul won't enrich the userInfo.
For setting up the auto-escalation trigger, the workflow also needs to be updated. For example, to add an auto-escalate trigger on the RESOLVED state with the action CLOSERESOLVEDCOMPLAIN in the PGR businessService, search the businessService, add the following action in the actions array of the RESOLVED state and call the update API.
Suppose an application gets auto-escalated from state ‘X' to state 'Y’, employees can look at these escalated applications through the escalate search API. The following sample cURL can be used to search auto-escalated applications of PGR module belonging to Amritsar tenant -
To add a new report, first add the file path in reportFileLocationsv1 (this file stores the paths of the report configuration files).
Once the file path is added in reportFileLocationsv1, go to the folder /configs/reports/config and create a new file, naming it exactly as specified in reportFileLocationsv1.
The Ingest service is a microservice which runs as a pipeline and validates, transforms and enriches the incoming data, pushing it to an Elasticsearch index. The Ingest service fetches the data from the index (paymentsindex-v1) which is specified in the indexing service API as below. The ingest service reads the configuration files available with v1; all the configuration files can be found there.
vizArray is to hold multiple visualizations
Trade License workflow roles and their tasks:
1. CounterEmployee - Initiates the TL application for the Citizen from the counter. The initiated TL application goes to the TLDocVerifier inbox.
2. Citizen - Initiates the TL application. The initiated TL application goes to the TLDocVerifier inbox.
3. TLDocVerifier - A user with the role TLDocVerifier can forward or reject the TL application after verifying the initiated application. The rejected application shows up for re-submission in the initiator's inbox. The forwarded application goes to the TLApprover inbox.
4. TLApprover - The TLApprover can approve or reject based on the requirement. The rejected application goes back to the TLDocVerifier for re-verification. The approved application shows up as payment pending in the initiator's inbox.
5. CounterEmployee - Once the initiated application is approved by the user with the role TLApprover, the CounterEmployee can make the payment and download the receipt.
6. Citizen - Once the initiated application is approved by the user with the role TLApprover, the Citizen can make the payment and download the receipt.
1. code (Alphanumeric, 64, mandatory) - A unique code that identifies the user role name.
2. name (Text, 256, mandatory) - The name indicates the user role; while creating an employee, a role can be assigned to an individual employee.
3. description (Text, 256, optional) - A short narration provided to the user role name.
Title
Link
Sample roles.json
Reference link
Title
Link
DSS Backend Configuration Manual
DSS Dashboard - Technical Document for UI
DSS Technical Documentation
Details coming soon...
Title
Link
report config folder
Title
Link
Sample report.yml file
Sample report.config file
Roles define the permissions of a user to perform a group of tasks. The tasks are created as API calls that perform certain actions when a request for those calls is sent by the system. Access permission is granted by mapping roles with APIs; a user assigned those roles gets access to the corresponding APIs.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
Knowledge on how to create a new API.
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do's and don'ts while following those principles.
APIs path should be standardised as follows:
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity which is already existing
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints
Take most search parameters in POST body only
For more information about how a new API is developed, refer to the link API Do's and Don'ts.
By adding new APIs (actions) and mapping roles with those APIs, permission to perform certain tasks can be restricted based on the requirement.
After mapping Roles with APIs, the MDMS service needs to be restarted to read the newly added data.
APIs are added in actions-test.json and are referred to as actions. In MDMS, APIs are added in the file actions-test.json under the ACCESSCONTROL-ACTIONS-TEST folder.
API sample -
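The original sample is not shown on this page; the entry below is indicative only, with placeholder id, url and other values. The individual fields are described in the table that follows.

```json
{
  "tenantId": "pb",
  "moduleName": "ACCESSCONTROL-ACTIONS-TEST",
  "actions-test": [
    {
      "id": 1001,
      "name": "Create Trade License",
      "url": "/tl-services/v1/_create",
      "displayName": "Create TL",
      "orderNumber": 1,
      "parentModule": "",
      "enabled": false,
      "serviceCode": "TL",
      "code": "",
      "path": "",
      "navigationUrl": "",
      "leftIcon": "",
      "rightIcon": ""
    }
  ]
}
```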
APIs are added as elements of the array "actions-test", with the request URL and the other required details.
Each action is defined with the following key-value pairs:
1. id (Numeric, mandatory) - A unique id that identifies the action.
2. name (Text, optional) - A short narration provided to the action.
3. url (Text, mandatory) - The endpoint of the API, or a type like url or card.
4. displayName (Text, optional) - The display name.
5. orderNumber (Numeric, mandatory) - A number representing the order in which to display in the UI.
6. parentModule (Text, optional) - Code of the service referred to as the parent.
7. enabled (boolean, mandatory) - To enable or disable display in the UI.
8. serviceCode (Text, optional) - Code of the service to which the API belongs.
9. code (Text, optional)
10. path (Text, optional)
11. navigationUrl (Text, mandatory) - URL to navigate to in the UI.
12. leftIcon (Icon, optional)
13. rightIcon (Icon, optional)
Roles are added in roles.json. In MDMS, roles are added in the file roles.json under the ACCESSCONTROL-ROLES folder. More about roles can be found in the link below: Adding roles to System
The mapping of roles and APIs/actions is added in roleactions.json under the folder ACCESSCONTROL-ROLEACTIONS. Sample mapping:
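The original sample is not shown on this page; the mapping below is indicative, with placeholder actionid and tenantId values and the keys described next.

```json
{
  "tenantId": "pb",
  "moduleName": "ACCESSCONTROL-ROLEACTIONS",
  "roleactions": [
    {
      "rolecode": "TL_CEMP",
      "actionid": 1001,
      "actioncode": "",
      "tenantId": "pb"
    }
  ]
}
```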
A role and API/action mapping is added as an array element under the array roleactions. Each mapping is defined with key-value pairs; the keys are rolecode, actionid, actioncode and tenantId.
1. rolecode (mandatory) - The unique code of the role, as defined in roles.json, which requires mapping to the API.
2. actionid (mandatory) - The unique id of the API/action, as defined in actions-test.json, which has to be mapped to the role.
3. actioncode (optional) - The code of the API/action, as defined in actions-test.json, which has to be mapped to the role.
4. tenantid (mandatory) - The tenant id of the state.
Title
Link
Sample actions-test.json
Sample roles.json
Sample roleactions.json Roles APIs mapping
This document explains how to upload an APK to the Play Store and make it available for end users to download and use from the Play Store.
Before starting the process of uploading the APK to the Play Store, the following requirements are a must.
Make sure that a signed APK is generated for the application that you want to upload to the Play Store (a signed APK has a key generated, which is used to release different versions of the APK).
Make sure that you have an account for the Google Play Console, have agreed to the terms and conditions, and have made the payment for the account so that it is ready for uploading an APK to the Play Store.
Two screenshots of your app; they must be at least 320 pixels wide and in PNG or JPEG format.
You must also add your high resolution app icon. It must be 512 by 512 pixels and it must be in 32-bit PNG format. This icon will be visible on the Google Play app’s page and in search results.
Next, a Feature Graphic image, which will be visible at the top of the Google Play app’s page. This image must be 1024 by 500 pixels, and maybe in JPEG or 24-bit PNG format.
Also, prepare a small description of the app in four to five lines.
Deploying the APK to the Play Store enables users to download the APK from the Play Store and use it whenever needed. By uploading the APK to the Play Store, the app becomes available to end-users around the world at their fingertips.
Now, we are going to learn step by step procedure of uploading apk to play store.
Open google play console by entering the url (https://play.google.com/apps/publish/) and log in with the user credentials.
After logging in, the following screen can be seen.
Now, on the top-right, click the Create Application button; a popup appears to enter the title of the APK. Refer to the screenshot below and click on Create.
Under the product details section, enter the description that was prepared in the beginning.
Under the assets section, attach at least two screenshots of the application, the high-resolution thumbnail icon and the feature graphic image.
Under Categorization, select the application type and category.
Coming to the contact section, add the website URL, email, and also a phone number if you wish to add one.
Next comes the privacy policy section, where you can enter the link to the privacy policy page and save it as a draft.
After saving as a draft, in the right-side menu select the option "App Release". On the app release page, under the "Production Track", click on Manage, then click on Create a Release on the next screen. Then, on the next page, under "App signing by Google Play", click Continue.
On the next page, under the Android App Bundles and APK section, add the generated APK, enter the release name, add the description related to that APK inside the <en-US> tag, and save the entered data.
After that, click on "Calculate Rating" and then click the "Apply Rating" button.
Next is "Pricing and Distribution" in the right-side menu. On this page we have the option to select whether the APK is paid or free to download, select the countries in which the app needs to be available, answer the questionnaires, and click on Save Draft.
Finally, go back to "App Release" in the right-side menu, click on the "Edit Release" button in the Production Track section, save the details, and at the end click "Start rollout to production" and confirm.
The following screen acknowledges your process is ended.
That is all about uploading an APK to the Play Store. You can check the status of the application in the right-side menu under "All Applications". It takes some hours for the app to appear in the Play Store; wait for the APK to appear there. You can also check the details of the APK in the Dashboard.
The notification service can notify the user through SMS and email for their actions on DIGIT, as an acknowledgement that their action has been completed successfully.
ex: actions like property create, TL create, etc.
To send an SMS we need the help of 2 services: the one on which the user is taking the action, and the SMS service.
To send an email we need the help of 2 services: the one on which the user is taking the action, and the email service.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot.
Prior Knowledge of Kafka.
Prior Knowledge of localization service.
For a specific action of the user, he/she will get an SMS and email as an acknowledgment.
Users can get SMS and email based on the localization language.
To trigger a notification for a specific action, the service has to listen to a particular topic, so that each time any record comes to the topic, the consumer knows that the action has been taken and can trigger a notification for it.
For example, to trigger a notification for property creation, the property service's NotificationConsumer class should listen to the topic egov.pt.assessment.create.topic, so that each time any record comes to egov.pt.assessment.create.topic, the NotificationConsumer knows that a property-create action has been taken and can trigger a notification for it.
When any record comes into the topic, the service first fetches all the required data, like user name, property id, mobile number, tenant id, etc., from the record fetched from the topic.
Then the message content is fetched from localization, and the service replaces the placeholders with the actual data.
Then the record is put onto the SMS topic which the SMS service is listening to.
The email service also listens to the same topic which the SMS service is listening to. A minimal consumer sketch follows.
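As a rough sketch of this flow (not the actual pgr-services or property-services implementation), a Spring Kafka consumer that listens on the create topic and publishes an SMS request could look like the following; the SMS topic name and the SmsRequest DTO are placeholders.

```java
import java.util.HashMap;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Illustrative sketch: listens on the module's create topic, builds the SMS text
// from a localized template and publishes it on the topic the SMS service listens to.
@Service
public class NotificationConsumer {

    // Placeholder topic name; the actual value would come from application properties.
    private static final String SMS_TOPIC = "egov.core.notification.sms";

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    @KafkaListener(topics = "egov.pt.assessment.create.topic")
    public void listen(HashMap<String, Object> record) {
        // 1. Extract the required data (user name, property id, mobile number, tenant id, ...)
        //    from the record received on the topic.
        // 2. Fetch the message content from the localization service for the user's locale
        //    and replace the placeholders with the actual values.
        String message = "Dear <userName>, your property <propertyId> has been created.";
        // 3. Push the SMS request onto the SMS topic; the email service listens on the same topic.
        kafkaTemplate.send(SMS_TOPIC, new SmsRequest("<mobileNumber>", message)); // SmsRequest is a placeholder DTO
    }
}

// Minimal placeholder DTO for the sketch.
class SmsRequest {
    private final String mobileNumber;
    private final String message;

    SmsRequest(String mobileNumber, String message) {
        this.mobileNumber = mobileNumber;
        this.message = message;
    }
}
```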
Title
Link
NotificationConsumer
Roles define the permissions of a user to perform a group of tasks. The tasks are created as API calls that perform certain actions when a request for those calls is sent by the system. For example, the key tasks for a Trade License application include initiate/apply, forward, approve and payment. For Trade License initiation, two API calls, "create" and "update", are required. The create API creates and saves the application in the database and returns an application number. The update API saves the required attached documents in the file store and returns a success acknowledgement message for the created application. Access permission to these create and update APIs is granted to the roles named Citizen and TL Counter Employee. Access permission is granted by mapping roles with APIs; a user assigned the role Citizen or TL Counter Employee can initiate/apply for a Trade License application.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of DIGIT applications is required.
User should be aware of transactional steps in the DIGIT application.
Knowledge of json and how to write a json is required.
Knowledge of MDMS is required.
User with permissions to edit the git repository where MDMS data is configured.
By mapping roles with APIs, permission to perform a certain task can be restricted based on the requirement. For example, only a user with the role TL Counter Employee or Citizen can initiate Trade License applications.
After mapping Roles with APIs, the MDMS service needs to be restarted to read the newly added data.
APIs are added in actions-test.json and are referred to as actions. In MDMS, APIs are added to the file actions-test.json under the ACCESSCONTROL-ACTIONS-TEST folder. API Sample:
APIs are added as elements of the "actions-test" array, with the request URL and other required details.
Each action is defined with the key-value pairs described in the table below; an illustrative element is sketched after the table.
| Sl. No. | Key | Data Type | Mandatory | Description |
| --- | --- | --- | --- | --- |
| 1 | id | Numeric | Yes | A unique id that identifies an action. |
| 2 | name | Text | No | A short narration provided to the action. |
| 3 | url | Text | Yes | The request URL of the API call. |
| 4 | displayName | Text | No | The display name. |
| 5 | enabled | boolean | Yes | To enable or disable display in the UI. |
| 6 | servicecode | Text | No | Code of the service to which the API belongs. |
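For illustration, a single element of the "actions-test" array combines the fields described in the table above. The values below are placeholders, not an actual DIGIT configuration:

```json
{
  "id": 1234,
  "name": "Create TL Application",
  "url": "/tl-services/v1/_create",
  "displayName": "Create TL Application",
  "enabled": true,
  "servicecode": "TL"
}
```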
Roles are added in roles.json. In MDMS, roles are added to the file roles.json under the ACCESSCONTROL-ROLES folder. More about roles can be found in the link below: Adding Roles to System
Mapping of roles and APIs/actions is added in roleactions.json, under the folder ACCESSCONTROL-ROLEACTIONS. Sample mapping:
Each role-API/action mapping is added as an element of the roleactions array and is defined with the key-value pairs rolecode, actionid, actioncode and tenantId, described in the table below; an illustrative element is sketched after the table.
| Sl. No. | Key | Mandatory | Description |
| --- | --- | --- | --- |
| 1 | rolecode | Yes | The unique code of the role defined in roles.json which requires mapping to the API. |
| 2 | actionid | Yes | The unique id of the API/action defined in actions-test.json which is to be mapped with the role. |
| 3 | actioncode | No | The code of the API/action defined in actions-test.json which is to be mapped with the role. |
| 4 | tenantid | Yes | Tenant id of the state. |
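For illustration, a single role-action mapping element under the roleactions array looks roughly like the sketch below; the values are placeholders:

```json
{
  "rolecode": "CITIZEN",
  "actionid": 1234,
  "actioncode": "",
  "tenantId": "pb"
}
```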
Title
Link
Sample actions-test.json
Sample roles.json
Sample roleactions.json Roles APIs mapping
Base product localisation is set up because the DIGIT system supports multiple languages. By setting up localisation, the UI can support multiple languages so that users can easily understand DIGIT operations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Before starting the localisation setup, one should have knowledge of React and the eGov framework.
Before setting up localisation, make sure that all the keys are pushed to the create API, and prepare the values that need to be added for each localisation key in the languages being added to the product.
Make sure you know where to add the localisation in the code.
Once localisation is done, users can view the DIGIT screens in their own language, which makes it easier to complete the whole application process, since DIGIT lets users select the language of their choice.
Once the key is added to the code as required, deployment is done in the same way the code is normally deployed.
Select a label that needs to be localised from the product code. Here is the example code for a header before setting up localisation.
The code above supports only the English language. To set up localisation for that header, the code needs to be changed in the following manner.
Comparing with the code before the localisation setup, we can see that the code below has been added:
{
labelName: "Trade Unit ",
labelKey: "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER"
},
Here, the values for the key can be added by two methods: either by using the recently developed localisation screen, or by updating the values for the keys through the create API using the Postman application.
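For illustration, pushing the value for a key through the localisation create/upsert API uses a message payload along the lines of the sketch below. The module and locale values here are assumptions; take the exact endpoint and request envelope (RequestInfo, tenantId) from the localisation service documentation:

```json
{
  "tenantId": "pb",
  "messages": [
    {
      "code": "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER",
      "message": "Trade Unit",
      "module": "rainmaker-tl",
      "locale": "en_IN"
    }
  ]
}
```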
Title
Link
Adding a New Language to the DIGIT System. Refer to the link provided for how languages are added in DIGIT.
Steps for setting up the environment and running the script file to get a fresh copy of the required Datamart CSV file.
(One Time Setup)
Step 1: Install kubectl. Go through the Kubernetes documentation page to install and configure kubectl. The following links are useful:
After installing, run the command below to check the version installed on your system:
kubectl version
Step 2: Install aws-iam-authenticator
Step 3: After installing, you need access to a particular environment cluster.
Go to the $HOME/.kube folder:
cd
cd .kube
Open the config file and replace its content with the environment cluster config file (the config file will be provided):
gedit config
Copy and paste the content from the provided config file into this opened config file and save it.
2. Exec into the pod:
kubectl exec --stdin --tty playground-584d866dcc-cr5zf -n playground -- /bin/bash
(Replace the pod name depending on what data you want. Refer to Table 1.2 for more information.)
3. Install Python and check that it installed correctly:
apt install python3.8
python --version
4. Install pip and check that it installed correctly:
apt install python3-pip
pip3 --version
5. Install psycopg2 and pandas:
pip3 install psycopg2-binary pandas
Note: If this doesn't work, try the command below and then run the command in step 5 again:
pip3 install --upgrade pip
(Every time you want a datamart with the latest data available in the pods)
1. Send the Python script to the pod:
tar cf - /home/priyanka/Desktop/mcollect.py | kubectl exec -i -n playground playground-584d866dcc-cr5zf -- tar xf - -C /tmp
Note: Replace the file path (/home/priyanka/Desktop/mcollect.py) with your own file path (/home/user_name/Desktop/script_name.py).
Note: Replace the pod name depending on what data you want (refer to Table 1.2 for more information on pod names).
2. Exec into the pod:
kubectl exec --stdin --tty playground-584d866dcc-cr5zf -n playground -- /bin/bash
(Note: Replace the pod name depending on what data you want; refer to Table 1.2 for more information.)
kubectl exec --stdin --tty <your_pod_name> -n playground -- /bin/bash
3. Move into the tmp directory and then into the directory your script was in:
cd tmp
cd home/priyanka/Desktop
For example:
cd home/<your_username>/Desktop
4. List the files there:
ls
(The Python script file should be present here. Refer to Table 1.1 for the list of script file names for each module.)
5. Run the Python script file:
python3 ws.py
(The name of the Python script file will change depending on the module. Refer to Table 1.1 for the list of script file names for each module.)
6. Outside the pod shell, in your home directory, run this command to copy the CSV file/files to your desired location:
kubectl cp playground/playground-584d866dcc-cr5zf:/tmp/mcollectDatamart.csv /home/priyanka/Desktop/mcollectDatamart.csv
(The list of CSV file names for each module is given below.)
7. The generated CSV file is ready to use.
Watch this video
OR
Follow these steps ->
(One Time Setup)
1. Install Python and check that it installed correctly:
apt install python3.8
python --version
2. Install pip and check that it installed correctly:
apt install python3-pip
pip3 --version
3. Install Jupyter:
pip3 install notebook
(Whenever you want to run Jupyter)
1. To run Jupyter:
jupyter notebook
2. To open a new notebook:
New -> Python3 notebook
3. To open an existing notebook:
Select File -> Open
Go to the directory where your sample notebook is.
Select that notebook (e.g. sample.ipynb)
Opening an existing notebook
After opening
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
After clicking Create, you will be redirected to the page where the product details, graphic assets, categorisation, etc. need to be entered.
Now, in the right-side menu, go to “Content rating” and click the Continue button, which redirects to the “Welcome to the Content Rating Questionnaire” page. Enter the email id, select your app category from the provided categories, fill in the questionnaire that appears after selecting the app category, and click “Save Questionnaire”. You will receive an email after clicking “Save Questionnaire”.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
| Jupyter | Excel |
| --- | --- |
| Using Jupyter is command-based and will take some time to get used to. | Ease of use with the graphical user interface (GUI); learning formulas is fairly easy. |
| Jupyter requires the Python language for data analysis, hence a steeper learning curve. | Negligible previous knowledge is required. |
| Equipped to handle lots of data really quickly, with the bonus of easy access to databases like Postgres and MySQL where the actual data is stored. | Excel can only handle so much data; scalability becomes difficult and messy. More data = slower results. |
Summary: Python is harder to learn because you have to download many packages and set up the correct development environment on your computer. However, it provides a big leg up when working with big data and creating repeatable, automatable analyses and in-depth visualisations.
Summary: Excel is best when doing small, one-time analyses or creating basic visualisations quickly. It is easy to become an intermediate user relatively quickly, without too much experience, due to its GUI.
Table 1.1
| Module Name | Script File Name (With Links) | Datamart CSV File Name |
| --- | --- | --- |
| PT | | ptDatamart.csv |
| W&S | | waterDatamart.csv, sewerageDatamart.csv |
| PGR | | pgrDatamart.csv |
| mCollect | | mcollectDatamart.csv |
| TL | | tlDatamart.csv, tlrenewDatamart.csv |
| Fire Noc | | fnDatamart.csv |
| OBPS (Bpa) | | bpaDatamart.csv |
Table 1.2
| Module Name | Pod Name | Description |
| --- | --- | --- |
| PT | playground-865db67c64-tfdrk | Punjab Prod Data in UAT Environment |
| W&S | playground-584d866dcc-cr5zf | QA Data |
| PGR | Local Data | Data Dump |
| mCollect | playground-584d866dcc-cr5zf | QA Data |
| TL | playground-584d866dcc-cr5zf | QA Data |
| Fire Noc | playground-584d866dcc-cr5zf | QA Data |
| OBPS (Bpa) | playground-584d866dcc-cr5zf | QA Data |
The objective of the PDF generation service is to bulk-generate PDFs as per requirement. This document contains details about how to create the config files required to generate a new PDF.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of JavaScript.
Prior knowledge of the Node.js platform.
JSONPath for filtering required data from JSON objects.
Provide flexibility to customise the PDF as per the requirement.
Supports localisation.
Provide functionality to add an image or QR code in the PDF.
Provide functionality to call an external service and create the PDF with the external service response.
Create data config and format config for a PDF according to product requirement.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment yml file
Deploy the latest version of pdf-service in a particular environment.
Config file: A JSON config file which contains the configuration for the PDF requirement. For any PDF requirement, two config files have to be added to the service.
The PDF generation service reads these files at start-up to support PDF generation for all configured modules.
| Attribute | Description |
| --- | --- |
| key | The key for the PDF; it is used as a path parameter in the URL to identify which PDF has to be generated. |
| baseKeyPath | The JSON path of the array object that needs to be processed. |
| entityIdPath | The JSON path of the unique field stored in the DB. That unique field value is mapped to the file-store id, so a PDF created earlier can be looked up directly using the unique field value, with no need to create the PDF again. |
Direct mapping
In direct mapping, we define variables whose values can be fetched from the array object extracted using baseKeyPath.
ExternalApi mapping
The externalApi mapping is used only if values from another service's response are needed. In externalApi mapping, the API endpoint has to be set properly with the correct query parameters.
Derived mapping
In derived mapping, the value of the variable defined here is the result of an arithmetic operation between a direct mapping variable and an externalApi mapping variable.
Qr code config mapping
This mapping is used to draw QR codes in the PDFs. The text shown after scanning can be a combination of static text and variables from the direct and externalApi mappings.
Sample structure of variable definition in data config
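A minimal sketch of a direct-mapping variable definition is given below; the attribute names (variable, value, path) are assumptions based on the sample data configs linked in Reference Docs, so treat this as illustrative rather than the exact schema:

```json
{
  "direct": [
    {
      "variable": "applicationNumber",
      "value": {
        "path": "$.applicationNo"
      }
    }
  ]
}
```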
Example to show date in PDF
If the format field is not specified in the date variable declaration, the date in the PDF is shown in the default format DD/MM/YYYY. For more details, refer to this page: Unix-Timestamp
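For example, a date variable with an explicit format could be declared roughly as below (the type and format attribute names are assumptions); if format is omitted, the default DD/MM/YYYY is used:

```json
{
  "variable": "applicationDate",
  "value": {
    "path": "$.auditDetails.createdTime"
  },
  "type": "date",
  "format": "DD/MM/YYYY"
}
```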
Example of external API calling to MDMS service
Example of adding Qr Code
For adding a QR code, there is a separate mapping named “qrcodeConfig” in the data config. This mapping can use variables defined in the “direct” and “external” mappings along with static text. The information shown on scanning the QR code is defined as the value. The variable defined in this mapping can be used directly in the format config as an image, e.g.:
Data Config for Qr Code:
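A hedged sketch of the qrcodeConfig mapping is shown below; it combines static text with a variable from the direct mapping, and the attribute names and {{...}} interpolation style are assumptions based on the sample configs:

```json
{
  "qrcodeConfig": [
    {
      "variable": "qrcodeimage",
      "value": "Application No: {{applicationNumber}}"
    }
  ]
}
```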
| Attribute | Description |
| --- | --- |
| key | The key for the PDF; it is used as a path parameter in the URL to identify which PDF has to be generated. |
| Content | In this section, the view of the PDF is set, i.e. what has to appear on the PDF is declared here, much like creating a static HTML page. The variables defined in the data config are declared here and placed as required. Tables can also be created and variables set as per requirement. |
| Style | This section is used to style the components, set alignment and much more. It is essentially the CSS that styles the HTML-like page. |
Example of adding a footer in the PDF (adding a page number in the footer)
The position of the page number in the footer is configurable. For more details, refer to this document: Header and Footer
Example of adding Qr Code
Format Config for Qr Code
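On the format-config side, the QR code variable can then be placed as an image element in the PDFMake content; the snippet below is an illustrative sketch (the variable name and sizes are assumptions):

```json
{
  "content": [
    {
      "image": "{{qrcodeimage}}",
      "width": 100,
      "height": 100
    }
  ]
}
```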
For Integration with UI, please refer to the links in Reference Docs
Title
Link
PDF Generation service technical documentation
Steps for Integration of PDF in UI for download and print PDF
API Swagger Documentation
Link
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
The objective of the PDF generation service is to bulk-generate PDFs as per requirement.
Before you proceed with the documentation, make sure the following pre-requisites are met -
All required data and format file path is added in the environment yml file
pdf-service is up and running
Provide functionality to download and print PDFs
Provide functionality to download and print PDFs in bulk
Create data config and format config for a PDF according to product requirement.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment yml file
Deploy the latest version of pdf-service in a particular environment.
For Configuration details please refer to the Customizing PDF Receipts & Certificates document in Reference Docs
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk
Avoid regeneration
Support QR codes and Images
Functionality to specify a maximum number of records to be written in one PDF
Uploading generated PDF to filestore and return filestore id for easy access
The following are the steps for integrating the TL certificate in the UI.
In the footer.js file, present in /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/tradelicence/applyResource, create two objects (a download object and a print object) in the footerReview function.
Example
In tlCertificateDownloadObject, give the proper label name and key for the PDF. In the link function, get the object whose mapping is required for the PDF; in this case, we want the license object. Call the function downloadCertificateForm (details about this function are described in the next step). Add the icon details to be used in the UI to represent that option. The same applies to tlcertificatePrintObject; the only difference is that the generateReceipt function is called. Then create the same two objects with similar content in the downloadPrintContainer function.
Mention the function names “downloadCertificateForm” and “generateReceipt” in the imports, because the functions are defined in /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/utils/index.js and /frontend/web/rainmaker/dev-packages/egov-tradelicence-dev/src/ui-config/screens/specs/utils/receiptPDF.js.
In index.js, define the function responsible for calling the create API of the PDF service to create the respective PDF. In that function, mention the tenant ID and the proper key value, which must be the same as the key mentioned in the data and format config. Also mention the URL /pdf-service/v1/_create and the action as get, and call the function downloadReceiptFromFilestoreID, which is responsible for calling the filestore service with the filestore id and returning the URL for the PDF.
Example of function downloadCertificateForm
Example of function generateReceipt
Title
Link
PDF Generation service technical documentation
Customizing PDF Receipts & Certificates
API Swagger Documentation
Link
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
Format Config file: This config file defines the format of the PDF. In the format config, we define the UI structure (e.g. CSS, layout, etc.) for the PDF as per PDFMake syntax. In the PDF UI, the places where values are to be picked from the request body are written as “{{variableName}}”, as per the mustache.js standard, and are replaced by the templating engine. Example: https://github.com/egovernments/configs/tree/master/pdf-service/format-config
Data Config file: This file contains the mapping to pick data from the request body and from any external service call response, and the variables that define where these values are to be placed in the format by the templating engine (mustache.js). Every variable declared in the format config file must be defined in the data config file. Example: https://github.com/egovernments/configs/tree/master/pdf-service/data-config
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.