Workflows are a series of steps that move a process from one state to another through actions performed by different kinds of actors - humans, machines, time-based events, etc. - to achieve a goal such as onboarding an employee, approving an application, or granting a resource. The egov-workflow-v2 service is a workflow engine that performs these operations seamlessly using a predefined configuration.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has workflow persister config path added in it
PSQL server is running and a database is created to store workflow configuration and data
Always allow anyone with a role in the workflow state machine to view the workflow instances and comment on them.
On the creation of a workflow instance, it appears in the inbox of all employees who have roles that can perform any state-transitioning action in that state.
Once an instance is marked to an individual employee, it appears only in that employee's inbox, although point 1 still holds true - all others participating in the workflow can still search it and act on it if they have the necessary action available to them.
If the instance is marked to a person who cannot perform any state-transitioning action, they can still comment/upload and mark it to anyone else.
Overall SLA: SLA for the complete processing of the application/Entity
State-level SLA: SLA for a particular state in the workflow
Deploy the latest version of egov-workflow-v2 service
Add businessService persister yaml path in persister configuration
Add Role-Action mapping for BusinessService APIs
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level)
Create businessService (workflow configuration) according to product requirements
Add Role-Action mapping for /processInstance/_search API
Add workflow persister yaml path in persister configuration
For configuration details, please refer to the links in the Reference Docs section.
The workflow configuration can be used by any module that performs a sequence of operations on an application/entity. It can also be used to simulate and track processes in organisations, making them more efficient and increasing accountability.
Role-based workflow
An easy way of writing rules
File movement within workflow roles
To integrate, the host of egov-workflow-v2 should be overwritten in the helm chart.
/process/_search should be added as the search endpoint for searching workflow process instance objects.
/process/_transition should be added to perform an action on an application. (It is for internal use in modules and should not be added to the Role-Action mapping.)
The workflow configuration can be fetched by calling the _search API to check whether data can be updated in the current state.
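As an illustrative sketch (assuming the service is reachable through the standard /egov-workflow-v2/egov-wf context path and that a valid auth token is available), fetching the configuration might look like this:

```javascript
// Fetch the BusinessService (workflow configuration) for a tenant.
// Path and query params follow common DIGIT conventions and are assumptions here.
const searchBusinessService = async (host, authToken, tenantId, businessService) => {
  const url = `${host}/egov-workflow-v2/egov-wf/businessservice/_search` +
    `?tenantId=${tenantId}&businessServices=${businessService}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ RequestInfo: { authToken } })
  });
  const { BusinessServices } = await response.json();
  // Inspect states/actions to decide whether an update is allowed in the current state
  return BusinessServices;
};
```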
(Note: All the APIs are in the same postman collection, therefore, the same link is added in each row)
The objective of the PDF generation service is to generate PDFs in bulk as per requirements.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Install npm.
Kafka server is up and running.
egov-persister service is running and has pdf generation persister config path added to it.
PSQL server is running and the database is created to store the filestore id and job id of generated pdf.
Provide a common framework to generate PDFs.
Provide flexibility to customise the PDF as per the requirement.
Provide functionality to add an image, Qr Code in PDF.
Provide functionality to generate pdf in bulk.
Provide functionality to specify the maximum number of records to be written in one PDF.
Create data config and format config for a PDF according to product requirements.
Add data config and format config files in PDF configuration
Add the file path of data and format config in the environment .yml file
Deploy the latest version of pdf-service in a particular environment.
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk.
Avoid regeneration.
Support QR codes and Images.
Functionality to specify the maximum number of records to be written in one PDF.
Uploads generated PDFs to the filestore and returns the filestore id for easy access.
To download and print the required PDF, the _create API has to be called with the required key. (For integration with UI, please refer to the links in Reference Docs.)
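A minimal sketch of triggering PDF generation, assuming a key defined in the data/format configs; the shape of the entity payload depends on the data config and is illustrative here:

```javascript
// Trigger PDF generation for a key defined in the data and format configs.
// The query params `key` and `tenantId` follow the pdf-service contract.
const createPdf = async (host, authToken, key, tenantId, entityData) => {
  const url = `${host}/pdf-service/v1/_create?key=${key}&tenantId=${tenantId}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ RequestInfo: { authToken }, ...entityData })
  });
  // The response carries the filestore ids and job id of the generated PDF(s)
  return response.json();
};
```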
(Note: All the APIs are in the same postman collection, therefore, the same link is added in each row)
A core application which provides location details of the tenant for which the services are being provided.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
PSQL server is running and a database is created
Knowledge of egov-mdms service
egov-mdms service is running and all the required mdms master are loaded in it
The location information is also known as the boundary data of a ULB.
Boundary data can belong to different hierarchies - the ADMIN or ELECTION hierarchies defined by the administrators, and the REVENUE hierarchy defined by the revenue department.
The election hierarchy has the locations divided into several types such as zone, election ward, block, street and locality. The revenue hierarchy has the locations divided into zone, ward, block and locality.
The model which defines the localities such as zone, ward etc. is a boundary object, which contains information like name, lat, long, and parent or children boundaries if any. The boundaries are nested in a hierarchy - a zone contains wards, a ward contains blocks, and a block contains localities. The order in which the boundaries are contained in each other differs based on the tenant.
Add/update the MDMS master file which contains the boundary data of ULBs.
Add Role-Action mapping for egov-location APIs.
Deploy/Redeploy the latest version of egov-mdms service.
Fill the above environment variables in egov-location with proper values.
Deploy the latest version of egov-location service.
The boundary data has been moved to MDMS from the master tables in the DB. The location service fetches the JSON from MDMS and parses it into the boundary object structure mentioned above. A sample master would look like the snippet below.
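The snippet below is an illustrative sketch only - the tenant id, codes and coordinates are placeholder values, and the exact shape should be verified against the TenantBoundary master in your MDMS repository:

```json
{
  "tenantId": "pb.amritsar",
  "moduleName": "egov-location",
  "TenantBoundary": [
    {
      "hierarchyType": { "code": "REVENUE", "name": "REVENUE" },
      "boundary": {
        "id": "1",
        "boundaryNum": 1,
        "name": "Amritsar",
        "localname": "Amritsar",
        "longitude": 74.8723,
        "latitude": 31.634,
        "label": "City",
        "code": "PB.AMRITSAR",
        "children": [
          { "id": "2", "boundaryNum": 1, "name": "Zone 1", "label": "Zone", "code": "Z1", "children": [] }
        ]
      }
    }
  ]
}
```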
The egov-location API’s can be used by any module which needs to store the location details of the tenant.
Get the boundary details based on boundary type and hierarchy type within the tenant boundary structure.
Get the geographical boundaries by providing appropriate GeoJson.
Get the tenant list in the given latitude and longitude.
To integrate, host of egov-location should be overwritten in helm chart.
/boundarys/_search should be added as the search endpoint for searching boundary details based on tenant id, boundary type, hierarchy type etc.
/geography/_search should be added as the search endpoint. This method handles all requests related to geographical boundaries by providing appropriate GeoJson and other associated data based on tenantId or lat/long etc.
/tenant/_search should be added as the search endpoint. This method tries to resolve a given lat/long to a corresponding tenant, provided there exists a mapping between the reverse-geocoded city and the tenant.
The MDMS Tenant boundary master file should be loaded in MDMS service.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
PDFMake - used for generating PDFs.
Mustache.js - used as the templating engine to populate the format defined in the format config, from the request JSON, based on the mappings defined in the data config.
For configuration details, please refer to the reference documents.
Please refer to the API contract for the egov-location service to understand the structure of the APIs and to have a visualisation of all internal APIs.
| Environment Variables | Description |
|---|---|
| egov.wf.default.offset | The default value of offset in search |
| egov.wf.default.limit | The default value of limit in search |
| egov.wf.max.limit | The maximum number of records returned in a search response |
| egov.wf.inbox.assignedonly | Boolean flag; if set to true, the default search returns only the records assigned to the user, and if false it returns all the records based on the user's roles. (The default search is the search call when no query params are sent; records are returned based on the RequestInfo of the call. It is used to show applications in the employee inbox.) |
| egov.wf.statelevel | Boolean flag; set to true if a state-level workflow is required |
Configuring Workflows For New Product/Entity
Setting Up Workflows
API Swagger Documentation
Migration to Workflow 2.0
/businessservice/_create
/businessservice/_update
/businessservice/_search
/process/_transition
/process/_search
| Environment Variables | Description |
|---|---|
| MAX_NUMBER_PAGES | Maximum number of records to be written in one PDF |
| DATE_TIMEZONE | Date timezone which will be used to convert epoch timestamps into dates (DD/MM/YYYY) |
| DEFAULT_LOCALISATION_LOCALE | Default value of the localisation locale |
| DEFAULT_LOCALISATION_TENANT | Default value of the localisation tenant |
| DATA_CONFIG_URLS | File paths/URLs of the data configs |
| FORMAT_CONFIG_URLS | File paths/URLs of the format configs |
| Environment Variables | Description |
|---|---|
| egov.services.egov_mdms.hostname | Host name for the MDMS service |
| egov.services.egov_mdms.searchpath | MDMS search URL |
| egov.service.egov.mdms.moduleName | MDMS module which contains the boundary master |
| egov.service.egov.mdms.masterName | MDMS master file which contains the boundary details |
| Attribute Name | Description |
|---|---|
| tenantId | The tenantId (ULB code) for which the boundary data configuration is defined |
| moduleName | The name of the module where the TenantBoundary master is present |
| TenantBoundary.hierarchyType.code | Unique code of the hierarchy type |
| TenantBoundary.hierarchyType.name | Unique name of the hierarchy type |
| TenantBoundary.boundary.id | Id of the boundary defined for a particular hierarchy |
| boundaryNum | Sequence number of the boundary attribute defined for the particular hierarchy |
| name | Name of the boundary, like Block 1, Zone 1 or a city name |
| localname | Local name of the boundary |
| longitude | Longitude of the boundary |
| latitude | Latitude of the boundary |
| label | Label of the boundary |
| code | Code of the boundary |
| children | Details of its sub-boundaries |
Core Services is one of the key DIGIT components. Browse through this section to learn more about the key configuration and integration details of these core services.
This section contains the configuration documents related to the DIGIT service stack.
Click on the respective service link below to find its configuration details and additional information resources.
Customizing PDF Receipts & Certificates
Steps for Integration of PDF in UI for download and print PDF
API Swagger Documentation
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
Local setup
/boundarys/_search
/geography/_search
/tenant/_search
Indexer service runs as a separate service. This service is designed to perform all the indexing tasks of the DIGIT platform. The service reads records posted on specific Kafka topics and picks the corresponding index configuration from the yaml file provided by the respective module. The objectives of the indexer service are listed below.
To provide a one-stop framework for indexing the data to elasticsearch.
To create a provision for indexing live data, reindexing from one index to the other and indexing legacy data from the data store.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of Elasticsearch
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Performs three major tasks namely: LiveIndex, Reindex and LegacyIndex.
LiveIndex: Task of indexing the live transaction data on the platform. This keeps the es data in sync with the database.
Reindex: Task of indexing data from one index to the other. ES already provides this feature, indexer does the same but with data transformation.
LegacyIndex: Task of indexing legacy data from the tables to ES.
Provides flexibility to index the entire object, a part of the object or an entirely different custom object all using one input json from modules.
Provides features for customizing index json by field mapping, field masking, data enrichment through external APIs and data denormalization using MDMS.
One-stop shop for all the es index requirements with easy-to-write and easy-to-maintain configuration files.
Designed as a consumer to save API overhead. The consumer configs are written from scratch to have complete control over consumer behaviour.
Step 1: Write the configuration as per your requirement. The structure of the config file is explained later in the same doc.
Step 2: Check in the config file to a remote location, preferably GitHub. Currently, for dev, the files are checked into this folder - https://github.com/egovernments/configs/tree/DEV/egov-indexer
Step 3: Provide the absolute path of the checked-in file to DevOps. This file path is added to the file-read path of the egov-indexer. The file is added to the egov-indexer's environment manifest file for it to be read at the start-up of the application.
Step 4: Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
For Indexer Configuration, please refer to the document in the reference docs table given below.
a) POST /{key}/_index
Receives data and indexes it. There should be a mapping with the topic as {key} in the index config files.
b) POST /_reindex
This is used to migrate data from one index to another.
c) POST /_legacyindex
This is used to run the LegacyIndex job to index data from the database. The request body must mention the URL of the service that the indexer calls to pick up the data.
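As a sketch, pushing a record to the live-index endpoint might look like the following; the host is assumed to include the egov-indexer context path, and the key must have a topic mapping in the index config files:

```javascript
// POST /{key}/_index - the {key} must be mapped to a topic in the index config.
const indexRecord = async (host, key, record) => {
  const response = await fetch(`${host}/${key}/_index`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(record) // raw record; the indexer transforms it per config
  });
  return response.status;
};
```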
In legacy indexing and for collection-service records, LiveIndex Kafka Connect is used to do part of the work of pushing records to Elasticsearch. For more details, please refer to the document mentioned in the document list.
Goal: To onboard developers onto the XState-Chatbot code base so that they can modify existing flows or create new ones.
This document sticks to explaining the chatbot's core features and does not dive into the use cases implemented by the chatbot.
This chatbot solves the basic form-filling aspect of a chat flow. By collecting the information from the user, an API call is made to the rainmaker backend services to fulfil the user requirements. It uses the concept of StateCharts (similar to State Machines) to maintain the state of the user in a chat flow and store the information provided by the user. XState is a JavaScript implementation of StateCharts. All chat flows are coded inside the XState framework.
This chatbot does not have any Natural Language Processing component. In the future, we can extend the chatbot to add such features.
XState is a JavaScript implementation of StateCharts. There is detailed documentation available to study XState. Some concepts of XState used in the Chatbot are listed below. Basic knowledge of these concepts is necessary. It can also be learned while going through the chat flow implementation of pilot use cases in PGR and Bills.
Actions
onEntry
A few tips about using XState are given below. These have been followed throughout the pilot chat flows.
To move to any state which is not at the same hierarchical level, assign a unique ID value. If it has an ID value, address it using the # qualifier in the target attribute.
Since the ID should be unique, make sure there are no multiple states having the same ID value. If there is a duplicate, the application will not function as expected.
Any actions (like onEntry) should be surrounded by assign.
This includes almost all functions except the guard condition code snippets.
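A minimal XState v4 sketch of these tips, with hypothetical state names, showing a unique id, a target addressed with the # qualifier, and an onEntry action surrounded by assign:

```javascript
const { Machine, assign } = require('xstate');

// Hypothetical two-state flow illustrating the tips above.
const sampleFlow = Machine({
  id: 'sampleFlow',
  initial: 'askName',
  context: { name: null, message: null },
  states: {
    askName: {
      id: 'askName', // unique id so other states can target this state
      onEntry: assign((context) => { // onEntry surrounded by assign
        context.message = 'What is your name?';
        return context;
      }),
      on: { USER_MESSAGE: { target: 'greet' } }
    },
    greet: {
      onEntry: assign((context, event) => {
        context.name = event.message;
        return context;
      }),
      on: { RESTART: { target: '#askName' } } // '#' qualifier targets the state by id
    }
  }
});
```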
NodeJS
PostgreSQL
Kafka (optional)
Build a chat flow to facilitate a user to interact with rainmaker modules
Link a chat flow with backend services
Deploy the latest version of xstate-chatbot
Configure /xstate-chatbot to be a whitelisted open endpoint in zuul
Add indexer-config to the egov-indexer to index all the telemetry messages
Other configuration details are mentioned in the XState-Chatbot Integration Document.
All the interactions with the user - sending a message to the user and processing an incoming message from the user are coded as a state in the State Machine. It would be a nice start to test any chat flow with the supplementary react-app provided for the developers to execute the state machine locally. (Please follow the guidelines in the README of the react-app.)
We have applied some standard patterns to code any chat interaction. Please try to follow these patterns to code any new chat flow. These patterns are explained below. You can also study those by browsing through the code of the pilot use cases of PGR and Bills.
The chat states would only include dialogue-specific code. Any code related to the backend service should be written as a part of a separate …-service.js file.
Any code that does not include any asynchronous API call can be written as a part of the onEntry function or action.
If the function needs to make an API call, it has to be written with the invoke-onDone pattern. The asynchronous function should be written as a part of the service file. The consolidated data returned by it can be processed in the state of the dialogue file.
Helper functions are written in the dialog.js file. It is advised to use those functions as much as possible rather than writing any custom logic in dialogue files.
Apart from the chat flow and its backend service API calls, a few other components are present in the project. These components do NOT need to be modified to code any new chat flow or changed for an existing chat flow. These components with a short description for each are listed below:
Session Manager: It manages the sessions of all the users on a server. It stores the user's state in a datastore, updates it, and reads it when any new message is received on the server. Based on the state of the user, it creates a state machine and sends the incoming message event to the state machine. It sanitises the state (any sensitive data like the name and mobile number of a user are removed) before storing the state in the datastore.
Repository: It is the datastore where the states of the users get stored. To reduce dependency, an in-memory repository is also provided, which can be used by configuring an environment variable. So to run the chatbot service, PostgreSQL isn’t a hard dependency, but it is advisable to use the PostgreSQL repo provider.
Channel Provider: There can be many different WhatsApp providers. Any one of the providers will be configured to be used. A separate console WhatsApp provider is present for the developer to test the chatbot server locally. A Postman collection to mimic receiving messages from a user to the server is present in the project directory.
Localization: Every message to be sent to the user is stored within the chatbot. Localization service is not being used. These messages are present near the bottom of the dialogue files. A separate localization-service.js is provided to get the messages for the localization codes for the messages that are not owned by the chatbot. For example, the PGR complaint types data is under the ownership of the PGR module, and the messages for such can be fetched from the egov-localization-service using the functions provided in the localization-service.js.
Service Provider: To ease the initial dialogue development, instead of coding API calls to the backend services, we can configure the chat flow to use a dummy service. This can be configured using an environment variable and by modifying the service-loader.js file.
Telemetry: Chatbot logs telemetry events to a Kafka topic. (Any sensitive data will get masked before indexing the events onto ElasticSearch by egov-indexer.) The following events get logged:
Incoming message
Outgoing message
Transition of state
DIGIT is an API-based platform where each API denotes a DIGIT resource. The primary job of the Access Control Service (ACS) is to authorise end users based on their roles and provide access to DIGIT platform resources. Access control functionality works based on the points below:
Actions: Actions are events performed by a user. An action can be an API endpoint or a frontend event. Actions are defined as an MDMS master.
Roles: Roles are assigned to users; a user can hold multiple roles. Roles are defined in MDMS masters.
Role-Action: Role-action is the mapping between actions and roles. Based on the role-action mapping, the access control service identifies the applicable actions for a role.
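For illustration, a single role-action mapping entry might look like the sketch below; the field names follow the commonly used ACCESSCONTROL-ROLEACTIONS master, and the values are hypothetical:

```json
{
  "rolecode": "EMPLOYEE",
  "actionid": 1234,
  "actioncode": "",
  "tenantId": "pb"
}
```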
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
MDMS service is up and running
Serves the applicable actions for a user based on the user's roles (used to render the menu tree).
On each action which is performed by a user, access control looks at the roles for the user and validates actions mapping with the role.
Support tenant-level role-action. For instance, an employee from Amritsar can have the role of APPROVER for other ULBs like Jalandhar and hence will be authorised to act as APPROVER in Jalandhar.
Deploy the latest version of the Access Control Service
Deploy MDMS service to fetch the Role Action Mappings
Define the roles
Add the Actions (URL)
Add the role action mapping
(The details about the fields in the configuration can be found in the swagger contract)
Any microservice which requires authorisation can leverage the functionalities provided by the access control service.
Any new microservice that is added to the platform won't have to worry about authorisation. It can simply add its role-action mappings in the master data, and the Access Control Service will perform authorisation whenever the API for the microservice is called.
To integrate with the Access Control Service, the role-action mapping has to be configured (added) in the MDMS service.
The service needs to call the /actions/_authorize API of the Access Control Service to check for authorisation of any request.
User service is responsible for user data management and provides functionality to log in and log out of the DIGIT system.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
Encryption and MDMS services are running
PSQL server is running and a database is created
Redis is running
Store, update and search user data
Provide authentication
Provide login and logout functionality into the DIGIT platform
Store user data PIIs in encrypted form
Setup latest version of egov-enc-service and egov-mdms- service
Deploy the latest version of egov-user service
Add Role-Action mapping for APIs
The following application properties in the user service are configurable.
User data management and functionality to log in and log out of the DIGIT system using OTP and password.
Provides the following functionality to citizen and employee-type users:
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping (single ULB to multiple roles)
Enables employees to log in to the DIGIT system based on a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, the host of egov-user should be overwritten in the helm chart.
Use /citizen/_create and /users/_createnovalidate endpoints for creating users into the system
Use /v1/_search and /_search endpoints to search users in the system depending on various search parameters
Use /profile/_update for partial update and /users/_updatenovalidate for update
Use /password/nologin/_update for OTP-based password reset and /password/_update for logged-in user password reset
Use /user/oauth/token for generating a token, /_logout for logout and /_details for getting user information from the token
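A hedged sketch of the token call; the Basic auth header encodes the default egov-user client and is an assumption - use the client credentials configured in your environment:

```javascript
// Generate an auth token with grant_type=password.
// 'ZWdvdi11c2VyLWNsaWVudDo=' is base64 for "egov-user-client:" (assumed default client).
const login = async (host, username, password, tenantId, userType) => {
  const body = new URLSearchParams({
    grant_type: 'password',
    scope: 'read',
    username,
    password,
    tenantId,
    userType // CITIZEN or EMPLOYEE
  });
  const response = await fetch(`${host}/user/oauth/token`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      Authorization: 'Basic ZWdvdi11c2VyLWNsaWVudDo='
    },
    body
  });
  return response.json(); // contains access_token and refresh_token
};
```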
eGov Payment Gateway acts as a liaison between eGov apps and external payment gateways, facilitating payments, reconciliation of payments, and lookup of transaction status.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has pg service persister config path added in it
PSQL server is running and the database is created to store transaction data.
Create or initiate a transaction, to make a payment against a bill.
Make payment for multiple bill details [multi module] for a single consumer code at once.
Transactions are initiated with a call to the transaction/_create API - various validations are carried out to ensure the sanctity of the request.
The response includes a generated transaction id and a redirect URL to the payment gateway itself.
Various validations are carried out to verify the authenticity of the request and the status is updated accordingly. If the transaction is successful, a receipt is generated for the same.
Reconciliation is carried out by two jobs scheduled via a Quartz clustered scheduler.
The early reconciliation job is set to run every 15 minutes [configurable via app properties] and aims at reconciling transactions that were created 15-30 minutes ago and are in the PENDING state.
The daily reconciliation job is set to run once every day and aims at reconciling all transactions that are in the PENDING state, except for the ones created in the last 30 minutes.
Axis, Phonepe and Paytm payment gateways are implemented.
The following properties in the application.properties file in egov-pg-service have to be added and set to default values after integrating with a new payment gateway. The table below shows the properties for the AXIS payment gateway; the same relevant properties need to be added for other payment gateways.
Deploy the latest version of egov-pg-service
Add pg service persister yaml path in persister configuration
The egov-pg-service acts as communication/contact between eGov apps and external payment gateways.
Record of every transaction against a bill.
Record of payment for multiple bill details for a single consumer code at once.
To integrate, host of egov-pg-service should be overwritten in helm chart
/pg-service/transaction/v1/_create should be added in the module to initiate a new payment transaction, on successful validation
/pg-service/transaction/v1/_update should be added as the update endpoint to update an existing payment transaction. This endpoint is invoked only by payment gateways to update the status of payments. It verifies the authenticity of the request with the payment gateway and forwards all query params received from the payment gateway.
/pg-service/transaction/v1/_search should be added as the search endpoint for retrieving the current status of a payment in our system.
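A sketch of initiating a transaction; the field names mirror the Transaction model, but the exact contract should be verified against the API collection:

```javascript
// Initiate a payment transaction against a bill.
const createTransaction = async (host, authToken, txn) => {
  const response = await fetch(`${host}/pg-service/transaction/v1/_create`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      RequestInfo: { authToken },
      Transaction: {
        tenantId: txn.tenantId,
        txnAmount: txn.amount,        // amount to be paid against the bill
        billId: txn.billId,
        consumerCode: txn.consumerCode,
        module: txn.module,           // e.g. PT, WS
        gateway: txn.gateway,         // e.g. AXIS
        callbackUrl: txn.callbackUrl, // where the gateway redirects after payment
        user: txn.user
      }
    })
  });
  const { Transaction } = await response.json();
  // Transaction carries the generated txn id and the redirect URL to the gateway
  return Transaction;
};
```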
(Note: All the APIs are in the same postman collection, therefore, the same link is added in each row)
The consumer sometimes needs additional amounts (amendments) added to their bill due to reasons from outside the system. The addition of amounts happens with respect to the consumer code of the entity in the product (PT, WS, etc.); any unpaid demand in the system is a candidate for amendment.
Prior knowledge of billing-service in the DIGIT framework.
Amendment mainly works with two types of functionality as follows:
Amendment
Demand
Bill Amendment provides a separate flow that triggers the workflow for validating the process of adding additional amounts to existing demands. This validation was earlier available only to the respective modules. An amendment is allowed only when there is a need to add or reduce the amount from the existing bill belonging to an entity. The reasons for such cases could be:
Court case settlement
One time waiver
Write-offs
DCB correction (old demands in paid status)
Property tax remission
Criteria:
Below are certain prerequisites to creating an amendment,
presence of demand in the billing system
any one of the reasons listed above
valid document proof for the reason
there is no other amendment already in the workflow
Procedure:
The process of adding an amendment in specific scenarios is given below.
There are two scenarios on how an amendment is completed based on the paid status of the existing demands in the system.
1. when demand is unpaid/partially paid
Create a demand (Or an existing demand can be used) with demand detail → DD1.
Do not pay the bill or make a partial payment.
Create an amendment for the same consumer code (with demand detail → DD2).
Approve the amendment - the response should return an amendment with the status CONSUMED.
Search the demand or fetch the bill for the consumer code. The demand/bill should contain demand details of the demand and amendment together with DD1 and DD2 in the same demand/bill.
2. when demand is completely paid
Create demand and make complete payment or choose a consumer code which is fully paid.
Create amendment (with demand detail → DD1).
Approve amendment - the response should be APPROVED.
Create a new demand for the consumer code (with demand detail → DD3). The demand response should contain the two demand details DD1 and DD3 saved to the demand.
The amendment search returns CONSUMED status after the demand is created.
IMPACT: Does not impact any other functionality other than adding demand details to demands on APPROVAL.
IMPACTED BY: Existence of demands in the system.
WORKFLOW CONFIG:
Amendment integration helps organizations add additional value to the demand without any change in the system.
Easy to create and simple process of updating demands
Helps ease changes into the system which are not part of normal functionality - Amendment of bills in case of legal requirements.
This is integrated into the billing system by default.
The amendment facility can be used in case of a legal issue to add values to existing demands using /amendment/_create. The /amendment/_update endpoint is used to cancel the created amendments or update them as per the configured workflow.
{yet to be added}
API Definition
API List
v2 configuration details
The Collection Service serves as a revenue collection platform for all the billing systems through cash, cheque, demand drafts, or the swipe machine. It enables payment for all services provided by the eGov platform at a single point directly from the citizen or over-the-counter collection within municipalities.
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic, etc.
The following services must be up and running:
egov-localization
egov-mdms
egov-idgen
egov-url-shortening
billing-service
Allows citizens to create a payment.
Allows employees to create the payment for the citizen indirectly.
Provides facilities to capture partial and advanced payments based on configs.
Allows payment cancellation to help with scenarios of bad checks and other failed payment scenarios.
Integrates with billing service for demand back-update of payment.
Deploy the latest version of the collection-services docker builds.
The MDMS data configuration uses the same data updated by the Billing-Service.
The table below lists the application properties.
Collection service can be integrated with any organization or system that requires a payment system to keep track of its payments. Organizations can customize part of the application or its functionalities based on their requirements.
Easy payments and tracking of payments.
Configurable functionalities according to client requirement
Customers can create a payment using the /payments/_create endpoint.
Actors on the system can keep track of payments using the /payments/_search endpoint.
If a completed payment encounters a technical issue that is beyond the system, the payment can be cancelled with /payments/_workflow.
For employees to access the payments API, the respective module name should be appended to the payment API path - e.g. /payments/PT/_workflow, where PT refers to the property module.
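As a sketch, an employee-side payment search might look like the following; the /collection-services context path and the query parameters shown are assumptions to be verified against the API list:

```javascript
// Search payments for a module; the module code is appended to the path.
const searchPayments = async (host, authToken, tenantId, moduleCode, consumerCode) => {
  const url = `${host}/collection-services/payments/${moduleCode}/_search` +
    `?tenantId=${tenantId}&consumerCodes=${consumerCode}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ RequestInfo: { authToken } })
  });
  const { Payments } = await response.json();
  return Payments;
};
```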
Doc Links
API List
The main objective of the billing module is to generate bills for all revenue-based business services. To serve the bill, the billing service requires a demand. Demands are prepared by the revenue modules and stored by the billing service, based on which it generates the bill.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of KAFKA
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of demand-based systems.
The following services should be up and running:
user
MDMS
Id-Gen
URL-Shortening
notification-sms
eGov billing service creates and maintains demands.
Generates bills based on demands.
Updates the demands from payment when the collection service takes a payment.
Deploy the latest image of the billing service available.
In the MDMS data configuration, the following master data is needed for the functionality of the billing.
MDMS
Business Service JSON
TAX-Head JSON
Tax-Period JSON
Billing service can be integrated with any organization or system that wants a demand-based payment system.
Easy to create and simple process of generating bills from demands
The amalgamation of bills period-wise for a single entity like PT or Water connection.
Amendment of bills in case of legal requirements.
Customers can create a demand using the /demand/_create endpoint.
Organizations or systems can search demands using the /demand/_search endpoint.
Once the demand is raised, the system can call the /demand/_update endpoint to update the demand as needed.
Bills can be generated using /bill/_fetchbill, a self-managing API that generates a new bill only when the old one expires.
Bills can be searched using /bill/_search.
The amendment facility can be used in case of a legal issue to add values to existing demands using /amendment/_create; /amendment/_update can be used to cancel the created ones or update the workflow if configured.
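A sketch of fetching a bill; the query parameters (tenantId, consumerCode, businessService) are the usual identifiers, and the host is assumed to include the billing-service context path:

```javascript
// _fetchbill regenerates the bill only if the existing one has expired.
const fetchBill = async (host, authToken, tenantId, consumerCode, businessService) => {
  const url = `${host}/bill/_fetchbill?tenantId=${tenantId}` +
    `&consumerCode=${consumerCode}&businessService=${businessService}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ RequestInfo: { authToken } })
  });
  const { Bill } = await response.json();
  return Bill;
};
```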
Interaction Diagram V1.1:
Doc Links
API List
What is apportioning?
Adjusting the receivable amount with the individual tax head.
Types of apportioning V1.1
Default order-based apportioning(Based on apportioning order adjust the received amount with each tax head).V1.1
Types of apportioning V1.2: (TBD)
Proportionate-based apportioning (Adjust total receivable with all the tax heads equally)
Order & percentage-based apportioning (Adjust total receivable based on order and the percentage which is defined for each tax head).
Principle of apportioning
The basic principle of apportioning holds that if the full amount is paid for any bill then each individual tax head should get nullified with their corresponding adjusted amount.
Example - Case 1: when there are no arrears, all tax heads belong to their current purpose. An example is given below.
Case 2: apportioning with two years of arrears. The apportioning details for the financial year 2014-15 are given below.
The table below illustrates the demand structure generated in case there are no payments for the specified financial year (2015-16).
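A simplified sketch of the default order-based algorithm (not the service's actual implementation): sort tax heads by order, adjust each in turn, and stop when the received amount runs out. Negative heads such as rebates and exemptions add back to the payable pool, which matches the worked apportioning examples:

```javascript
// taxHeads: [{ taxHeadCode, amount, order }], paidAmount: total amount received.
// Returns the adjusted amount per tax head, apportioning lowest order first.
function apportion(taxHeads, paidAmount) {
  const sorted = [...taxHeads].sort((a, b) => a.order - b.order);
  let remaining = paidAmount;
  return sorted.map((head) => {
    // Negative heads (rebate/exemption) increase the remaining payable pool
    const adjusted = head.amount < 0 ? head.amount : Math.min(head.amount, remaining);
    remaining -= adjusted;
    return { taxHeadCode: head.taxHeadCode, adjustedAmount: adjusted };
  });
}
```

With a full payment, each tax head is nullified by its adjusted amount, which is the principle stated above.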
Whenever a user logs in, an authorization token and a refresh token are generated for them. Using the auth token, the client can make REST API calls to the server to fetch data. The auth token has an expiry period; once it expires, it cannot be used to make API calls. The client then has to generate a new authorization token by authenticating the refresh token with the server, which generates and sends a new authorization token to the client. The refresh token thus avoids the need for the client to log in again whenever the auth token expires.
The refresh token also has an expiry period; once it expires, it cannot be used to generate a new authorization token, and the user has to log in again to get a new pair of authorization and refresh tokens. Generally, the duration before the expiry of the refresh token is much longer than that of the auth token. If the user logs out of the account, both the auth token and the refresh token become invalid.
In the existing version of the chatbot, for the PGR complaint creation feature, the user has to select the city from a drop-down menu by visiting the mSeva website. This significantly reduces user convenience, as the user is required to constantly switch pages. To overcome this inconvenience, the nlp-engine service is used. The service has an algorithm that uses fuzzy matching and pattern recognition to recognise the city provided by the user as input. Based on the user input, the cities having the highest match ratio with the input are returned as the output list. A list comprising all the city names in English, Punjabi and Hindi is used as a reference for this service.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Python.
egov-mdms service is running and all the data related to the service are added to the MDMS repository.
egov-running service is running.
Provides city fuzzy search feature which returns the list of cities having the highest match ratio with the input.
City fuzzy search can support input data in English, Hindi and Punjabi language.
Provides locality fuzzy search feature which returns the list of localities having the highest match ratio with the input.
Deploy the latest version of nlp-engine service.
Whitelist the city and locality fuzzy search APIs.
The nlp-engine service is used to locate user city and locality by using fuzzy string matching and pattern recognition.
Currently integrated into the chatbots for locating user city and locality for complaint creation use case.
This feature functionality can be extended for the other entities and can be used for a fuzzy search of those different entities.
To integrate, the host of nlp-engine service module should be overwritten in the helm chart.
/nlp-engine/fuzzy/city should be added as the fuzzy search endpoint for city search.
/nlp-engine/fuzzy/locality should be added as the fuzzy search endpoint for locality search.
(Note: All the APIs are in the same postman collection therefore the same link is added in each row)
Apportion service is used to apportion the amount paid against a bill among the different tax heads based on the implemented algorithm. The default algorithm uses the order of the tax head to apportion the tax head with the lowest order apportioned off first and the highest order tax head apportioned last.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has apportioned persister config path added to it
PSQL server is running and a database is created to store apportion audit data
Apportion payment in tax heads of bill
Apportion advance amount in tax heads of demand during demand creation
Deploy the latest version of egov-apportion-service
Add apportion persister yaml path in persister configuration
There is no separate configuration required. Only the TaxHead master configured in the billing service is used.
Any payment service which wants to divide the paid amount into different tax head buckets can integrate with apportion service.
Apportions amount in tax heads
To integrate, the host of egov-apportion-service should be overwritten in the helm chart
/apportion-service/v2/bill/_apportion should be called to apportion the bill
/apportion-service/v2/demand/_apportion should be called to apportion the advance amount in demands
(Note: All the APIs are in the same postman collection therefore the same link is added in each row)
Objective: This document aims to facilitate communication between the software developers and whoever is localising the chatbot messages. The goal is to make it clear and unambiguous.
Click on the link to the google sheet below to access all the messages with the codes:
to download the file.
The project is organised such that all the messages are contained within the files present inside the /machine directory. The /service directory inside it also includes files that may contain localization messages.
Guidelines to be followed by developers:
(According to the standard pattern followed in the project, all the localization messages will be present near the end of the file in a JavaScript object named “messages”.)
Developers will be the ones first filling up the sheet with codes (and the English version of the messages). Below are the guidelines to be followed when writing the codes in the sheet:
The standard separator to be used is .(dot)
The first part is the filename. E.g. “pgr.” when the filename is pgr.js.
Use “service.” as a prefix when the file is present inside the /service directory.
In the /service directory, filenames are like egov-pgr.js
For localization messages contained in those files, instead of writing “egov-pgr” just write “pgr”
So the prefix for such files would be “service.pgr.”
All the message bundles would be present in the “messages” object near the end of the file. They have been organized in a pattern in the JS object like fileComplaint.complaintType2Step.category.question
The corresponding localization code for such a message bundle in the sheet would be “pgr.fileComplaint.complaintType2Step.category.question”, where the first “pgr.” is added as the prefix for the file name.
Once the localization codes have been written correctly (and the English version of the messages) in the sheet, it is easy to add the new message in the corresponding new column.
Some guidelines to follow when adding new messages:
The parameter names are written within {{}} (double curly brackets).
The content inside these curly brackets should be written in English even when writing messages for any new language.
The URL shortening service is used to shorten long URLs. There are scenarios where we want to avoid sending very long URLs to the user via SMS, WhatsApp etc.; this service compresses the URL.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Compress long URLs.
A converted short URL contains an id, which this service uses to identify and fetch the original long URL.
Deploy the latest version of the URL Shortening service.
Receives long URLs and converts them to shortened URLs. A shortened URL contains the path to the redirect endpoint described next. When a user clicks on a shortened URL, the user is redirected to the long URL.
The redirect endpoint resolves the id embedded in the short URL back to the long URL, and in response the user is redirected to the long URL.
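A sketch of calling the shortening endpoint; the /egov-url-shortening/shortener path is an assumption - use the endpoint configured in your environment:

```javascript
// Shorten a long URL; the response body is the shortened URL.
const shortenUrl = async (host, longUrl) => {
  const response = await fetch(`${host}/egov-url-shortening/shortener`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: longUrl })
  });
  return response.text();
};
```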
This section provides technical details about business service setup, configuration, deployment, and API integration.
The XState chatbot is a revamped version of the chatbot service, which provides users access to PGR module services like filing complaints, tracking complaints, and receiving notifications on WhatsApp. It also allows users to view receipts and pay bills for the Property, Trade Licence, FireNOC, Water and Sewerage, and BPA service modules.
File PGR complaint
Track PGR complaint
Support images when filing complaints
Notifications to citizens when an employee performs any action on the complaint
Allow users to search and pay bills of different modules.
Allow users to search and view receipts of different modules.
Allow users to change the language of their choice to have a better experience.
Pushes user interactions to Elasticsearch for telemetry.
The XState chatbot can be integrated with any other module to improve the ease of search and view bills/past payment receipts and to improve speed and convenience for bill payment. It can be integrated with the PGR module for easiness of creation and tracking of the complaint.
Increase in convenience and ease of making the bill payment.
Increase in the number of users opting for online payment.
Improvement in demand collection efficiency.
Creating an additional channel for payment.
Remove dependency on mobile/web apps or counters.
A WhatsApp provider is a third-party service that sits between a user's WhatsApp client and the XState-Chatbot server. All messages coming from or going to the user pass through the WhatsApp provider. The chatbot calls the WhatsApp provider to send messages to the user. When a user responds with any WhatsApp message, the WhatsApp provider calls the chatbot service's configured endpoint with the details, e.g. the message the user sent, the sender's number etc.
If any new WhatsApp provider is to be used with a chatbot, code must be written to convert the provider’s incoming messages to the format that the chatbot understands and also final output from the chatbot should be converted to the WhatsApp provider’s API request format.
Currently, the XState-Chatbot service is using ValueFirst as the WhatsApp Provider. This will require provider-specific environment variables to be configured. If the provider changes then, all these environment variables will also change. A few of those environment variables are stored as secrets, so these values need to be configured in env-secrets.yaml.
As this is a revamped version of the chatbot service, all of the secrets should already be present. There is no need to create new secrets.
Configuration of PGR version in chatbot
pgrVersion
pgrUpdateTopic
To configure PGR v2 in the XState chatbot, pgrVersion should be 'v2' and pgrUpdateTopic should be 'update-pgr-request'.
Configuration of city and locality search with nlp-search engine
Adding Information Image in PGR complaint creation and Open search information image
To configure the filestore id for an informational image, follow the steps mentioned below:
Download the images from the section Information Images for PGR and Open Search
For example:
a) if supportedLocales: process.env.SUPPORTED_LOCALES || 'en_IN,hi_IN'
then valuefirst-notification-resolved-templateid: "12345,6789"
b) if supportedLocales: process.env.SUPPORTED_LOCALES || 'hi_IN,en_IN'
then valuefirst-notification-resolved-templateid: "6789,12345"
(Note: Both lists should not be empty - they must contain at least one element)
Template messages with buttons are maintained in the same way as described in the previous section (Configuration of push notification template messages)
There are two types of button message
Quick Reply
Call To Action
The Value First document below provides more details.
Configuration of module for Bill payment and Receipt search
For example:
If the applicable modules are defined in the variable - bill-supported-modules: "WS, PT, TL" -
the defined modules will appear in the bill payment and receipt search. In the given example, the modules Water and Sewerage, Property Tax, and Trade license will appear for bill payment and receipt search.
Configuration of Telemetry File
Cron job mdms entry:
Information Images for PGR and Open Search
MDMS, one of the applications in the DIGIT core group of services, aims to reduce the time developers spend writing code to store and fetch master data (primary data needed for module functionality) which has no business logic associated with it. Instead of writing APIs and creating tables in every service to store and retrieve data that is seldom changed, the MDMS service keeps the data at a single location for all modules and serves it on demand with the help of no more than three lines of configuration.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of how to operate JSON data would be an added advantage to understanding the service.
Adds master data for usage without the need to create master data APIs in every module.
Reads data from GIT directly with no dependency on any database services.
Deploy the latest version of Mdms-service
Add conf path for the file location
Add master config JSON path
The MDMS service provides ease of access to master data for any service.
No time spent writing repetitive codes with no business logic.
To integrate, the host of egov-mdms-service should be overwritten in the helm chart.
egov-mdms-service/v1/_search should be added as the search endpoint for searching master data.
The MDMS client from eGov snapshots should be added as a maven dependency in pom.xml for ease of access, since it provides the MDMS request POJOs.
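A minimal sketch of an MDMS search call; the module and master names here are illustrative:

```javascript
// Search a master from MDMS using MdmsCriteria.
const searchMdms = async (host, authToken, tenantId) => {
  const response = await fetch(`${host}/egov-mdms-service/v1/_search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      RequestInfo: { authToken },
      MdmsCriteria: {
        tenantId,
        moduleDetails: [
          {
            moduleName: 'common-masters',      // illustrative module
            masterDetails: [{ name: 'Department' }] // illustrative master
          }
        ]
      }
    })
  });
  const { MdmsRes } = await response.json();
  return MdmsRes;
};
```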
Additional gateways can be added by implementing the gateway interface. No changes are required to the core packages.
Refer to billing-service config for MDMS data. The amendment makes use of the same data set.
Refer to the MDMS data configuration here.
Refer to the integration details.
Add the MDMS configs required for the nlp-engine service and restart the MDMS service.
The integration of PGR with the chatbot can be enabled and disabled by making changes in the chatbot configuration. By exporting the respective PGR service file, the PGR service feature can be enabled, and vice versa.
To configure the PGR module for use in the XState chatbot, the variable values below need to be changed in the environment configuration as per the requirement.
To enable fuzzy search for city and locality selection in the PGR complaint flow, the variable nlp-geoSearch has to be set to true in the environment configuration. To use the nlp-search engine with the XState chatbot, make sure a stable build is deployed and all the MDMS data is present for that particular environment. To know more about the nlp-search engine service, please refer to the Reference Documents section.
Upload the image to the filestore server using the upload file API from the postman collection.
For the PGR information image, mention the filestore id in the environment file.
For the open search information image, mention the filestore id in the environment file.
The integration of the bill payment and receipt search feature with the chatbot is enabled and disabled by making changes in the chatbot configuration. The payment and receipt search feature can be enabled by exporting the respective bill service and receipt service files, and vice versa.
To configure the list of modules to appear as options for payment and receipt search, add the module business service codes to the list present in the configuration file.
Add the message bundle, validation and service code for the locality searcher in the respective files.
Add the telemetry file and mention the filename in the respective configuration.
| Parameter | Description |
|---|---|
| access.token.validity.in.minutes | Duration in minutes for which the authorization token is valid |
| refresh.token.validity.in.minutes | Duration in minutes for which the refresh token is valid |

| API | Description |
|---|---|
| /user/oauth/token | Used to start the session by generating an auth token and a refresh token from the username and password, using grant_type as password. The same API can be used to generate a new auth token from a refresh token, by using grant_type as refresh_token and sending the refresh token with the key refresh_token |
| /user/_logout | Used to end the session. The access token and refresh token become invalid once this API is called. The auth token is sent as a param in the API call |
| Environment Variable | Description |
|---|---|
| WHATSAPP_PROVIDER | The provider through which WhatsApp messages are sent and received. An adapter for ValueFirst is written. If there is any new provider, a separate adapter will have to be implemented. A default console adapter is provided for developers to test the chatbot locally. |
| REPO_PROVIDER | The database used to store the chat state. Currently, an adapter for PostgreSQL is provided. An InMemory adapter is provided to test the chatbot locally. |
| SERVICE_PROVIDER | If its value is configured to be eGov, it will call the backend rainmaker services. If the value is configured as Dummy, dummy data is used rather than fetching data from APIs. The Dummy option is provided for initial dialog development and is only to be used locally. |
| SUPPORTED_LOCALES | A list of comma-separated locales supported by the chatbot. |
API Contract
| Property | Value | Remarks |
|---|---|---|
| egov.user.search.default.size | 10 | Default search record number limit |
| citizen.login.password.otp.enabled | true | Whether citizen login is OTP based |
| employee.login.password.otp.enabled | false | Whether employee login is OTP based |
| citizen.login.password.otp.fixed.value | 123456 | Fixed OTP for citizens |
| citizen.login.password.otp.fixed.enabled | false | Allow fixed OTP for citizens |
| otp.validation.register.mandatory | true | Whether OTP is compulsory for registration |
| access.token.validity.in.minutes | 10080 | Validity time of the access token |
| refresh.token.validity.in.minutes | 20160 | Validity time of the refresh token |
| default.password.expiry.in.days | 90 | Expiry period of a password |
| account.unlock.cool.down.period.minutes | 60 | Account unlock time |
| max.invalid.login.attempts.period.minutes | 30 | Window size for counting attempts before lock |
| max.invalid.login.attempts | 5 | Maximum failed login attempts before the account is locked |
| egov.state.level.tenant.id | pb | |
/citizen/_create
/users/_createnovalidate
/_search
/v1/_search
/_details
/users/_updatenovalidate
/profile/_update
/password/_update
/password/nologin/_update
/_logout
/user/oauth/token
| Environment Variable | Description |
|---|---|
| | Contains the module name of the MDMS master required for nlp-engine. |
| | Contains the file name of the MDMS master file which contains the city names in various locales. |
| | Contains the file name of the MDMS master file which contains the tenant ids of the cities present in the master file above. |
| | Contains the state-level tenantid. |
| Property | Remarks |
|---|---|
| axis.active | Boolean flag to set the payment gateway active/inactive |
| axis.currency | Currency representation for the merchant, default (INR) |
| axis.merchant.id | Payment merchant id |
| axis.merchant.secret.key | Secret key for the payment merchant |
| axis.merchant.user | User name to access the payment merchant for transactions |
| axis.merchant.pwd | Password of the user to access the payment merchant |
| axis.merchant.access.code | Access code |
| axis.merchant.vpc.command.pay | Pay command |
| axis.merchant.vpc.command.status | Command status |
| axis.url.debit | URL for making payments |
| axis.url.status | URL to get the status of the transaction |

| If set to true, the negative amount is apportioned first irrespective of tax head order |
host.name | Host name to append in the short URL |
db.persistance.enabled | Boolean flag; the short URL is stored in the database when this is set to TRUE. |

Parameter Name | Description |
---|---|
| The mobile number to be used on the server |
| Username for the configured number, used for sending messages to users through WhatsApp provider API calls |
| Password for the configured number, used for sending messages to users through WhatsApp provider API calls |
| Maps API key to access the geocoding feature |
| Contains the state level tenantid value |
| Contains the list of languages supported in the chatbot. If a new language needs to be added to the chatbot, its respective locale needs to be added to this list. |
| Contains the PGR version value to use (i.e. v1 or v2) |
| The PGR update Kafka topic name corresponding to the PGR version in use should be mentioned here. |
| Limit for the maximum number of bills shown on search. |
| Limit for the maximum number of receipts shown on search. |
| Limit for the maximum number of complaints shown on search. |
| Contains the list of modules to be used for bill payment and receipt search. |
| Contains the filestoreid of the informational image which shows how to share the user's current location. |
| Contains the filestoreid of the open-search informational image which shows how to use the open search pay feature for bill payment |
| Contains the fixed value of the login password and OTP. This value has to be configured in env-secrets.yaml. |
| Boolean flag to enable and disable the city/locality nlp search |
egov.mdms.conf.path | The default value of folder where master data files are stored |
masters.config.url | The default value of the file URL which contains master-config values |
Id-Gen service |
url-shortening |
MDMS |
Pt_tax | 1000 | 6 | 1000 | 1000 | 750 | 750 |
AdjustedAmt | 1000 | -250 | -750 | -750 |
RemainingAMTfromPayableAMT | 0 | 0 | 0 | 0 |
Penalty | 500 | 5 | 500 | 500
AdjustedAmt | 500 | -500 |
RemainingAMTfromPayableAMT | 1000 | 250 |
Interest | 500 | 4 | 500 | 500 |
AdjustedAmt | 500 | -500 |
RemainingAMTfromPayableAMT | 1500 | 750 |
Cess | 500 | 3 | 500 | 500 |
AdjustedAmt | 500 | -500 |
RemainingAMTfromPayableAMT | 2000 | 1250 |
Exm | -250 | 1 | -250 | -250 |
AdjustedAmt | -250 | 250 |
RemainingAMTfromPayableAMT | 2250 | 1750 |
Rebate | -250 | 2 | -250 | -250 |
AdjustedAmt | -250 | 250 |
RemainingAMTfromPayableAMT | 2500 | 750 |
Pt_tax | 1000 | 2014 | 2015 | 6 | Current |
AdjustedAmt | 0 |
Penalty | 500 | 2014 | 2015 | 5 | Current
AdjustedAmt | 0 |
Interest | 500 | 2014 | 2015 | 4 | Current |
AdjustedAmt | 0 |
Cess | 500 | 2014 | 2015 | 3 | Current |
AdjustedAmt | 0 |
Exm | -250 | 2014 | 2015 | 1 | Current |
AdjustedAmt | 0 |
Pt_tax | 1000 | 2014 | 2015 | 6 | Arrear |
AdjustedAmt | 0 |
Pt_tax | 1500 | 2015 | 2016 | 6 | Current |
AdjustedAmt | 0 |
Penalty | 600 | 2014 | 2015 | 5 | Arrear |
AdjustedAmt | 0 |
Penalty | 500 | 2015 | 2016 | 5 | Current |
AdjustedAmt | 0 |
Interest | 500 | 2014 | 4 | Arrear |
AdjustedAmt | 0 |
Cess | 500 | 2014 | 3 | Arrear |
AdjustedAmt | 0 |
Exm | -250 | 2014 | 1 | Arrear |
AdjustedAmt | 0 |
Details will be updated soon...
DIGIT offers key municipal services such as Public Grievance & Redressal, Trade License, Water & Sewerage, Property Tax, Fire NOC, and Building Plan Approval.
The inbox service is an aggregation service which aggregates data of municipal services and workflow based on given complex search criteria and returns applications and workflow data in paginated views. The service also returns the total count matching the search criteria.
This service allows searching of both the module objects as well as the processInstance (workflow record) based on the given criteria for any of the municipal services. It uses a module-specific configuration which is stored in application.properties as a key-value map, where the key is the businessService name and the value is the configuration map. A sample configuration is attached below -
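A minimal sketch of such a configuration map, keyed by the businessService name (PT here, per the explanation that follows); the search URL is an illustrative assumption, and only the five keys are taken from the descriptions below:

```
{
  "PT": {
    "searchPath": "http://property-services.egov:8080/property-services/property/_search",
    "dataRoot": "Properties",
    "applNosParam": "acknowldgementNumber",
    "businessIdProperty": "acknowldgementNumber",
    "applsStatusParam": "status"
  }
}
```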
Here, the key of the config map is the PT module business service for which the inbox is to be configured. The search definition details are specified below -
searchPath - Points to the search URL of the municipal module

dataRoot - The search response key returned by the module search, e.g. in the property module, the search response returns response objects inside the "Properties" key.

applNosParam - The parameter with which the workflow search is called once the module objects are retrieved. This parameter is the field that joins the module table with the workflow process instance table, e.g. in the case of the Property module it is "acknowldgementNumber".

businessIdProperty - The parameter with which module objects are searched, in case of an empty moduleSearchCriteria, by performing the workflow search first. Again, this parameter is the field that joins the module table and the workflow process instance table, e.g. in the case of the Property module it is "acknowldgementNumber".

applsStatusParam - The application status field name for the module used for the search, e.g. in the case of the Property module, it is "status".
To provide pagination and total count across multiple modules, the inbox service is integrated with the searcher. The searcher provides the list of ids and the total count of applications. The inbox service processes the count and the results are returned to the API. The sample configuration link for PT and TL modules is given below:
NLP Chatbot |
/nlp-engine/fuzzy/city |
/nlp-engine/fuzzy/locality |
Swagger API Contract |
/pg-service/transaction/v1/_create |
/pg-service/transaction/v1/_update |
/pg-service/transaction/v1/_search |
/pg-service/gateway/v1/_search |
Collection Service |
Billing Service |
API Swagger Documentation |
/apportion-service/v2/bill/_apportion |
/apportion-service/v2/demand/_apportion |
Swagger API Contract |
Local Setup |
/amendment/_create, _update |
Chatbot Message Localisation |
nlp-search engine |
/xstate-chatbot/message |
/xstate-chatbot/reminder |
/xstate-chatbot/status |
egov-mdms sample data |
master-config.json |
egov-mdms-service/v1/_search |
Billing-service |
Id-Gen service |
url-shortening |
MDMS |
/payments/_create |
/payments/_update |
/payments/_workflow |
Property | Value | Description |
---|---|---|
bs.businesscode.demand.updateurl | { | Each module's application calculator should provide its own update URL. If not present, a new bill is generated without making any changes to the demand. |
bs.bill.billnumber.format | BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}] | IdGen format for the bill number |
bs.amendment.idbs.bill.billnumber.format | BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}] | |
is.amendment.workflow.enabled | true/false | Enable/disable the bill amendment workflow |
/demand/_create, _update, _search |
/bill/_fetchbill, _search |
/amendment/_create, _update |
V2 Technical Document for UI
This release for DSS focuses on improving user experience and the ability given to the user to get deeper insights using drill-through and comparison indicators in tables.
The release includes the following features:
Breadcrumbs for better navigation
Drill through options in tables and charts
Comparison indicators in Table
In addition to the left navigation panel, breadcrumbs are useful to provide a better sense of the current page. They are also very helpful for mobile navigation. The user can navigate using the breadcrumbs by clicking on the required parent menu.
Technical Implementation Details
It works based on the current route URL and the previous route URL.
File Details - https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/Breadcrumbs.js
DSS provides the ability to configure drill-through for the required options in tables as well as charts. Drill-through options are useful for configuring the required hierarchy of a data set. This helps users go up to 'N' levels to get deeper insights.
Technical Implementation Details:
Drill-down/drill-through in tables is based on the drillDownChartId and filter. The chart id is used for the subsequent call to fetch the next table, along with the applied/selected filters.
File Details - https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/components/Charts/TableChart.js
Drill throughs in piecharts:
It is similar to the drill-down in tables. Drill-through in pie charts is based on the drillDownChartId field in the parent pie chart.
File Details - https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/components/Charts/DonutChart.js
To provide better insights into the metric performance of different dimensions, a comparison indicator is required inside data tables, usually comparing against a different time range (last year/last month) and showing the percentage change over time.
Technical Implementation Details:
Comparison with the previous year's data in every table uses the same request object, changing only the time range to the previous year/month/week.
File Details - https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/components/Charts/TableChart.js
The following method along with parameters is used to fetch the previous year's data.
After receiving last year's data it is compared with the current year's data. The comparison is shown as insight data. The comparison logic is present in uiTable.js -
TimeFilter
The current time component is not very intuitive and user-friendly, so a new library, react-date-range, is used to enhance the time filter.
File Details - https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/components/common/DateRange/index.js
Event Duration Graphs
Ability to generate graphs showcasing time spent between multiple events like average turnaround time, complaint assigning time, etc.
A DSS_EVENT_DURATION_GRAPH is added in the PGR config
DSS Backend Configuration Manual
DSS has two sides to it. One is the process in which the data is pooled to ElasticSearch and the other is the way it is fetched, aggregated, computed, transformed and sent across. DSS must be configurable since the entire process involves playing around with a variety of data sets. This ensures easy configuration of data sets in new scenarios.
This document explains the steps on how to define the configurations for both sides of DSS Analytics and Ingest Pipeline Services.
Ingest: Micro Service which runs as a pipeline and manages to validate, transform and enrich the incoming data and pushes the same to ElasticSearch Index
Analytics: Micro Service which is responsible for building, fetching, aggregating and computing the data on ElasticSearch into a consumable data response, which is later used for visualizations and graphical representations.
JOLT: JSON to JSON transformation library written in Java where the "specification" for the transform is itself a JSON document
Modules / Domain Level: These are the services in this context. Each of the services, such as Property Tax, Trade License, and Water and Sewerage, is considered a Module / Domain.
Chart: Each individual graphical representation is considered as a Chart in specific. For example, a Metric of Total Collection is considered as a Chart.
Visualization: Group of different Charts is considered as a Visualization. For example, the group of Total Collection, Target Collection and Target Achieved is considered as a Metric Collection of Charts and thus it becomes a Visualization.
Discussed below are the ingest pipeline configuration details -
Topic Context Configurations
Validator Schema
JOLT Transformation Schema
Enrichment Domain Configuration
JOLT Domain Transformation Schema
Topic context configuration is an outline to define which data is received on which Kafka Topic.
Indexer Service and many other services are sending out data on different Kafka Topics. If the Ingest Service receives the data and passes it through the pipeline, the context and the version of the data received have to be set. This configuration is used to identify which Kafka topic consumed the data and the mapping details.
Click here for the full configuration
Validator schema is a configuration schema from the Everit library. Validating the data against this schema ensures that the data abides by the rules and requirements the schema defines.
Click here for an example configuration
JOLT is a JSON to JSON transformation library. In order to change the structure of the data and transform it in a generic way, JOLT has been used.
While the transformation schemas are written for each data context, the data is transformed against the schema to obtain transformed data.
Follow the slide deck for JOLT Transformations
Click here for an example configuration
This configuration defines and directs the enrichment process that the data goes through.
For example, if the incoming data belongs to the collection module, the collection domain config is picked, and based on the specified business type the right config is chosen.
In order to enhance the data of collection, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
Click here for an example configuration
As a part of enhancement, once the domain level object is obtained, we might not need the complete document as is in the end data product.
Only those parameters which should be or can be used for aggregation and representation are to be held and others are to be discarded.
In order to do that, we make use of JOLT again and write schemas to keep the required ones and discard the unwanted ones.
The above configuration is used to transform the data response in the enrichment layer.
Click here for an example configuration
Use case:- JOLT Transformation Schema for collection V2
JOLT transformation schema for payment-v1 has been taken as a use case to explain the context collection and context version v2. The payment records are processed/transformed with the schema. The schema supports splitting the billing records into an independent new record. So if there are 2 bill items in the collection/payment incoming data then this results in 2 collection records in turn.
Click here for an example configuration
Here, $i is the variable value that gets incremented for the number of paymentDetails records, and $j is the variable value that gets incremented for the number of bill detail records.
Note: For Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties or in the environment:
es.push.direct=false
Below is the list of configurations
Chart API Configuration
Master Dashboard Configuration
Role Dashboard Mappings Configuration
Each Visualization has its own properties. Each Visualization comes from different data sources (Sometimes it is a combination of different data sources)
In order to configure each visualization and its properties, we have Chart API Configuration Document.
Here the Visualization Code, which happens to be the key, has its properties configured as part of the configuration, and these are easily changeable.
Click here for an example configuration
Master dashboard configuration defines the dashboards that are to be painted on the screen.
It includes all the visualizations, their groups, the charts and even their dimensions in terms of height and width.
Click here for an example configuration
Role dashboard mapping ensures that each role is mapped against the dashboards that they are authorized to see.
Click here for an example configuration
To add a new role, modify the RoleDashboardMappingsConf.json (roles node) configuration file as given below.
Note: Any number of roles & dashboards can be added
A sample for adding a new role object and a new dashboard object is shown below (Figure 9).
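Since the figure is not reproduced here, the sketch below shows a role object using only the keys documented in the Role Dashboard Mappings parameter reference later in this document (roleId, roleName, isSuper, orgId, dashboards); all values are illustrative:

```
{
  "roles": [
    {
      "_comment": "State admin - sees the state-level dashboards",
      "roleId": 7,
      "roleName": "STADMIN",
      "isSuper": false,
      "orgId": "ORG001",
      "dashboards": [
        { "name": "DSS_OVERVIEW_DASHBOARD", "id": "home" }
      ]
    }
  ]
}
```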
To add a new dashboard, modify the MasterDashboardConfig.json (dashboards node) as shown below in Figure 10.
Note: Add a new dashboard to the dashboards array as given below.
To add new visualisations, modify the MasterDashboardConfig.json (vizArray node) as shown in Figure 11.
Note: vizArray is used to hold multiple visualizations.
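As the figures are not reproduced here, the sketch below shows a dashboard entry with one visualization group, using only the keys documented in the Master Dashboard parameter reference later in this document; all values are illustrative:

```
{
  "dashboards": [
    {
      "name": "DSS_OVERVIEW_DASHBOARD",
      "id": "home",
      "isActive": true,
      "style": "linear",
      "visualizations": [
        {
          "row": 1,
          "name": "DSS_REVENUE_COLLECTION",
          "vizArray": [
            {
              "id": 1,
              "name": "DSS_TOTAL_COLLECTION",
              "dimensions": { "height": 250, "width": 6 },
              "vizType": "metric-collection",
              "noUnit": false,
              "isCollapsible": false,
              "charts": [
                {
                  "id": 1,
                  "name": "DSS_TOTAL_COLLECTION",
                  "code": "totalCollection",
                  "chartType": "metric",
                  "filters": [],
                  "headers": []
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```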
To add a new chart, chartApiConf.json has to be modified as shown below. A new chartid has to be added with the chart node object.
Metric chart Sample as shown in Figure 12.
Pie chart Sample as shown in Figure 13.
Line chart Sample as shown in Figure 14.
Table chart sample: This chart is of 2 types - table and xtable. The table type (as shown in Figure 15) allows the addition of aggregated fields as available in the query keys. To extract the values based on the key, aggregationPaths have to be added along with their data type in pathDataTypeMapping.
xtable (as shown in Figure 16) type allows the addition of multiple computed fields with the aggregated fields added dynamically.
To add multiple computed columns, define the following params within computedFields []
actionName - (IComputedField<T> interface),
fields - [] names as existing in the query key,
newField - name to appear for the computation
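A hedged sketch of a chart definition in chartApiConf.json, using the documented keys (chartName, queries and its sub-keys, chartType, valueType, drillChart, aggregationPaths, computedFields); the visualization code, index, query strings and action name are illustrative assumptions, not values from the real file:

```
{
  "targetAchievedTable": {
    "chartName": "DSS_TARGET_ACHIEVED",
    "chartType": "xtable",
    "valueType": "number",
    "drillChart": "none",
    "aggregationPaths": ["TOTAL_COLLECTION", "TARGET_COLLECTION"],
    "_comment": "Text shown on the chart's 'i' icon",
    "queries": [
      {
        "module": "COMMON",
        "indexName": "paymentsindex-v2",
        "aggrQuery": "{\"aggs\":{\"TOTAL_COLLECTION\":{\"sum\":{\"field\":\"Data.totalAmountPaid\"}}}}",
        "requestQueryMap": "{\"tenantId\":\"Data.tenantId\"}",
        "dateRefField": "Data.transactionDate"
      }
    ],
    "computedFields": [
      {
        "actionName": "PercentageComputedField",
        "fields": ["TOTAL_COLLECTION", "TARGET_COLLECTION"],
        "newField": "TARGET_ACHIEVED"
      }
    ]
  }
}
```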
See https://github.com/egovernments/configs/blob/master/egov-dss-dashboards/dashboard-analytics/ChartApiConfig.json for the full configuration in detail.
Steps to create charts and visualise are:
Create/Add a chart in chartApiConf.json
Add a visualization for the existing dashboard in MasterDashboardConfig.json as defined above.
Or in order to create/add a new dashboard create the dashboard in MasterDashboardConfig.json and create a role in RoleDashboardConfig.json
Configuration Changes For DrillThroughs:
Example - drill through in ward table in the property tax dashboard.
wardDrillDown is the visualization code for the PT drill-down. The 'kind' attribute shows the type of visualization code. Apart from these two attributes, all the attributes are common.
Example - Drill through in the ComplaintList table in the PGR Dashboard.
complaintDrillDown is the visualization code for PGR Drill Down.
The above complaintDrillDown visualization code is called in the drill chart parameter.
A decision support system (DSS) is a composite tool that collects, organizes and analyzes business data to facilitate quality decision-making for management, operations and planning. A well-designed DSS aids decision makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives and business models. DSS analysis helps organizations identify and solve problems, and make decisions.
Code Git Repos: https://github.com/egovernments/frontend/tree/master/web/dss-dashboard
State-Level Admin
Commissioner
Domain-Level Employee
There are three types of dashboards -
Home page (refer figure 1)
Overview page (refer figure 2)
Module level dashboard (refer figure 3)
The home page contains multiple cards, each card is clickable.
There are two types of cards, i.e., the overview card and the module-level card.
The overview and the module-level cards are differentiated by vizType:
Overview card: Clicking on the overview card navigates to the overview page. vizType for overview is a collection.
Module Level card: Clicking on the module level card navigates to the module level dashboard. vizType is a module (i.e Property Tax, Trade License etc).
Request Payload for dashboardConfig
auth-token: authenticates the request; it is fetched from the local storage key "Employee.token"
DashboardConfig API Response
roleName: the type of user.
Visualisations: This key contains all configurations for displaying the visualisation, like rows with charts etc. Please refer to figure 1.3.
In Figure 1.3, the vizType key defines the module UI, like the collection chart & module chart (refer figure 1).
In dashboardConfig response, the visualisation key contains all rows & charts details (refer figure 1.3). Each row contains the visual details like name, vizType, noUnit, isCollapsible, charts etc (refer figure 1.3).
name - name of visualisation
vizType - type of visualisation like COLLECTION, MODULE, METRIC-COLLECTION, PERFORMING-METRIC, CHART
COLLECTION - The home page, contains the collection data (refer figure 1).
MODULE - The home page, contains the module-level data (refer figure 1).
METRIC-COLLECTION - In Overview/Module Level Page, contains the collection data (refer figure 2.1).
PERFORMING-METRIC -In Overview/Module Level Page, contains the top/bottom performing data (refer figure 2.2).
CHART - In Overview/Module Level Page, contains the below visualisations (refer figure 2.3 to figure 2.7).
PIE CHART (refer figure 2.3)
LINE CHART (refer figure 2.4)
BAR CHART (refer figure 2.5)
HORIZONTAL BAR CHART (refer figure 2.6)
TABLE CHART (refer figure 2.7)
Visualisations
The ULB dashboard contains different filters, i.e. ULBs and Wards/Blocks. The data for the filters is loaded from the MDMS API below - https://dev.digit.org/egov-mdms-service/v1/_search
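The request shape for this MDMS search is sketched below; the module and master names used here to load the tenant list are assumptions to verify against your MDMS data:

```
curl -X POST 'https://dev.digit.org/egov-mdms-service/v1/_search' \
  -H 'Content-Type: application/json' \
  -d '{
        "RequestInfo": { "authToken": "<auth-token>" },
        "MdmsCriteria": {
          "tenantId": "pb",
          "moduleDetails": [
            { "moduleName": "tenant", "masterDetails": [ { "name": "tenants" } ] }
          ]
        }
      }'
```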
Each ULB dashboard, overview dashboard and module-level pages contain different filters and are identified by roleName in configs API.
The Wards/Blocks filter is a dependent filter, which gets loaded on ULB selection.
In the ULB dashboard, the on-page ULB filter is applied across all the charts and for the performance chart, the default ULB filter is not applied.
The overview and all module-level pages have a ULB dashboard.
GLOBAL Filters (refer to figure 2.8)
Filters are loaded from the MDMS API - https://dev.digit.org/egov-mdms-service/v1/_search. Filters are loaded on the basis of roleName.
Admin role: On the Module level page, Date, DDR and ULB filter are loaded.
On the Overview level page, Date, DDR, ULB and Service filter are loaded.
Commissioner role: On the Module level page, Date, ULB and Wards/Blocks filters are loaded.
On the Overview page, Date, ULB and Service filters are loaded.
Denomination filter: The Denomination filter has three options to display the amount and number in a particular format.
Crore
Lakh
Unit
The denomination filter is not applied to the percentage and text (refer to figure 2.10). The type of data is identified by a symbol in the plots of charts API.
Custom Date Filter
If duration < 15 days, it displays data day-wise
If duration <= 30 days, it displays data week-wise
If duration >30, it displays data month-wise
Tabs
Currently, the dashboard contains two types of tabs -
Revenue (refer figure: 4.1)
Service (refer figure: 4.1)
Tabs are identified by name in visualisations of config API.
Table Chart with drill-down
Table chart visualisations have normal material UI data table features like search, sort etc.
In the table response, if the filter key & drillDownChartId contain any value, users can drill down the table.
Cards
Each card header is localised and has an info icon with a tooltip option that displays the header and can display a description.
The number of cards in a row and in a page is driven by the backend. The backend provides the row number to each card where it should be displayed.
Each card contains an options icon that enables users to download or share images.
Image download and share use the id from vizArray in order to differentiate each card on a page.
Download and Share (refer to figure 2.9)
Download offers two options - to download data as an image or a PDF.
Share: Share creates the Image/PDF and uploads it to S3 using the API below, which returns a file id - https://mseva-uat.lgpunjab.gov.in/filestore/v1/files
The file Id is fetched using the API - https://mseva-uat.lgpunjab.gov.in/filestore/v1/files/url
Each S3 image is shortened using the API - https://mseva-uat.lgpunjab.gov.in/egov-url-shortening/shortener
Configurations
Github link for config: https://github.com/egovernments/frontend/blob/master/web/dss-dashboard/src/config/configs.js
BASE URL: End point of REST API for dashboard.
FILE Upload: End point of REST API for file upload.
FETCH FILE: End point of REST API for file fetch.
MDMS: End point of REST API for fetching MDMS data.
SHORTEN URL: End point of REST API for the shortened URL, which is used for sharing via Email / WhatsApp.
CHART COLOR CODE: Color code object for all charts.
MODULE LEVEL: for global filters; contains service names & filter keys.
SERVICES: for the global service filter.
Upload Localisation Keys
code: pre-defined key for back-end.
message: message contains the value for the key.
module: rainmaker-dss
locale: contains locale data
More details are to be documented by the eGov team.
Module name: rainmaker-dss
NPM Module Used - https://github.com/egovernments/DIGIT-OSS/blob/master/frontend/mono-ui/web/dss-dashboard/package.json
Steps to setup DSS in Local
Step 1: To run independently, switch to the dss-dashboard folder.
Step 2: Get the below details from the environment website and update the localstorage in the browser.
Employee.tenant-id Employee.user-info Employee.token Employee.module Employee.locale localization_en_IN locale
Step 3: Run yarn install and yarn start to start working on DSS in the local setup.
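Assuming the standard frontend repo layout, the local setup amounts to the following (paths illustrative):

```
cd frontend/web/dss-dashboard   # repo layout assumed
yarn install                    # install dependencies
yarn start                      # start the local dev server
```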
DSS Features Enhancements V2: DSS Features Enhancements V2 Technical Document for UI
Migration details from v1 to v2
According to the new collection service, which follows the payment structure for storing information about payments and payment details, it is necessary to migrate the old collection structure into the new payment structure.
In the old collection service, for every transaction, the receipt number is generated on the bill detail level. Since the bill contains multiple bill details each transaction is mapped to multiple receipt numbers. So after payment of a single bill, multiple receipt numbers are generated. The mapping of the transactions to the receipt number changed in the new collection service.
In the new collection service, the receipt number is generated at the bill level. For each bill transaction, one receipt number is generated. So every bill for a consumer code and business service has one receipt number.
The records from tables egcl_receiptheader, egcl_receiptdetails, egcl_instrument, egcl_instrumentheader need to be transferred into tables egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial, egcl_billaccountdetail.
For smooth data transactions, the record from the old receipt is mapped according to the payment structure. The new payment response can be formed with receipt data.
The table below provides the mapping between receipt and payment structure with some remarks.
After the creation of the payment response with receipt data, it is pushed into the Kafka topic “egov.collection.migration-batch”. The persister inserts the payment data into tables - egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial, egcl_billaccountdetail.
Indexer config for the legacy data index and new payments.
https://github.com/egovernments/configs/blob/master/egov-indexer/payment-indexer.yml
persister config -
These need to get promoted before initiating the migration process. Migration happens through an API call, add role-actions based on your requirement. Otherwise, port-forwarding will work.
Endpoint: /collection-services/payments/_migrate?batchSize=100&offset=
Body: { "RequestInfo": { "apiId": "Rainmaker", "action": "", "did": 1, "key": "", "msgId": "20170310130900|en_IN", "ts": 0, "ver": ".01", "authToken": "a6ad2a1b-821c-4688-a70e-4322f6c34e54" } }
In case of any failure and restarting migration, take the value of offset and tenantId printed in the logs and resume the migration process.
/collection-services/payments/_migrate?batchSize=100&offset=200&tenantId='pb.tenantId'
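Putting the endpoint and body together, a resumed migration call might look like this sketch (host and auth token are placeholders):

```
curl -X POST '{host}/collection-services/payments/_migrate?batchSize=100&offset=200&tenantId=pb.tenantId' \
  -H 'Content-Type: application/json' \
  -d '{"RequestInfo":{"apiId":"Rainmaker","ver":".01","ts":0,"msgId":"20170310130900|en_IN","authToken":"<auth-token>"}}'
```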
Collection-service build:- collection-services-db:9-COLLECTION_MIGRATION-e9701c4
DIGIT is India's largest open-source platform for Urban Governance. It provides API-based access to government functions enabling the government to provide facilities via integration with relevant service players. This document provides the details of how system integrators enable bill collection facilities to customers using DIGIT as the governance platform. It outlines the integration approach with Billing and Collections services to enable fetching bill dues to citizens and recording their payments into the system.
DIGIT is completely API driven and allows for data exchange with disparate systems using REST API calls. Most functional APIs are protected resources that can be accessed after proper authentication with the platform. The platform also checks for the right level of access for given credentials. A bill collection flow -
Authenticate with DIGIT
Get citizen bill using a service-specific query
Record the payment details against the bill
Optional - Get payment API to fetch the receipt details
The in-field team of the system integrator makes the calls to the integrator's own system (or a standard system like BBPS). Integration with DIGIT follows a server-to-server approach where the backend system of the integrator makes these calls to the DIGIT platform as per requirement. The diagram below depicts the high-level flow of calls between on-field devices like PoS to the integrator backend (Integrator System) and from the backend of the DIGIT integrator to DIGIT (DIGIT Platform).
Note: The process of calling payment API results in a receipt creation.
DIGIT uses Swagger 2.0 as its API standard and all its APIs are documented in Swagger. Wherever needed this document provides a link to our API documentation online. An example of typical request/response snippets necessary for integration is provided below in the respective sections.
DIGIT is a multi-tenanted system - hence all APIs in DIGIT expect a tenantid, passed either in the query param or the RequestBody (please refer to the detailed API documentation as indicated in the sections below). The tenantid represents the modular operating unit for the operation of an API; e.g. in a municipal governance use case, a tenantid represents one ULB. Your platform contact will help you access the configured list for your use case.
Authentication API also expects tenantid (your platform contact will help you identify the one to use). Based on the role as an integrator the OAUTH token in response can be used for unit/ULB level tenants in subsequent API calls (meaning you may not need one authentication per unit/ULB level tenant).
Authentication
To ensure data privacy and security, transactional APIs in DIGIT are protected under authentication. System integrators are requested to contact the respective state authority to get the necessary OAUTH tokens required to access the APIs.
Note: Apart from the userid/password, the system may enforce IP-based access control in which case the integrator may be required to share the IP or range of IPs from which the request will originate.
Use the API below to generate the access token based on the credentials provided. Given below is an example of the request and response. The OAuth token to be used from the response is highlighted in bold.
Request Snippet
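The original snippet is not reproduced in this export; below is a hedged sketch of the token request - the Basic client-authorization header and the tenantId/userType form parameters are assumptions to verify against your deployment:

```
curl -X POST 'https://<host>/user/oauth/token' \
  -H 'Authorization: Basic ZWdvdi11c2VyLWNsaWVudDo=' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=password&username=<username>&password=<password>&tenantId=<tenantid>&userType=CITIZEN&scope=read'
```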
Response Snippet
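Likewise, a sketch of the response shape following the standard OAuth 2.0 fields; all values are illustrative. The access_token value is the one to use in subsequent API calls:

```
{
  "access_token": "a6ad2a1b-821c-4688-a70e-4322f6c34e54",
  "token_type": "bearer",
  "refresh_token": "c2b1b2a0-7d2e-4f1a-9d3e-1a2b3c4d5e6f",
  "expires_in": 604800,
  "scope": "read"
}
```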
2. Fetching Bill
DIGIT allows the integrators to fetch the bills for citizens using the consumer number of the respective service (e.g. Water charges, Property Service, Trade License).
Note: Different services may have different notions of consumer number, e.g. for Water Charges consumer number signifies the "Connection number" while for Property it is the "Property Id".
For some services, DIGIT also provides the facility to fetch bills by mobile number.
Note: A bill search by mobile number may return multiple bills across services and may not return bills from services that do not support mobile-number-based search.
To support the partial payment use case each bill in the response of the fetch bill API indicates whether it allows partial payment and if yes, the minimum amount to be paid.
To fetch a bill from DIGIT, make sure that the OAuth token is generated as per the Authentication section above. Post that use the following API to fetch the bill -
Choose Billing Service from the dropdown.
Go to the Bill section of BillingService.
Go to the Bill tab.
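A hedged sketch of the fetch-bill call; the /billing-service/bill/v2/_fetchbill path version and parameter names are assumptions to verify against the Swagger contract referenced above:

```
curl -X POST '{host}/billing-service/bill/v2/_fetchbill?tenantId=pb.<city>&consumerCode=<consumer-number>&businessService=WS' \
  -H 'Content-Type: application/json' \
  -d '{"RequestInfo":{"authToken":"<oauth-token>"}}'
```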
3. Make Payment
Once the bill is fetched from the DIGIT system, the system integrator is expected to relay it back to the field device. The integrator is expected to initiate and collect the payment based on the government preference indicated in the bill (whether it can be partially paid and, if so, the minimum amount etc.) and the citizen's preference of payment instrument etc.
Once the payment is successfully done in the integrator's system, the integrator is expected to register the payment in DIGIT using the Payment Create API.
Note: A bill is considered unpaid/partially paid by DIGIT till appropriate receipts are created using this API - which means that a subsequent fetch of the bill, till this API is called, returns the original bill
DIGIT expects a receipt (result of calling payment API) to be created against the bill number returned in the fetch bill API.
Note: A receipt needs to be created for each bill. Therefore, if a total payment represents multiple bills - one receipt creation per bill is expected (DIGIT supports multiple receipt creation in a single call).
To create a receipt in DIGIT, make sure that the OAuth token is generated as per the Authentication section above. Post that use the following API to create the receipt -
Choose Collection Service from the dropdown.
Go to Payment.
Go to Make Payment.
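A hedged sketch of the payment create call; the field names follow the payment structure described elsewhere in this document, the wrapping Payment object is an assumption, and all values are placeholders:

```
curl -X POST '{host}/collection-services/payments/_create' \
  -H 'Content-Type: application/json' \
  -d '{
        "RequestInfo": { "authToken": "<oauth-token>" },
        "Payment": {
          "tenantId": "pb.<city>",
          "paymentMode": "CASH",
          "paidBy": "<payer name>",
          "mobileNumber": "<payer mobile>",
          "totalAmountPaid": 2500,
          "paymentDetails": [
            {
              "billId": "<bill id from the fetch bill response>",
              "businessService": "WS",
              "totalAmountPaid": 2500
            }
          ]
        }
      }'
```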
This specifies the migration steps which are specific to the payment index.
Add index name dss-payment_v2 as below:
In Kibana dev tools, apply the below command
Note: This name should be the value present in the ingest property es.index.name (the mapping file mapping.json is attached below).
The ingest pipeline application properties contain es.direct.push, which is supposed to be set to true for testing.
PUT dss-payment_v2
{} // add mapping file content here; mapping.json as attached below

Property Name | Value | Description |
---|---|---|
es.direct.push | true | The transformed data is pushed directly to the ES index. |
es.direct.push | false | The transformed data lands on the egov-dss-ingest-enriched topic. |

Method | End Point | Body |
---|---|---|
POST | {host}/dashboard-ingest/ingest/migrate/paymentsindex-v1/v2 | {"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}} |

CURL:
curl -X POST https://dev.digit.org/dashboard-ingest/ingest/migrate/paymentsindex-v1/v2 -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: d83fc136-116d-265f-3b83-ea41e3d5bb57' -d '{"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}}'
Note: After migration, ensure dss-payment_v2 data has been populated and is available.
In Kibana dev tools verify using the below command
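The command itself is not reproduced in this export; a standard Elasticsearch count query such as the sketch below can be used to confirm that documents are present in the index:

```
GET dss-payment_v2/_count
```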
Objective: The Reap Benefit system is one of the vendors that provide chatbot services, using Turn as the backend service to communicate with citizens through the chatbot. As part of the requirement, a complaint needs to be created in the DIGIT platform whenever citizens raise a complaint through the Reap Benefit chatbot.
The turn-io-adapter service is a wrapper that transforms the Reap Benefit request format into the DIGIT PGR request format. This service has a transform API that constructs the required PGR request from the request message sent by the Reap Benefit system. The Reap Benefit system consumes the transform API to communicate with the DIGIT PGR module.
In this process, once a complaint is created, a WhatsApp message with a tracking link is sent to the citizen. Whenever some action is taken on the complaint by ULB employees, a WhatsApp message is sent to the citizen.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Rainmaker-PGR service is running
Complaints are generated on the DIGIT platform using the Reap Benefit system chatbot.
Messages are sent to citizen through WhatsApp when employees perform some action on the complaint.
Deploy the following builds
rainmaker-pgr-db:v1.1.3-bb2961cf-13
turn-io-adapter:v1.1.3-bb2961cf-19
egov-searcher:v1.1.3-d43c421c-5
nlp-engine:v1.0.0-c3889d14-10
Note: Please refer to the following url for nlp-engine technical documentation - NLP Engine Service
Frontend commits
1) turn-io-adapter: "http://turn-io-adapter.egov:8080/" (In service host configuration)
2) Add /turn-io-adapter/_transform in egov-mixed-mode-endpoints-whitelist configuration
3) Once you are done with the 2nd step, restart the zuul pod
Add the name field in the complaint category master in PGR. Link for the data -
Push the localisation data for all the locality data with module as rainmaker-chatbot. Sample localisation object -
{ "code": "SC1", "message": "Azad Nagar - WARD_1", "module": "rainmaker-chatbot", "locale": "en_IN" }
NA
This is the sample request for the _transform API to create a complaint -
Turn-io-adapter is integrated with the Rainmaker-PGR application. The turn-io-adapter application internally invokes the rainmaker-pgr service to generate complaints.
The application calls turn-io-adapter/_transform to generate the complaint and takes the data from PGR.
A decision support system (DSS) is a composite tool that collects, organizes and analyzes business data to facilitate quality decision-making for management, operations and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives and business models. DSS analysis helps organizations identify and solve problems, and make decisions.
The Swagger API for the backend is below
Swagger API for ingest
The target upload file template is given below -
API | Action ID | Roles |
---|---|---|
/localization/messages/v1/_search | 1531 | SUPERUSER, EMPLOYEE, CITIZEN, GRO, DGRO |
/egov-mdms-service/v1/_search | 954 | LOA_CREATOR, SUPERUSER, WO_CREATOR, AE_CREATOR, WORKS_MASTER_CREATOR |
/dashboard-analytics/dashboard/getDashboardConfig/propertytax | 1892 | STADMIN |
/dashboard-analytics/dashboard/getDashboardConfig/home | 1889 | STADMIN |
/dashboard-analytics/dashboard/getDashboardConfig/tradelicense | 1893 | STADMIN |
/dashboard-analytics/dashboard/getDashboardConfig/pgr | 1894 | STADMIN |
/dashboard-analytics/dashboard/getDashboardConfig/ws | 2010 | STADMIN |
/dashboard-analytics/dashboard/getChartV2 | 1890 | STADMIN, EMPLOYEE |
Parameter Name | Description |
---|---|
topic | Holds the name of the Kafka topic on which the data is being received |
dataContext | Context name which needs to be set for further actions in the pipeline |
dataContextVersion | Version of the data structure, set here as there might be differently structured data at different points in time |

Parameter Name | Description |
---|---|
id | Unique identifier for the configuration within the configuration document |
businessType | Defines the kind of domain / service the data is related to. Based on this business type, the query and enhancements are decided |
indexName | Based on the business type, the index name defines which index has to be queried to get the enhancements done |
query | The query to execute to get the domain level object is defined here |
targetReferences, sourceReference | Fields which are variables used to get the domain level objects are defined here. The variables, and where the values have to be picked from, are documented here |
Parameter Name | Description |
---|---|
Key (e.g: totalApplication) | This is the Visualization Code. This key will be referred to in further visualization configurations, and it is the key the client application uses to indicate which visualization is needed for display. |
chartName | The name of the chart, used as a label on the dashboard. The name of the chart will be a detailed name. In this configuration, the name of the chart will be the localization code used by the client side. |
queries | Some visualizations are derived from a specific data source, while others are derived from different data sources combined together to get a meaningful representation. The aggregation queries used to fetch the right data in the right aggregated format are configured here. |
queries.module | The module / domain level on which the query should be applied. Property Tax is PT, Trade License is TL. If the query is applied across all modules, the module has to be defined as COMMON. |
queries.indexName | The name of the index upon which the query has to be executed is configured here. |
queries.aggrQuery | The aggregation query itself is added here. Based on the module and the index name specified, this query is attached to the filter part of the complete search request and then executed against that index. |
queries.requestQueryMap | The client request carries certain fields which are to be filtered. The parameters specified in the client request are different from the parameters in each of these indexed documents. In order to map the parameters of the request to the parameters of the ElasticSearch document, this mapping is maintained. |
queries.dateRefField | Each of these modules has separate indexes, and all of them have their own date fields. When a date filter is applied against these visualizations, each of them has to apply it against its own date reference field. This configuration parameter maintains the date field of each index. |
chartType | As there are different types of visualizations, this field defines the type of chart / visualization that this data should be used to represent. Available chart types: metric (the aggregated amount/value for records filtered by the aggregate ES query), pie (aggregated data on grouping; can be used to represent any line graph, bar graph, pie chart or donut), line (data representation on date histograms or date groupings), perform (represents grouped data performance-wise), table (a form of plots and values with headers as grouped on, and a list of key-value pairs), xtable (an advanced table with the additional capability of adding header values dynamically). |
valueType | The values which are sent to plot might be a percentage, sometimes an amount and sometimes just a count. In order to represent them and differentiate numbers from amounts and percentages, this field indicates the type of value that this visualization will be sending. |
action | Some visualizations are not just aggregations on a data source; in some cases a post-aggregation computation has to be done. For example, in the case of Top 3 Performing ULBs, the target and total collection are obtained and then the percentage is calculated. In such cases, the action to be performed on the data obtained is defined in this parameter. |
documentType | The type of document upon which the query has to be executed is defined here. |
drillChart | If there is a drill-down on the visualization, the code of the drill-down visualization is added here. This is used by the client service to manage drill-downs. |
aggregationPaths | All the queries have aggregation names in them. In order to fetch the value out of each aggregation response, the name of the aggregation in the query is used. These aggregation paths hold the names of the aggregations in the query. |
_comment | In order to display information on the "i" symbol of each visualization, visualization information is maintained in this field. |
Parameter Name | Description |
---|---|
name | Name of the dashboard which has to be displayed as the page heading |
id | Unique identifier of the dashboard, used later for querying each of the visualizations |
isActive | Active indicator which can be used to quickly disable a dashboard if required |
style | Style of the dashboard - whether it should be a linear one or a tabbed one |
visualizations | The list of visualizations that are to be displayed in the dashboard |
visualizations.row | The row identifier for each visualization |
| The name of an individual visualization |
visualizations.vizArray | The list of charts within the visualization |
| A group of charts is given an id to have a placement on the dashboard; this unique identifier is maintained in this field |
| A group of charts is given a name that can be displayed for the group on the dashboard in that row |
visualizations.vizArray.dimensions | Each group of charts is given a dimension based on which it is placed in a specific row in the dashboard |
visualizations.vizArray.vizType | As there are multiple charts grouped into one visualization, the type of visualization needs to be specified to indicate to the client application what goes inside each visualization and the charts inside it. vizTypes used for other dashboards: metric-collection (a single metric chart or a group of them), performing-metric (the perform chart type), chart (pie, donut, table, bar, horizontal bar, line). vizTypes used for the home page: collection (full-width UI style), module (specific-width UI style). |
visualizations.vizArray.noUnit | The value types of these charts are different - some are numbers, some are amounts, some are percentages. In the case of amounts, there is a requirement to display in Lakhs, Crores and Units. This boolean indicates to the client application whether to display these units or not |
visualizations.vizArray.isCollapsible | Boolean value indicating whether the card/visualisation is collapsible |
visualizations.vizArray.ref | This object contains url (mandatory), logoUrl (optional) and type (optional) |
visualizations.vizArray.charts | The list of individual charts inside a visualization group is maintained in this array list |
| Individual chart number identifier to indicate the uniqueness of charts |
| Name of the chart, which can be a header label for charts within a visualization |
visualizations.vizArray.charts.code | Code of the chart - the indicator that has to be sent to the server side to get the data for representing the visualization |
visualizations.vizArray.charts.chartType | Type of chart which has to represent the data result set that is obtained: bar, horizontalBar, line, donut, pie, metric, table |
visualizations.vizArray.charts.filters | Filters that can be applied to the visualization, and the fields which are filterable |
visualizations.vizArray.charts.headers | In some cases there are headers, which can be a title or additional information for the chart data which gets represented. This field is kept open to accommodate information which can be sent along with the chart data itself |
Parameter Name | Description |
---|---|
roles | List of roles that are available in the system |
roles._comment | Role description and a comment on why this role has an entry in this configuration, summarising what is to be enabled |
roles.roleId | Unique identifier of the role for which access is being given |
roles.roleName | Name of the role for which the access is being given |
roles.isSuper | Boolean flag which defines whether the role is a super user or not |
roles.orgId | Organization to which the role belongs |
roles.dashboards | List of dashboards that are enabled for the role |
| Name of the individual dashboard which has been enabled |
| Identifier of the individual dashboard which has been enabled |
Field From Payments | Field from Receipts | Remark |
---|---|---|
Payments.Id | --- | Set as UUID |
Payments.tenantId | Receipt.tenantId | |
Payments.totalDue | --- | Total due for payment is calculated by subtracting totalAmount from bill and amount from Receipt.instrument |
Payments.totalAmountPaid | Receipt.instrument.amount | |
Payments.transactionNumber | Receipt.instrument.transactionNumber | |
Payments.transactionDate | Receipt.receiptDate | |
Payments.paymentMode | Receipt.instrument.instrumentType.name | |
Payments.instrumentDate | Receipt.instrument.instrumentDate | |
Payments.instrumentNumber | Receipt.instrument.instrumentNumber | |
Payments.instrumentStatus | Receipt.instrument.instrumentStatus | |
Payments.ifscCode | Receipt.instrument.ifscCode | |
Payments.additionalDetails | Receipt.Bill.additionalDetails | |
Payments.paidBy | Receipt.Bill.paidBy | |
Payments.mobileNumber | Receipt.Bill.mobileNumber | If mobileNumber from Receipt.bill is null, it has to be set with some value, e.g. "NA". Note: Payments.mobileNumber should not be null |
Payments.payerName | Receipt.Bill.payerName | |
Payments.payerAddress | Receipt.Bill.payerAddress | |
Payments.payerEmail | Receipt.Bill.payerEmail | |
Payments.payerId | Receipt.Bill.payerId | |
Payments.paymentStatus | --- | Based on paymentMode from Payment, the paymentStatus is set. If paymentMode is ONLINE or CARD then paymentStatus is set to DEPOSITED, otherwise it is set to NEW |
Payments.auditDetails.createdBy | Receipt.auditDetails.createdBy | |
Payments.auditDetails.createdTime | Receipt.auditDetails.createdTime | |
Payments.auditDetails.lastModifiedBy | Receipt.auditDetails.lastModifiedBy | |
Payments.auditDetails.lastModifiedTime | Receipt.auditDetails.lastModifiedTime | |
Payments.paymentDetails.Id | --- | Set as UUID |
Payments.paymentDetails.tenantId | Receipt.tenantId | |
Payments.paymentDetails.totalDue | --- | Total due for paymentDetails is calculated by subtracting totalAmount from bill and amount from Receipt.instrument |
Payments.paymentDetails.totalAmountPaid | Receipt.instrument.amount | |
Payments.paymentDetails.receiptNumber | Receipt.receiptNumber | |
Payments.paymentDetails.manualReceiptNumber | Receipt.Bill.billDetails.manualReceiptNumber | |
Payments.paymentDetails.manualReceiptDate | Receipt.Bill.billDetails.manualReceiptDate | |
Payments.paymentDetails.receiptDate | Receipt.receiptDate | |
Payments.paymentDetails.receiptType | Receipt.Bill.billDetails.receiptType | |
Payments.paymentDetails.businessService | Receipt.Bill.billDetails.businessService | |
Payments.paymentDetails.additionalDetail | Receipt.Bill.additionalDetail | |
Payments.paymentDetails.auditDetail | --- | auditDetail for paymentDetail is the same as the payment auditDetail |
Payments.paymentDetails.billId | --- | billId is extracted based on id in the egbs_billdetail_v1 table, where id in egbs_billdetail_v1 is Receipt.Bill.billDetails.billNumber |
Payments.paymentDetails.bill | --- | Based on the billId, tenantId and service, the bill is searched by calling the Billing service API and set to Payments.paymentDetails.bill |
Payments.paymentDetails.bill.billDetails.amountPaid | Receipt.instrument.amount | For each amountPaid in billDetails, its value is set from Receipt.instrument.amount |
The Collection service serves as a revenue collection platform for all the billing systems, supporting cash, cheque, demand draft, and swipe machines. It enables payment for all services provided by the eGov platform at a single point for the citizen, and counter collection in municipal offices alike.
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic, etc.
The following services should be up and running:
egov-localization
egov-mdms
egov-idgen
egov-url-shortening
billing-service
Allows citizens to create a payment.
Allows employees to create a payment for the citizen indirectly.
Provides facilities to capture partial and advance payments based on configs.
Allows payment cancellation to help with scenarios of bad cheques and other failed payment scenarios.
Integrates with the billing service for demand back-update of payment.
Deploy the latest version of the collection-services docker builds.
The MDMS data configuration uses the same data updated by Billing-Service
Following are the properties in the application.properties
Collection service can be integrated with any organization or system that wants a payment system to keep track of its payments. Organizations can customize part of the application or its functionality based on their requirements.
Easy payments and tracking of payments.
Configurable functionalities according to client requirements.
Customers can create a payment using the /payments/_create endpoint.
Actors on the system can keep track of payments using the /payments/_search endpoint.
Once a payment is done, if it encounters a technical issue outside of the system, it can be cancelled with /payments/_workflow.
For employees to access the payments API, the respective module name should be appended after the payment API path - /payments/PT/_workflow - here PT refers to the property module.
Port-forward the collection service to the current environment where the IFSC code bank details data is to be migrated. Sample command - kubectl port-forward collection-services-76b775f976-xcbt2 8055:8080 -n egov
Import the postman collection from the API list which refers to /preexistpayments/_update and run it against the same localhost to which we port-forwarded using the above command.
Expected result: In the EGCL_PAYMET table, for records where IFSCODE data is present, EGCL_PAYMET.ADDITIONALDETAILS is updated with the bank details.
Refer to the MDMS data config from here.
Ex: For IFSCCODE UCBA0003047, the response from the API is updated in EGCL_PAYMET.ADDITIONALDETAILS as {"bankDetails": {"UPI": true, "BANK": "UCO Bank", "CITY": "BHIKHI", "IFSC": "UCBA0003047", "IMPS": true, "MICR": "151028452", "NEFT": true, "RTGS": true, "STATE": "PUNJAB", "SWIFT": "", "BRANCH": "BHIKHI", "CENTRE": "MANSA", "ADDRESS": "ADJOINING HP PETROL PUMP MANSA ROADDISTRICT MANSA", "BANKCODE": "UCBA", "DISTRICT": "MANSA", "CONTACT": "+918288822548"}}
Refer to the integration section for details and explanation.
Property | Value | Remarks |
---|---|---|
collection.receipts.search.paginate | true/false | When set to true, receipt search results are returned in pages containing a fixed number of records. |
| TRUE/FALSE | Make module name in URI path mandatory |
collection.receipts.search.default.size | Certain number (say 30) | Returns 30 records at a time; the next 30 results are on the next page. |
collection.is.user.create.enabled | true/false | When set to true, enables user creation along with receipt creation |
receiptnumber.idname | | This property is used for creating the receipt number using the ID-GEN service |
receiptnumber.servicebased | true/false | If servicebased is set to false, the default state-level format is used for the receipt number; if set to true, the receipt number format has to be mentioned in MDMS |
receiptnumber.state.level.format | [cy:MM]/[fy:yyyy-yy]/[SEQ_COLL_RCPT_NUM] | Default state-level format for the receipt number. |
collection.payments.search.paginate | true/false | When set to true, payment search results are returned in pages containing a fixed number of records. |
egov.collection.payment-create | | The Kafka topic to which the record is pushed when a payment is created. |
egov.collection.payment-cancel | | The Kafka topic to which the record is pushed when a payment is cancelled. |
egov.collection.payment-update | | The Kafka topic to which the record is pushed when a payment is updated. |
Billing-service |
Id-Gen service |
url-shortening |
MDMS |
/payments/_create |
/payments/_update |
/payments/_workflow |
/preexistpayments/_update |