Core Services is one of the key DIGIT components. Browse through this section to learn more about the key configuration and integration details of these core services.
One of the applications in the DIGIT core group of services, the MDMS service aims to reduce the time developers spend writing code to store and fetch master data (primary data needed for module functionality) that has no business logic associated with it. Instead of writing APIs and creating tables in every service to store and retrieve data that seldom changes, the MDMS service keeps master data in a single location for all modules and serves it on demand with no more than three lines of configuration.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of operating on JSON data is an added advantage for understanding the service.
Adds master data for usage without the need to create master data APIs in every module.
Reads data directly from Git, with no dependency on any database service.
| Environment Variables | Description |
|---|---|
| egov.mdms.conf.path | The default value of the folder where master data files are stored |
| masters.config.url | The default value of the file URL which contains master-config values |
Deploy the latest version of the MDMS service.
Add the conf path for the file location.
Add the master config JSON path.
The MDMS service provides ease of access to master data for any service.
No time is spent writing repetitive code that has no business logic.
To integrate, the host of egov-mdms-service should be overwritten in the helm chart.
egov-mdms-service/v1/_search should be added as the search endpoint for searching master data (an indicative request is sketched below).
The MDMS client from eGov snapshots should be added as a Maven dependency in pom.xml for ease of access, since it provides the MDMS request POJOs.
egov-mdms sample data
master-config.json
egov-mdms-service/v1/_search
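For reference, a minimal search request to egov-mdms-service/v1/_search might look like the sketch below; the tenant id, module name and master name are illustrative, not a prescribed configuration:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "MdmsCriteria": {
    "tenantId": "pb",
    "moduleDetails": [
      {
        "moduleName": "common-masters",
        "masterDetails": [ { "name": "Department" } ]
      }
    ]
  }
}
```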
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
This section contains the configuration documents related to the DIGIT service stack.
Click on the respective service link below to find its configuration details and additional information resources.
Workflows are a series of steps that move a process from one state to another through actions performed by different kinds of actors - humans, machines, time-based events etc. - to achieve a goal such as onboarding an employee, approving an application, or granting a resource. The egov-workflow-v2 service is a workflow engine which helps perform these operations seamlessly using a predefined configuration.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has workflow persister config path added in it
PSQL server is running and database is created to store workflow configuration and data
Always allows anyone with a role in the workflow state machine to view the workflow instances and comment on them.
On the creation of a workflow instance, it appears in the inbox of all employees that have roles that can perform any state-transitioning action in its current state.
Once an instance is assigned to an individual employee, it appears only in that employee's inbox, although point 1 still holds true: all others participating in the workflow can still search for it and act on it if they have the necessary actions available to them.
If the instance is assigned to a person who cannot perform any state-transitioning action, that person can still comment, upload documents, and assign it to anyone else.
Overall SLA: SLA for the complete processing of the application/Entity
State-level SLA: SLA for a particular state in the workflow
| Environment Variables | Description |
|---|---|
| egov.wf.default.offset | The default value of offset in search |
| egov.wf.default.limit | The default value of limit in search |
| egov.wf.max.limit | The maximum number of records returned in a search response |
| egov.wf.inbox.assignedonly | Boolean flag; if set to true, the default search returns only records assigned to the user, and if false, it returns all records based on the user's roles. (The default search is the search call made when no query params are sent; records are returned based on the RequestInfo of the call. It is used to show applications in the employee inbox.) |
| egov.wf.statelevel | Boolean flag set to true if a state-level workflow is required |
Deploy the latest version of the egov-workflow-v2 service.
Add the businessService persister yaml path in the persister configuration.
Add Role-Action mappings for the BusinessService APIs.
Overwrite the egov.wf.statelevel flag (true for state level, false for tenant level).
Create the businessService (workflow configuration) according to the product requirements (see the sketch after this list).
Add the Role-Action mapping for the /processInstance/_search API.
Add the workflow persister yaml path in the persister configuration.
For configuration details, please refer to the links in Reference Docs.
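As an illustration, a minimal businessService definition (posted to /businessservice/_create) might look like the sketch below. The service name, states, roles and SLA values are placeholders, not a product configuration; verify the exact contract against the API Swagger documentation:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "BusinessServices": [
    {
      "tenantId": "pb",
      "businessService": "SampleService",
      "business": "sample-module",
      "businessServiceSla": 432000000,
      "states": [
        {
          "state": null,
          "applicationStatus": null,
          "isStartState": true,
          "isTerminateState": false,
          "actions": [
            { "action": "APPLY", "nextState": "APPLIED", "roles": [ "CITIZEN" ] }
          ]
        },
        {
          "state": "APPLIED",
          "applicationStatus": "APPLIED",
          "sla": 86400000,
          "isStartState": false,
          "isTerminateState": false,
          "docUploadRequired": false,
          "actions": [
            { "action": "APPROVE", "nextState": "APPROVED", "roles": [ "APPROVER" ] }
          ]
        },
        {
          "state": "APPROVED",
          "applicationStatus": "APPROVED",
          "isStartState": false,
          "isTerminateState": true,
          "actions": []
        }
      ]
    }
  ]
}
```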
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations, making them more efficient and increasing accountability.
Role-based workflow
An easy way of writing rules
File movement within workflow roles
To integrate, the host of egov-workflow-v2 should be overwritten in the helm chart.
/process/_search should be added as the search endpoint for searching workflow process instance objects.
/process/_transition should be added to perform an action on an application. (It is for internal use in modules and should not be added in the Role-Action mapping.)
The workflow configuration can be fetched by calling the _search API to check whether data can be updated in the current state. An indicative transition request is sketched below.
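A minimal /process/_transition request might look like the sketch below; the values are placeholders, and the assignes field name follows the workflow v2 contract as we understand it, so check it against the API Swagger documentation:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "ProcessInstances": [
    {
      "tenantId": "pb.amritsar",
      "businessService": "SampleService",
      "moduleName": "sample-module",
      "businessId": "SAMPLE-2021-000001",
      "action": "APPROVE",
      "comment": "Verified and approved",
      "assignes": [ { "uuid": "<employee-uuid>" } ]
    }
  ]
}
```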
Reference Docs:
Configuring Workflows For New Product/Entity
Setting Up Workflows
API Swagger Documentation
Migration to Workflow 2.0
API List:
/businessservice/_create
/businessservice/_update
/businessservice/_search
/process/_transition
/process/_search
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
The user service is responsible for user data management and provides the functionality to log into and out of the DIGIT system.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
Encryption and MDMS services are running
PSQL server is running and the database is created
Redis is running
Store, update and search user data
Provide authentication
Provide login and logout functionality for the DIGIT platform
Store user data PII in encrypted form
Set up the latest version of egov-enc-service and egov-mdms-service.
Deploy the latest version of the egov-user service.
Add Role-Action mappings for the APIs.
The following application properties in the user service are configurable.
User data management and the functionality to log into and out of the DIGIT system using OTP and password.
Provides the following functionality to citizen and employee type users:
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping (single ULB to multiple roles)
Enables employees to log into the DIGIT system with a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, the host of egov-user should be overwritten in the helm chart.
Use the /citizen/_create and /users/_createnovalidate endpoints for creating users in the system.
Use the /v1/_search and /_search endpoints to search users in the system depending on various search parameters.
Use /profile/_update for partial updates and /users/_updatenovalidate for updates.
Use /password/nologin/_update for OTP-based password reset and /password/_update for logged-in user password reset.
Use /user/oauth/token for generating a token, /_logout for logout and /_details for getting user information from the token. An indicative search request is sketched below.
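For illustration, a minimal /v1/_search request might look like the sketch below; the criteria fields shown are indicative and should be checked against the API contract:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "tenantId": "pb",
  "userType": "CITIZEN",
  "mobileNumber": "9999999999"
}
```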
DIGIT is an API-based platform where each API denotes a DIGIT resource. The primary job of the Access Control Service (ACS) is to authorise end users based on their roles and provide access to DIGIT platform resources. Access control functionality works based on the following points:
Actions: Actions are events performed by a user. This can be an API endpoint or a frontend event. Actions are defined as an MDMS master.
Roles: Roles are assigned to users; a user can hold multiple roles. Roles are defined in MDMS masters.
Role-Action: Role-actions are mappings between actions and roles. Based on the role-action mapping, the access control service identifies the actions applicable to a role.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
MDMS service is up and running
Serves the applicable actions for a user based on the user's roles (e.g. to render the menu tree).
On each action performed by a user, access control looks at the user's roles and validates the action's mapping with those roles.
Supports tenant-level role-actions. For instance, an employee from Amritsar can have the role of APPROVER for another ULB like Jalandhar and hence will be authorised to act as APPROVER in Jalandhar.
Deploy the latest version of Access Control Service
Deploy MDMS service to fetch the Role Action Mappings
Define the roles
Add the Actions (URL)
Add the role action mapping
(The details about the fields in the configuration can be found in the swagger contract)
Any microservice which requires authorisation can leverage the functionality provided by the access control service.
Any new microservice added to the platform need not worry about authorisation. It can simply add its role-action mappings in the master data, and the access control service performs authorisation whenever the microservice's API is called.
To integrate with the access control service, the role-action mappings have to be configured (added) in the MDMS service; an indicative sketch follows.
The service needs to call the /actions/_authorize API of the access control service to check the authorisation of any request.
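For illustration, a role-action mapping entry in MDMS data commonly looks like the sketch below; the module name (ACCESSCONTROL-ROLEACTIONS) and field names are indicative and should be verified against your MDMS repository:

```json
{
  "tenantId": "pb",
  "moduleName": "ACCESSCONTROL-ROLEACTIONS",
  "roleactions": [
    { "rolecode": "APPROVER", "actionid": 1234, "actioncode": "", "tenantId": "pb" }
  ]
}
```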
A core application which provides location details of the tenant for which the services are being provided.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
PSQL server is running and database is created
Knowledge of egov-mdms service
egov-mdms service is running and all the required MDMS masters are loaded in it
The location information is also known as the boundary data of the ULB.
Boundary data can belong to different hierarchies - the ADMIN and ELECTION hierarchies defined by the administrators, and the REVENUE hierarchy defined by the revenue department.
The election hierarchy divides locations into several types like zone, election ward, block, street and locality. The revenue hierarchy divides locations into zone, ward, block and locality.
The model which defines a locality such as a zone or ward is the boundary object, which contains information like name, lat, long, and parent or children boundaries if any. Boundaries nest within each other in a hierarchy: a zone contains wards, a ward contains blocks, and a block contains localities. The order in which boundaries nest can differ across tenants.
Add/update the MDMS master file which contains the boundary data of the ULBs.
Add Role-Action mappings for the egov-location APIs.
Deploy/redeploy the latest version of the egov-mdms service.
Fill the environment variables (listed in the table further below) in egov-location with proper values.
Deploy the latest version of the egov-location service.
The boundary data has been moved to MDMS from the master tables in the DB. The location service fetches the JSON from MDMS and parses it into the boundary object structure mentioned above. A sample master would look like the sketch below.
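The sketch below is assembled from the attributes described in the table further below; the values are illustrative:

```json
{
  "tenantId": "pb.amritsar",
  "moduleName": "egov-location",
  "TenantBoundary": [
    {
      "hierarchyType": { "code": "ADMIN", "name": "ADMIN" },
      "boundary": {
        "id": "1",
        "boundaryNum": 1,
        "name": "Amritsar",
        "localname": "Amritsar",
        "longitude": 74.8723,
        "latitude": 31.634,
        "label": "City",
        "code": "pb.amritsar",
        "children": [
          {
            "id": "2",
            "boundaryNum": 1,
            "name": "Zone 1",
            "localname": "Zone 1",
            "label": "Zone",
            "code": "Z1",
            "children": []
          }
        ]
      }
    }
  ]
}
```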
The egov-location API’s can be used by any module which needs to store the location details of the tenant.
Get the boundary details based on boundary type and hierarchy type within the tenant boundary structure.
Get the geographical boundaries by providing appropriate GeoJson.
Get the tenant list in the given latitude and longitude.
To integrate, the host of egov-location should be overwritten in the helm chart.
/boundarys/_search should be added as the search endpoint for searching boundary details based on tenant id, boundary type, hierarchy type etc.
/geography/_search should be added as the search endpoint. This method handles all requests related to geographical boundaries, returning the appropriate GeoJSON and other associated data based on tenantId, lat/long etc.
/tenant/_search should be added as the search endpoint. This method tries to resolve a given lat/long to a corresponding tenant, provided there exists a mapping between the reverse-geocoded city and the tenant.
The MDMS tenant boundary master file should be loaded in the MDMS service.
In the existing version of the chatbot, for the PGR complaint creation feature, the user has to select his/her city from a drop-down menu by visiting the mSeva website. This significantly reduces user convenience, as the user is required to constantly switch pages. To overcome this inconvenience, the nlp-engine service is used. The service has an algorithm that uses fuzzy matching and pattern recognition to recognise the city provided by the user as input. Based on the user input, the cities having the highest match ratio with the input are returned as the output list. A list comprising all the city names in English, Punjabi and Hindi is used as the reference data for this service.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Python.
egov-mdms service is running and all the data related to the service are added in the mdms repository.
egov-running service is running.
Provides city fuzzy search feature which returns the list of cities having the highest match ratio with the input.
City fuzzy search can support input data in English, Hindi and Punjabi language.
Provides locality fuzzy search feature which returns the list of localities having the highest match ratio with the input.
Deploy the latest version of the nlp-engine service.
Whitelist the city and locality fuzzy search APIs.
The nlp-engine service is used to locate the user city and locality by using fuzzy string matching and pattern recognition.
Currently integrated into the chatbots for locating user city and locality for complaint creation use case.
This feature functionality can be extended for the other entities and can be used for a fuzzy search of those different entities.
To integrate, the host of the nlp-engine service module should be overwritten in the helm chart.
/nlp-engine/fuzzy/city should be added as the fuzzy search endpoint for city search.
/nlp-engine/fuzzy/locality should be added as the fuzzy search endpoint for locality search.
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
The URL shortening service is used to shorten long URLs. There may be a requirement to avoid sending very long URLs to the user via SMS, WhatsApp etc.; this service compresses the URL.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Compresses long URLs.
The converted short URL contains an id, which is used by this service to identify and fetch the original long URL.
Deploy the latest version of the URL shortening service.
The service receives long URLs and converts them to shorter URLs. A shortened URL contains a path to the redirect endpoint described next; when a user clicks on the shortened URL, the user is redirected to the long URL.
The redirect endpoint uses the id embedded in the shortened URL to look up the long URL, and responds by redirecting the user to it. An indicative shortening request is sketched below.
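For illustration, the shortening request is commonly a simple POST carrying the long URL; the body shape below is indicative and should be checked against the Swagger API contract, and the URL itself is a placeholder:

```json
{
  "url": "https://<host>/citizen/payment-success?consumerCode=PT-107-001&tenantId=pb.amritsar"
}
```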
The objective of the PDF generation service is to bulk-generate PDFs as per requirements.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Install npm.
Kafka server is up and running.
egov-persister service is running and has pdf generation persister config path added in it.
PSQL server is running and the database is created to store filestore id and job id of generated pdf.
Provide a common framework to generate PDFs.
Provide flexibility to customise the PDFs as per requirements.
Provide functionality to add images and QR codes in PDFs.
Provide functionality to generate PDFs in bulk.
Provide functionality to specify the maximum number of records to be written in one PDF.
Create a data config and a format config for a PDF according to the product requirements.
Add the data config and format config files in the PDF configuration.
Add the file paths of the data and format configs in the environment yml file.
Deploy the latest version of pdf-service in the particular environment.
The PDF configuration can be used by any module which needs to present particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk.
Avoids regeneration.
Supports QR codes and images.
Functionality to specify the maximum number of records to be written in one PDF.
Uploads the generated PDF to the filestore and returns the filestore id for easy access.
To download and print the required PDF, the _create API has to be called with the required key (for integration with the UI, please refer to the links in Reference Docs); an indicative request is sketched below.
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
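As we understand the contract, the call takes the shape POST /pdf-service/v1/_create?key=<config-key>&tenantId=pb, where key identifies the data/format config pair; verify this against the Swagger documentation. The top-level object in the body ("Payments" here) is hypothetical - it is whatever object your data config's JSON paths map from:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "Payments": [
    { "tenantId": "pb.amritsar", "totalAmountPaid": 1500 }
  ]
}
```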
Please refer to the API contract for the egov-location service to understand the structure of the APIs and to get a visualisation of all the internal APIs.
Add the MDMS configs required for the nlp-engine service and restart the MDMS service.
PDFMake - for generating PDFs.
Mustache.js - the templating engine used to populate the format defined in the format config, from the request JSON, based on the mappings defined in the data config.
For configuration details, please refer to the reference documents below.
| Environment Variables | Description |
|---|---|
| egov.services.egov_mdms.hostname | Hostname of the MDMS service |
| egov.services.egov_mdms.searchpath | MDMS search URL |
| egov.service.egov.mdms.moduleName | MDMS module which contains the boundary master |
| egov.service.egov.mdms.masterName | MDMS master file which contains the boundary details |
| Attribute Name | Description |
|---|---|
| tenantId | The tenantId (ULB code) for which the boundary data configuration is defined |
| moduleName | The name of the module where the TenantBoundary master is present |
| TenantBoundary.hierarchyType.code | Unique code of the hierarchy type |
| TenantBoundary.hierarchyType.name | Unique name of the hierarchy type |
| TenantBoundary.boundary.id | Id of the boundary defined for a particular hierarchy |
| boundaryNum | Sequence number of the boundary attribute defined for the particular hierarchy |
| name | Name of the boundary, like Block 1, Zone 1, or the city name |
| localname | Local name of the boundary |
| longitude | Longitude of the boundary |
| latitude | Latitude of the boundary |
| label | Label of the boundary |
| code | Code of the boundary |
| children | Details of its sub-boundaries |
Reference Docs:
Local Setup

API List:
/boundarys/_search
/geography/_search
/tenant/_search
| Environment Variables | Description |
|---|---|
| MDMS_MODULE_NAME | The module name of the MDMS data required for nlp-engine |
| CITY_MASTER | The file name of the MDMS master file which contains the city names in various locales |
| CITY_LOCALE_MASTER | The file name of the MDMS master file which contains the tenantid of the cities present in the CityNames.json MDMS file |
| STATE_LEVEL_TENANTID | The state-level tenantid |

API List:
/nlp-engine/fuzzy/city
/nlp-engine/fuzzy/locality
| Environment Variable | Description |
|---|---|
| host.name | Host name to append in the short URL |
| db.persistance.enabled | Boolean flag; the short URL is stored in the database when this flag is set to TRUE |

Reference Docs:
Swagger API Contract
Local Setup
| Environment Variables | Description |
|---|---|
| MAX_NUMBER_PAGES | Maximum number of records to be written in one PDF |
| DATE_TIMEZONE | Date timezone which will be used to convert epoch timestamps into dates (DD/MM/YYYY) |
| DEFAULT_LOCALISATION_LOCALE | Default value of the localisation locale |
| DEFAULT_LOCALISATION_TENANT | Default value of the localisation tenant |
| DATA_CONFIG_URLS | File paths/URLs of the data configs |
| FORMAT_CONFIG_URLS | File paths/URLs of the format configs |

Reference Docs:
Customizing PDF Receipts & Certificates
Steps for Integration of PDF in UI for download and print PDF
API Swagger Documentation

API List:
pdf-service/v1/_create
pdf-service/v1/_createnosave
pdf-service/v1/_search
| Property | Value | Remarks |
|---|---|---|
| egov.user.search.default.size | 10 | Default search record limit |
| citizen.login.password.otp.enabled | true | Whether citizen login is OTP based |
| employee.login.password.otp.enabled | false | Whether employee login is OTP based |
| citizen.login.password.otp.fixed.value | 123456 | Fixed OTP for citizens |
| citizen.login.password.otp.fixed.enabled | false | Allow fixed OTP for citizens |
| otp.validation.register.mandatory | true | Whether OTP is compulsory for registration |
| access.token.validity.in.minutes | 10080 | Validity period of the access token |
| refresh.token.validity.in.minutes | 20160 | Validity period of the refresh token |
| default.password.expiry.in.days | 90 | Expiry of a password, in days |
| account.unlock.cool.down.period.minutes | 60 | Account unlock time |
| max.invalid.login.attempts.period.minutes | 30 | Window size for counting failed attempts before locking |
| max.invalid.login.attempts | 5 | Maximum failed login attempts before the account is locked |
| egov.state.level.tenant.id | pb | State-level tenant id |
API List:
/citizen/_create
/users/_createnovalidate
/_search
/v1/_search
/_details
/users/_updatenovalidate
/profile/_update
/password/_update
/password/nologin/_update
/_logout
/user/oauth/token

Reference Docs:
API Contract
This section provides technical details about business service setup, configuration, deployment, and API integration.
Reference Docs:
NLP Chatbot
Goal: To onboard developers onto the XState-Chatbot code base so that they can modify existing flows or create new ones.
This document sticks to explaining the chatbot's core features and does not dive into the use cases implemented by the chatbot. A separate document is dedicated to those.
NodeJS
PostgreSQL
Kafka (optional)
Build a chat flow to facilitate a user to interact with rainmaker modules
Link a chat flow with backend services
Deploy the latest version of xstate-chatbot
Configure /xstate-chatbot to be a whitelisted open endpoint in zuul
Add indexer-config to the egov-indexer to index all the telemetry messages
| Environment Variable | Description |
|---|---|
| WHATSAPP_PROVIDER | The provider through which WhatsApp messages are sent and received. An adapter for ValueFirst is written; for any new provider, a separate adapter will have to be implemented. A default console adapter is provided for developers to test the chatbot locally. |
| REPO_PROVIDER | The database used to store the chat state. Currently, an adapter for PostgreSQL is provided. An InMemory adapter is provided to test the chatbot locally. |
| SERVICE_PROVIDER | If its value is configured to be eGov, the chatbot calls the backend rainmaker services. If the value is configured as Dummy, dummy data is used rather than fetching data from APIs. The Dummy option is provided for initial dialog development and is only to be used locally. |
| SUPPORTED_LOCALES | A list of comma-separated locales supported by the chatbot. |
Other configuration details are mentioned as part of the XState-Chatbot Integration Document.
This chatbot solves the basic form filling aspect of a chat flow. By collecting the information from the user, an API call can be made to the rainmaker backend services to fulfill what the user wants to do. It uses the concept of StateCharts (similar to State Machines) to maintain the state of the user in a chat flow and store the information provided by the user. XState is a JavaScript implementation of StateCharts. All chat flows are coded inside the XState framework.
This chatbot does not have any Natural Language Processing component. In the future, we can extend the chatbot to add such features.
XState is a JavaScript implementation of StateCharts. There is detailed documentation available for studying XState. A few of the XState concepts used in the chatbot are listed below. Basic knowledge of these concepts is necessary; they can also be learned while going through the chat flow implementation of the pilot use cases of PGR and Bills.
Actions
onEntry
A few tips about using XState, which have been followed throughout the pilot chat flows:
If we want to move to any state which is not at the same hierarchical level, we should assign it a unique id value. If it has an id value, we can address it using the # qualifier in the target attribute.
As ids should be unique, please make sure there aren't multiple states with the same id value. If there is a duplicate, the machine won't function as expected.
Any actions (like onEntry) should be surrounded by assign.
This applies to almost all functions except the guard condition code snippets.
All the interactions with the user - sending a message to the user and processing an incoming message from the user - are coded as states in the state machine. A good way to start is to test any chat flow with the supplementary react-app provided for developers to execute the state machine locally. (Please follow the guidelines in the README of the react-app.)
We have followed a few standard patterns to code chat interactions. Please try to follow these patterns when coding any new chat flow. The patterns are explained below; you can also study them by browsing through the code of the pilot use cases of PGR and Bills.
The chat states should only include dialog-specific code. Any code related to a backend service should be written as part of a separate …-service.js file.
Any code that doesn't include an asynchronous API call can be written as part of the onEntry function or an action.
If the function needs to make an API call, it has to be written with the invoke-onDone pattern. The asynchronous function should be written as part of the service file. The consolidated data returned by it can be processed in the state of the dialog file.
Helper functions are written in the dialog.js file. It is advised to use those functions as much as possible rather than writing custom logic in dialog files. A minimal sketch of these patterns follows.
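The sketch below illustrates the patterns described above (assign-wrapped onEntry, unique state ids addressed with #, and the invoke-onDone pattern). The state names, messages and the fetchCities function are illustrative, not taken from the actual code base; in the real project the async call would live in a separate …-service.js file:

```javascript
const { Machine, assign } = require('xstate');

// In the real code base this async call would be exported from a ...-service.js file.
const fetchCities = async (userInput) => ['Amritsar', 'Jalandhar'];

const sketchFlow = Machine({
  id: 'sketchFlow',
  initial: 'askCity',
  context: { cities: [] },
  states: {
    askCity: {
      id: 'askCity',                          // unique id, addressable as '#askCity'
      onEntry: assign((context, event) => {   // dialog actions are surrounded by assign
        context.prompt = 'Please type your city name';
        return context;
      }),
      on: { USER_MESSAGE: 'fetchCityResults' }
    },
    fetchCityResults: {
      invoke: {                               // invoke-onDone pattern for async API calls
        src: (context, event) => fetchCities(event.message),
        onDone: {
          target: '#askCity',                 // '#' qualifier targets the state by id
          actions: assign((context, event) => {
            context.cities = event.data;      // consolidated data returned by the service
            return context;
          })
        }
      }
    }
  }
});

module.exports = sketchFlow;
```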
Apart from the chat flow and its backend service API calls, few other components are present in the project. These components do NOT need to be modified to code any new chat flow or changing an existing chat flow. These components with a short description for each are listed below:
Session Manager: It manages the sessions of all users on a server. It stores a user's state in a datastore, updates it, and reads it when any new message is received on the server. Based on the state of the user, it creates a state machine and sends the incoming message event to the state machine. It sanitises the state (any sensitive data like the name and mobile number of a user are removed) before storing it in the datastore.
Repository: It is the datastore where the states of the users get stored. To reduce dependency, an in-memory repository is also provided, which can be used by configuring an environment variable. So to run the chatbot service, PostgreSQL isn’t a hard dependency, but it is advisable to use the PostgreSQL repo provider.
Channel Provider: There can be many different WhatsApp providers. Any one of the providers will be configured to be used. A separate console WhatsApp provider is present for developers to test the chatbot server locally. A Postman collection to mimic receiving messages from a user is present in the project directory.
Localization: Every message to be sent to the user is stored within the chatbot. Localization service is not being used. These messages are present near the bottom of the dialog files. A separate localization-service.js is provided to get the messages for the localization codes for the messages that are not owned by the chatbot. For example, the PGR complaint types data is under the ownership of the PGR module, and the messages for such can be fetched from the egov-localization-service using the functions provided in the localization-service.js.
Service Provider: To ease initial dialog development, instead of coding API calls to the backend services, we can configure the chat flow to use a dummy service. This can be configured using an environment variable and by modifying the service-loader.js file.
Telemetry: The chatbot logs telemetry events to a Kafka topic. (Any sensitive data gets masked before the events are indexed onto Elasticsearch by egov-indexer.) The following events get logged:
Incoming message
Outgoing message
Transition of state
The indexer runs as a separate service designed to perform all the indexing tasks of the DIGIT platform. The service reads records posted on specific Kafka topics and picks the corresponding index configuration from the yaml file provided by the respective module. The objectives of the indexer service are listed below:
To provide a one-stop framework for indexing data to Elasticsearch.
To create provisions for indexing live data, reindexing from one index to another, and indexing legacy data from the datastore.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of Elasticsearch
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Performs three major tasks, namely: LiveIndex, Reindex and LegacyIndex.
LiveIndex: indexing the live transaction data on the platform. This keeps the ES data in sync with the DB.
Reindex: indexing data from one index to another. ES already provides this feature; the indexer does the same but with data transformation.
LegacyIndex: indexing legacy data from the DB tables to ES.
Provides flexibility to index the entire object, a part of the object, or an entirely different custom object, all using one input JSON from the modules.
Provides features for customising the index JSON through field mapping, field masking, data enrichment via external APIs, and data denormalisation using MDMS.
A one-stop shop for all ES indexing requirements, with easy-to-write and easy-to-maintain configuration files.
Designed as a consumer to save API overhead. The consumer configs are written from scratch to allow complete control over consumer behaviour.
Step 1: Write the configuration as per your requirement. The structure of the config file is explained later in this document.
Step 2: Check in the config file to a remote location, preferably GitHub. Currently the files are checked into the folder https://github.com/egovernments/configs/tree/DEV/egov-indexer for dev.
Step 3: Provide the absolute path of the checked-in file to DevOps to add it to the file-read path of egov-indexer. The file will be added to egov-indexer's environment manifest file so that it is read at application start-up.
Step 4: Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
For Indexer Configuration, please refer to the document in Reference Docs table given below.
a) POST /{key}/_index
Receives data and indexes it. There should be a mapping with the topic as {key} in the index config files.
b) POST /_reindex
This is used to migrate data from one index to another.
c) POST /_legacyindex
This runs a LegacyIndex job to index data from the DB. In the request body, the URL of the service which the indexer calls to pick up the data must be mentioned. An indicative request sketch follows.
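The sketch below shows the general shape of a _legacyindex request body as we understand the indexer contract; the field names, service URL and topic are indicative and should be verified against the indexer API documentation:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "apiDetails": {
    "uri": "http://property-services:8080/property/_plainsearch",
    "tenantIdForOpenSearch": "pb",
    "responseJsonPath": "$.Properties",
    "paginationDetails": {
      "offsetKey": "offset",
      "sizeKey": "limit",
      "maxPageSize": 100
    }
  },
  "legacyIndexTopic": "<topic-mapped-in-indexer-config>",
  "tenantId": "pb"
}
```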
In legacy indexing, and for the LiveIndex of collection-service records, Kafka Connect is used to do part of the work of pushing records to Elasticsearch. For more details, please refer to the document mentioned in the document list.
XState-Chatbot is a revamped version of the chatbot which provides functionality for users to access PGR module services - such as filing complaints, tracking complaints and receiving notifications - from WhatsApp. It also allows users to view receipts and pay bills for the Property, Trade Licence, FireNOC, Water and Sewerage, and BPA service modules.
File PGR complaint
Track PGR complaint
Support images when filing complaints
Notifications to citizens when an employee performs any action on the complaint
Allows users to search and pay bills of different modules.
Allows users to search and view receipts of different modules.
Allows users to change the language of their choice for a better experience.
Pushes user interactions to Elasticsearch for telemetry.
The XState chatbot can be integrated with any other module to improve the ease of searching and viewing bills/past payment receipts and to improve the speed and convenience of bill payment. It can be integrated with the PGR module for ease of creating and tracking complaints.
Increase in convenience and ease of making bill payments.
Increase in the number of users opting for online payment.
Improvement in demand collection efficiency.
Creation of an additional channel for payment.
Removes dependency on the mobile/web app or counter.
The WhatsApp provider is a third-party service that sits between a user's WhatsApp client and the XState-Chatbot server. All messages coming from or going to the user pass through the WhatsApp provider. The chatbot calls the WhatsApp provider to send messages to the user. When a user responds with any WhatsApp message, the WhatsApp provider calls the chatbot service's configured endpoint with the details, e.g. the message the user sent, the sender's number etc.
If any new WhatsApp provider is to be used with the chatbot, code must be written to convert the provider's incoming messages to the format that the chatbot understands, and the final output from the chatbot must be converted to the WhatsApp provider's API request format.
Currently, the XState-Chatbot service uses ValueFirst as the WhatsApp provider. This requires provider-specific environment variables to be configured. If the provider changes, all these environment variables will also change. A few of those environment variables are stored as secrets, so these values need to be configured in env-secrets.yaml.
As this is a revamped version of the chatbot service, all of the secrets should already be present. There is no need to create new secrets.
The integration of PGR with the chatbot can be enabled and disabled by making changes in this file. By exporting the respective PGR service file, the PGR service feature can be enabled, and vice versa.
Configuration of PGR version in chatbot
To configure which PGR module version the XState-chatbot uses, the variable values below need to change in the environment file as per the requirement:
pgrVersion
pgrUpdateTopic
To configure PGR v2 in the XState chatbot, pgrVersion should be 'v2' and pgrUpdateTopic should be 'update-pgr-request'.
Configuration of city and locality search with nlp-search engine
To enable fuzzy search for city and locality selection in the PGR complaint flow, the variable nlp-geoSearch has to be set to true in the environment file. To use the nlp-search engine with the XState chatbot, make sure that a stable build is deployed and all the MDMS data is present for that particular environment. To know more about the nlp-search engine service, please refer to the Reference Documents section.
Adding Information Image in PGR complaint creation and Open search information image
To configure the filestore id for an informational image, follow the steps mentioned below:
Download the images from the section Information Images for PGR and Open Search.
Upload the images to the filestore server. Use the upload file API from this Postman collection (https://www.getpostman.com/collections/bdb059c5af698f0d81d6).
For the PGR information image, mention the filestore id in the environment file.
For the open search information image, mention the filestore id in the environment file.
For example:
a) if supportedLocales: process.env.SUPPORTED_LOCALES || 'en_IN,hi_IN'
then valuefirst-notification-resolved-templateid: "12345,6789"
b) if supportedLocales: process.env.SUPPORTED_LOCALES || 'hi_IN,en_IN'
then valuefirst-notification-resolved-templateid: "6789,12345"
(Note: Neither list should be empty; each must contain at least one element.)
Template messages with buttons are maintained in the same way as described in the previous section (Configuration of push notification template messages).
There are two types of button messages:
Quick Reply
Call To Action
More details can be found in the ValueFirst documentation.
The integration of the bill payment and receipt search features with the chatbot can be enabled and disabled by making changes in this file. By exporting the respective bill service and receipt service files, the payment and receipt search features can be enabled, and vice versa.
Configuration of modules for bill payment and receipt search
To configure the list of modules to appear as options for payment and receipt search, add the module business service codes to the list present in the environment file.
For example:
If bill-supported-modules: "WS, PT, TL"
then the Water and Sewerage, Property, and Trade License modules would appear for bill payment and receipt search.
Also add the message bundle, validation and service code for the locality searcher in the egov-bill and egov-receipt files.
| Environment Variables | Description |
|---|---|
| WHATSAPP_BUSINESS_NUMBER | The mobile number to be used on the server |
| VALUEFIRST_USERNAME | Username for the configured number, for sending messages to users through WhatsApp provider API calls |
| VALUEFIRST_PASSWORD | Password for the configured number, for sending messages to users through WhatsApp provider API calls |
| GOOGLE_MAPS_API_KEY | Maps API key to access the geocoding feature |
| ROOT_TENANTID | Contains the state-level tenantid value |
| SUPPORTED_LOCALES | The list of languages supported by the chatbot. To add a new language to the chatbot, add its locale to this list. |
| PGR_VERSION | Contains the PGR version to use (i.e. v1 or v2) |
| PGR_UPDATE_TOPIC | The PGR update Kafka topic name corresponding to the PGR version. Example: if PGR_VERSION: 'v2', then PGR_UPDATE_TOPIC: 'update-pgr-request' |
| BILL_SEARCH_LIMIT | Maximum number of bills shown on search |
| RECEIPT_SEARCH_LIMIT | Maximum number of receipts shown on search |
| COMPLAINT_SEARCH_LIMIT | Maximum number of complaints shown on search |
| BILL_SUPPORTED_MODULES | The list of modules to be used for bill payment and receipt search |
| INFORMATION_IMAGE_FILESTORE_ID | The filestore id of the informational image, which shows how to share the user's current location |
| OPEN_SEARCH_IMAGE_FILESTORE_ID | The filestore id of the open search informational image, which shows how to use the open search & pay feature for bill payment |
| USER_SERVICE_HARDCODED_PASSWORD | The fixed value of the login password and OTP. This value has to be configured in env-secrets.yaml. |
| GEO_SEARCH | Boolean flag to enable and disable city/locality NLP search |
Configuration of Telemetry File
Add the telemetry file in the config repo and mention the filename in the respective environment yaml file.
Cron job mdms entry:
Information Images for PGR and Open Search
Reference Docs:
Chatbot Message Localisation
nlp-search engine

API List:
/xstate-chatbot/message
/xstate-chatbot/reminder
/xstate-chatbot/status
The collection service serves as a revenue collection platform for all the billing systems, supporting payment through cash, cheque, demand draft (DD) and swipe machine. It enables payment for all services provided by the eGov platform at a single point, for citizens and for counter collection in municipalities alike.
Prior Knowledge of Java/J2EE
Prior Knowledge of SpringBoot
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc
Prior Knowledge of Kafka and related concepts like Producer, Consumer, Topic, etc.
Following services should be up and running:
egov-localization
egov-mdms
egov-idgen
egov-url-shortening
billing-service
Allows citizens to create a payment.
Allows employees to create a payment on behalf of the citizen.
Provides facilities to capture partial and advance payments based on configs.
Allows payment cancellation to help with scenarios of bad cheques and other failed payments.
Integrates with billing-service for the demand back-update of payments.
Deploy the latest version of the collection-services docker build.
The MDMS data configuration uses the same data updated by billing-service.
Billing Service | Configuration Details: refer to the MDMS data config from there.
The following are the properties in application.properties:

| Property | Value | Remarks |
|---|---|---|
| collection.receipts.search.paginate | true/false | When set to true, the receipt search result is returned in a bucket (page) containing a certain number of records |
| is.payment.search.uri.modulename.mandatory | TRUE/FALSE | Makes the module name in the URI path mandatory |
| collection.receipts.search.default.size | a number (say 30) | Returns 30 records at a time; the next 30 results are on the next page |
| collection.is.user.create.enabled | true/false | When set to true, enables the creation of a user along with receipt creation |
| receiptnumber.idname | | Used for the creation of the receipt number via the ID-GEN service |
| receiptnumber.servicebased | true/false | If set to false, the default state-level format is used for the receipt number; if set to true, the format for the receipt number has to be mentioned in MDMS |
| receiptnumber.state.level.format | [cy:MM]/[fy:yyyy-yy]/[SEQ_COLL_RCPT_NUM] | Default state-level format for the receipt number |
| collection.payments.search.paginate | true/false | When set to true, the payment search result is returned in a bucket (page) containing a certain number of records |
| egov.collection.payment-create | | The Kafka topic to which the record is pushed when a payment is created |
| egov.collection.payment-cancel | | The Kafka topic to which the record is pushed when a payment is cancelled |
| egov.collection.payment-update | | The Kafka topic to which the record is pushed when a payment is updated |
The collection service can be integrated with any organisation or system that wants a payment system to keep track of its payments. Organisations can customise parts of the application or its functionality based on their requirements.
Easy payments and tracking of payments.
Configurable functionality according to client requirements.
Customers can create a payment using the /payments/_create endpoint.
Actors on the system can keep track of payments using the /payments/_search endpoint.
If a payment runs into a technical issue outside of the system after it is done, it can be cancelled with /payments/_workflow.
For employees to access the payments API, the respective module name should be appended to the payment API path, e.g. /payments/PT/_workflow, where PT refers to the property module. An indicative create request is sketched below.
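For illustration, a minimal /payments/_create request might look like the sketch below; the amounts and the bill id are placeholders, and the exact contract should be checked against the API list:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "Payment": {
    "tenantId": "pb.amritsar",
    "paymentMode": "CASH",
    "paidBy": "Citizen name",
    "mobileNumber": "9999999999",
    "totalDue": 1500,
    "totalAmountPaid": 1500,
    "paymentDetails": [
      {
        "businessService": "PT",
        "billId": "<bill-id-from-bill-fetch>",
        "totalDue": 1500,
        "totalAmountPaid": 1500
      }
    ]
  }
}
```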
Port-forward the collection-service to the environment where the IFSCCODE bank details data is to be migrated. A sample command: kubectl port-forward collection-services-76b775f976-xcbt2 8055:8080 -n egov
Import the Postman collection from the API list which refers to /preexistpayments/_update, and run it against the same localhost to which we port-forwarded using the above command.
Expected result: in the EGCL_PAYMET table, for the records where IFSCCODE data is present, EGCL_PAYMET.ADDITIONALDETAILS is updated with the bank details.
Ex: For IFSCCODE UCBA0003047, the response from the API https://ifsc.razorpay.com/UCBA0003047 is updated in EGCL_PAYMET.ADDITIONALDETAILS as {"bankDetails": {"UPI": true, "BANK": "UCO Bank", "CITY": "BHIKHI", "IFSC": "UCBA0003047", "IMPS": true, "MICR": "151028452", "NEFT": true, "RTGS": true, "STATE": "PUNJAB", "SWIFT": "", "BRANCH": "BHIKHI", "CENTRE": "MANSA", "ADDRESS": "ADJOINING HP PETROL PUMP MANSA ROADDISTRICT MANSA", "BANKCODE": "UCBA", "DISTRICT": "MANSA", "CONTACT": "+918288822548"}}
Billing-Collection Integration: refer to the integration document for details and explanations.
Doc Links:
Billing-service
Id-Gen service
url-shortening
MDMS

API List:
/payments/_create
/payments/_update
/payments/_workflow
/preexistpayments/_update
The consumer sometimes needs additional amounts (amendments) added to their bill due to reasons arising outside the system. The amounts are added with respect to the consumer code of the entity in the product (PT, WS, etc.); any unpaid demand in the system is a candidate for amendment.
Prior Knowledge of Billing-Service in Digit framework.
Amendment mainly works with two types of functionality as follows:
Amendment
Demand
Bill Amendment provides a separate flow to enable workflow and validation for the process of adding additional amounts to existing demands, which until this point could be done only through the respective modules. An amendment is allowed only when the reason for adding or reducing the amount on an existing bill of an entity arises from outside the system. The allowed reasons are listed below:
Court case settlement
One time waiver
Write-offs
DCB correction (Old demands in paid status)
Remission for Property Tax
Criteria:
There are certain prerequisites to create an amendment:
Presence of a demand in the billing system
One of the reasons listed above
Valid document proof for the reason
No other amendment already in the workflow
Procedure:
The process of adding an amendment is as follows.
There are two scenarios for how an amendment is completed, based on the paid status of the existing demands in the system.
1. When the demand is unpaid/partially paid
Create a demand (or use an existing demand) with demand detail → DD1.
Do not pay the bill, or make a partial payment.
Create an amendment for the same consumer code (with demand detail → DD2).
Approve the amendment; the response should return an amendment with status CONSUMED.
Search the demand or fetch the bill for the consumer code; the demand/bill should contain the demand details of the demand and the amendment together, DD1 and DD2, in the same demand/bill.
2. When the demand is completely paid
Create a demand and make the complete payment, or choose a consumer code which is fully paid.
Create an amendment (with demand detail → DD2).
Approve the amendment; the response should be APPROVED this time.
Create a new demand for the consumer code (with demand detail → DD3); the demand response should contain the two demand details DD2 and DD3 saved to the demand.
An amendment search will now return CONSUMED status after the demand is created.
IMPACT: Does not impact any functionality other than adding demand details to demands on APPROVAL.
IMPACTED BY: The existence of demands in the system.
WORKFLOW CONFIG:
Amendment integration helps the respective organisation add additional value to a demand without any change in the system.
Easy to create, with a simple process for updating demands.
Helps ease changes into the system which are not part of normal functionality - amendment of bills in case of legal requirements.
This is integrated into the billing system by default.
The amendment facility can be used in case of a legal requirement to add values to existing demands using /amendment/_create, and /amendment/_update can be used to cancel the created ones or update the workflow if configured. An indicative create request is sketched below.
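For illustration only, an /amendment/_create request might look broadly like the sketch below; the field names here are indicative rather than the authoritative contract, and should be verified against the API definition linked below:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "Amendment": {
    "tenantId": "pb.amritsar",
    "businessService": "WS",
    "consumerCode": "WS/107/2021/004",
    "amendmentReason": "COURTCASESETTLEMENT",
    "reasonDocumentNumber": "ORDER-1234",
    "demandDetails": [
      { "taxHeadMasterCode": "WS_CHARGE", "taxAmount": 200 }
    ],
    "documents": [
      { "documentType": "COURT_ORDER", "fileStoreId": "<filestore-id>" }
    ]
  }
}
```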
{yet to be added}
API Definition
API LIST
v2 configuration details
The main objective of the billing module is to serve bills for all revenue business services. To serve a bill, the billing service requires a demand. Demands are prepared by the revenue modules and stored by the billing service, based on which it generates the bill.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of KAFKA
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of the demand-based systems.
Following services should be up and running:
user
MDMS
Id-Gen
URL-Shortening
notification-sms
eGov billing service creates and maintains demands.
Generates bills based on demands.
Updates the demands from payment when the collection service takes a payment.
Deploy the latest image of the billing service available.
In the MDMS data configuration, the following master data is needed for the functionality of the billing service:
MDMS
Business Service JSON
TAX-Head JSON
Tax-Period JSON
The billing service can be integrated with any organisation or system that wants a demand-based payment system.
Easy to create, with a simple process for generating bills from demands.
The amalgamation of bills period-wise for a single entity like PT or a water connection.
Amendment of bills in case of legal requirements.
Customers can create a demand using the /demand/_create endpoint.
An organisation or system can search demands using the /demand/_search endpoint.
Once the demand is raised, the system can call the /demand/_update endpoint to update the demand as needed.
Bills can be generated using /bill/_fetchbill, a self-managing API that generates a new bill only when the old one expires.
Bills can be searched using /bill/_search.
The amendment facility can be used in case of a legal requirement to add values to existing demands using /amendment/_create, and /amendment/_update can be used to cancel the created ones or update the workflow if configured. An indicative demand create request is sketched below.
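For illustration, a minimal /demand/_create request might look like the sketch below; the consumer code, tax period and tax head are placeholders and should be checked against the API list:

```json
{
  "RequestInfo": { "authToken": "<auth-token>" },
  "Demands": [
    {
      "tenantId": "pb.amritsar",
      "consumerCode": "PT-107-001",
      "consumerType": "PT",
      "businessService": "PT",
      "payer": { "uuid": "<citizen-uuid>" },
      "taxPeriodFrom": 1554057000000,
      "taxPeriodTo": 1585679399000,
      "demandDetails": [
        { "taxHeadMasterCode": "PT_TAX", "taxAmount": 1200, "collectionAmount": 0 }
      ]
    }
  ]
}
```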
Interaction Diagram V1.1:
Doc Links
API List
What is apportioning?
Apportioning is adjusting the receivable amount against the individual tax heads.
Types of apportioning (V1.1):
Default order-based apportioning (based on the apportioning order, the received amount is adjusted against each tax head). V1.1
Types of apportioning (V1.2, TBD):
Proportionate-based apportioning (the total receivable is adjusted equally across all tax heads).
Order- and percentage-based apportioning (the total receivable is adjusted based on the order and the percentage defined for each tax head).
Principle of apportioning
The basic principle of apportioning is that if the full amount is paid for any bill, then each individual tax head should be nullified by its corresponding adjusted amount.
Example: Case 1: When there are no arrears, all tax heads belong to the current purpose.
Case 2: Apportioning with two years of arrears: if the current financial year is 2014-15 and the demands are as below, and no payment has been made while we are generating the demand in 2015-16, then the demand structure will be as follows. An illustrative worked example follows.
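To make the order-based principle concrete, here is a small worked illustration; the tax heads, order and amounts are invented for the example and are not taken from the original tables:

| Tax Head | Apportioning Order | Demand Amount | Apportioned from a payment of 100 | Balance |
|---|---|---|---|---|
| Arrear interest | 1 | 30 | 30 | 0 |
| Arrear tax | 2 | 50 | 50 | 0 |
| Current tax | 3 | 70 | 20 | 50 |

Of the total demand of 150, the payment of 100 is applied in apportioning order: the first two heads are fully nullified and the remainder is adjusted against the third. A full payment of 150 would nullify every tax head, which is exactly the principle stated above.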
DSS has two sides to it. One is the process through which data is pooled into Elasticsearch; the other is the way the data is fetched, aggregated, computed, transformed and sent across. As this revolves around a variety of data sets, it needs to be configurable, so that when a new scenario is introduced tomorrow, it is just a configuration away from being included in this process flow.
This document explains the steps for defining the configurations for both sides of DSS: the analytics and ingest pipeline services.
Ingest: a microservice which runs as a pipeline and validates, transforms and enriches the incoming data and pushes it to an Elasticsearch index.
Analytics: a microservice responsible for building, fetching, aggregating and computing the data on Elasticsearch into a consumable data response, which is later used for visualisations and graphical representations.
JOLT: a JSON-to-JSON transformation library written in Java, where the "specification" for the transform is itself a JSON document.
Modules / Domain Level: these are the services in this context. Each of the services, such as Property Tax, Trade License, and Water and Sewerage, is considered a module/domain.
Chart: each individual graphical representation is considered a chart. For example, a metric of Total Collection is considered a chart.
Visualisation: a group of different charts is considered a visualisation. For example, the group of Total Collection, Target Collection and Target Achieved is considered a metric collection of charts and thus becomes a visualisation.
Below is the list of configurations -
Topic Context Configurations
Validator Schema
JOLT Transformation Schema
Enrichment Domain Configuration
JOLT Domain Transformation Schema
Descriptions
Topic Context Configurations
Topic Context Configuration is an outline defining which data is received on which Kafka topic.
The indexer service and many other services send out data on different Kafka topics. If the ingest service is to receive that data and pass it through the pipeline, the context and version of the data being received have to be set. This configuration identifies the Kafka topic from which the data was consumed and the mapping for it.
Validator Schema
Validator Schema is a configuration based on the JSON Schema library from Everit. By validating the data against this schema, it ensures that the data abides by the rules and requirements defined in the schema.
JOLT Transformation Schema
JOLT is a JSON to JSON Transformation Library. In order to change the structure of the data and transform it in a generic way, JOLT has been used.
Transformation schemas are written for each data context, and the data is transformed against the schema to obtain the transformed data.
Enrichment Domain Configuration
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the Collection module, the Collection domain config is picked, and based on the business type specified in the data, the right config is chosen.
To enrich the Collection data, the domain index specified in the configuration is queried with the right arguments, and the response data is obtained, transformed and set.
JOLT Domain Transformation Schema
As part of the enrichment, once the domain-level object is obtained, we might not need the complete document in the end data product.
Only those parameters which should be or can be used for aggregation and representation are retained; the others are discarded.
To do that, we use JOLT again and write schemas to keep the required fields and discard the unwanted ones.
The above configuration is used to transform the data response in the enrichment layer.
Use case:- JOLT Transformation Schema for collection V2
JOLT transformation schema for payment-v1 has taken as a use case to explain the context collection and context version v2. The payment records are processed/transformed with the schema. The schema supports splitting the billing records into an independent new record. So if there are 2 bill items in the collection/payment incoming data then this results in 2 collection records in turn.
Here, $i is the variable that gets incremented for each paymentDetails record, and $j is the variable that gets incremented for each billDetails record.
Note: For Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties (or via the environment configuration):

```
es.push.direct=false
```
Below is the list of configurations
Chart API Configuration
Master Dashboard Configuration
Role Dashboard Mappings Configuration
Description
Chart API Configuration
Each visualization has its own properties, and each comes from a different data source (sometimes a combination of different data sources).
To configure each visualization and its properties, we have the Chart API configuration document.
Here the visualization code serves as the key, and its properties are configured under that key, making them easy to change.
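A hedged sketch of one entry in chartApiConf.json is given below, assembled from the chart parameters documented in the reference table later in this section (chartName, queries, chartType, valueType, drillChart, aggregationPaths, _comment); the visualization code, index name and query bodies are illustrative placeholders.

```json
{
  "totalApplication": {
    "chartName": "DSS_TOTAL_APPLICATION",
    "chartType": "metric",
    "valueType": "number",
    "drillChart": "none",
    "aggregationPaths": ["TOTAL_APPLICATION"],
    "queries": [
      {
        "module": "PT",
        "indexName": "property-services",
        "aggrQuery": "{\"aggs\":{\"TOTAL_APPLICATION\":{\"value_count\":{\"field\":\"Data.propertyId.keyword\"}}}}",
        "requestQueryMap": "{\"tenantId\":\"Data.tenantId\"}",
        "dateRefField": "Data.auditDetails.createdTime"
      }
    ],
    "_comment": "Total number of applications received"
  }
}
```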
Master Dashboard Configuration
Master Dashboard Configuration is the main configuration that defines the dashboards to be painted on the screen.
It includes all the visualizations, their groups, the charts within them, and even their dimensions (height and width).
Role Dashboard Mappings Configuration
The Master Dashboard Configuration explained earlier holds the list of dashboards that are available. In instances where Role Action Mapping is not maintained in the application service, this configuration acts as the role-dashboard mapping configuration.
In it, each role is mapped against the dashboards which that role is authorized to see.
This was used earlier when the Role Action Mapping of eGov was not integrated. Later, when Role Action Mapping started controlling the dashboards to be seen on the client side, this configuration was kept only to enable the dashboards for viewing.
Adding Roles and Dashboards:
To add a new role, the RoleDashboardMappingsConf.json configuration file (roles node) has to be modified as below.
Note: Any number of roles and dashboards can be added.
Figure 9 shows a sample of adding a new role object and a new dashboard object; a sample structure is also sketched below.
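Since the figure is not reproduced here, the following is a sketch of a role object using the fields documented in the Role Dashboard Mappings reference table later in this section; the ids and names are placeholders.

```json
{
  "roles": [
    {
      "roleId": 10,
      "roleName": "Commissioner",
      "isSuper": false,
      "orgId": "ULB",
      "_comment": "ULB commissioner - enable the overview dashboard",
      "dashboards": [
        { "name": "DSS_OVERVIEW_DASHBOARD", "id": "overview" }
      ]
    }
  ]
}
```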
To add a new dashboard, MasterDashboardConfig.json (dashboards node) has to be modified as shown in Figure 10.
Note: Add a new dashboard to the dashboards array as given below.
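A sketch of a new dashboard object, assuming the fields from the Master Dashboard reference table later in this section (name, id, isActive, style, visualizations); the values are placeholders.

```json
{
  "name": "DSS_PROPERTY_TAX_DASHBOARD",
  "id": "propertytax",
  "isActive": true,
  "style": "linear",
  "visualizations": []
}
```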
To add new visualizations, MasterDashboardConfig.json has to be modified again (vizArray node), as shown in Figure 11.
Note: vizArray holds multiple visualizations, as in the sketch below.
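A sketch of a vizArray entry with one chart inside it, again assuming the structure from the Master Dashboard reference table; ids, names and dimensions are placeholders.

```json
{
  "row": 1,
  "name": "DSS_OVERVIEW",
  "vizArray": [
    {
      "id": 1,
      "name": "DSS_TOTAL_COLLECTION",
      "dimensions": { "height": 250, "width": 3 },
      "vizType": "metric-collection",
      "noUnit": false,
      "isCollapsible": false,
      "charts": [
        {
          "id": 1,
          "name": "DSS_TOTAL_COLLECTION",
          "code": "totalCollection",
          "chartType": "metric",
          "filters": [],
          "headers": []
        }
      ]
    }
  ]
}
```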
To add a new chart, chartApiConf.json has to be modified as shown below. A new chart id has to be added with the chart node object.
Metric chart Sample as shown in Figure 12.
Pie chart Sample as shown in Figure 13.
Line chart Sample as shown in Figure 14.
Table chart sample: This chart comes in two kinds - table and xtable.
The table type (as shown in Figure 15) allows adding aggregated fields as available in the query keys; to extract the values based on a key, aggregationPaths need to be added along with their data types in pathDataTypeMapping.
The xtable type (as shown in Figure 16) allows adding multiple computed fields, with the aggregated fields added dynamically.
To add multiple computed columns, define computedFields[], where actionName (an implementation of the IComputedField<T> interface), fields[] (names as existing in the query keys) and newField (the name to appear for the computed value) must be specified, as sketched below.
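A sketch of the xtable-specific pieces of a chart node follows; the action name, field names and the exact shape of pathDataTypeMapping are assumptions meant only to show where computedFields sits.

```json
{
  "computedFields": [
    {
      "actionName": "PercentageComputedField",
      "fields": ["TOTAL_COLLECTION", "TARGET_COLLECTION"],
      "newField": "TARGET_ACHIEVED"
    }
  ],
  "pathDataTypeMapping": [
    { "TOTAL_COLLECTION": "amount" },
    { "TARGET_ACHIEVED": "percentage" }
  ]
}
```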
Steps to create charts and visualise are:
Create/Add a chart in chartApiConf.json
Add a visualization for the existing dashboard in MasterDashboardConfig.json as defined above.
Or, to create/add a new dashboard, create the dashboard in MasterDashboardConfig.json and create a role in RoleDashboardConfig.json.
Configuration changes for drill through:
Example: drill through in the Ward table in the Property Tax dashboard.
wardDrillDown is the visualization code for the PT drill down, and kind is the attribute that shows the type of the visualization code. Apart from these two attributes, all the attributes are common.
Example: drill through in the ComplaintList table in the PGR dashboard.
complaintDrillDown is the visualization code for the PGR drill down.
The complaintDrillDown visualization code above is referenced in the drillChart parameter.
This document aims to facilitate communication between the software developers and whoever is localising the chatbot messages. The goal is to make it clear and as unambiguous as possible.
The Google Sheet containing all the messages with the codes is:
The project is organised such that all the messages are contained within the files present inside the /machine directory. The /service directory inside it also includes files that may contain localization messages.
Guidelines to be followed by developers
(According to the standard pattern followed in the project, all the localization messages will be present near the end of the file in a JavaScript object named “messages”.)
Developers will be the ones first filling up the sheet with codes (and the English version of the messages). Below are the guidelines to be followed when writing the codes in the sheet:
The standard separator to be used is .(dot)
The first part is the filename, e.g. “pgr.” when the filename is pgr.js.
Use “service.” as a prefix when the file is present inside the /service directory.
In the /service directory, filenames are like egov-pgr.js
For localization messages contained in those files, instead of writing “egov-pgr” just write “pgr”
So the prefix for such files would be “service.pgr.”
All the message bundles would be present in the “messages” object near the end of the file. They have been organized in a pattern in the JS object like fileComplaint.complaintType2Step.category.question
The corresponding localization code for such a message bundle in the sheet would be “pgr.fileComplaint.complaintType2Step.category.question”, where the first “pgr.” is added as the prefix for the file name.
Once the localization codes have been written correctly (and the English version of the messages) in the sheet, it should be easy to add the new message in the corresponding new column. Some guidelines to follow when adding new messages:
The parameter names are written within {{}} (double curly brackets)
The content inside these curly brackets should be written in English even when writing messages for any new language
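Putting the two guidelines together, a single entry for a message from the /service directory file egov-pgr.js could look like the following sketch; the code and message text are illustrative, and the {{complaintType}} parameter stays in English in every language column.

```json
{
  "code": "service.pgr.fileComplaint.complaintType2Step.category.question",
  "message": "Which category does {{complaintType}} belong to?"
}
```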
Whenever a user logs in, an authorization token and a refresh token are generated for them. Using the auth token, the client can make REST API calls to the server to fetch data. The auth token has an expiry period; once it expires, it cannot be used to make API calls, and the client has to generate a new authorization token. This is done by authenticating the refresh token with the server, which then generates and sends a new authorization token to the client. The refresh token thus avoids the need for the client to log in again whenever the auth token expires.
The refresh token also has an expiry period; once it expires, it cannot be used to generate a new authorization token, and the user has to log in again to get a new pair of authorization and refresh tokens. Generally, the validity period of the refresh token is much longer than that of the auth token. If the user logs out of the account, both the auth token and the refresh token become invalid.
| Param | Description |
| --- | --- |
| access.token.validity.in.minutes | Duration in minutes for which the authorization token is valid |
| refresh.token.validity.in.minutes | Duration in minutes for which the refresh token is valid |
| API | Description |
| --- | --- |
| /user/oauth/token | Used to start the session by generating an auth token and a refresh token from the username and password, using grant_type as password. The same API can be used to generate a new auth token from the refresh token, by using grant_type as refresh_token and sending the refresh token with the key refresh_token |
| /user/_logout | Used to end the session. The access token and refresh token become invalid once this API is called. The auth token is sent as a parameter in the API call |
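For illustration, the two parameter sets sent to /user/oauth/token are sketched below as JSON; in practice the endpoint expects them form-encoded, and every value here is a placeholder.

```json
{
  "passwordGrant": {
    "grant_type": "password",
    "username": "EMP-101",
    "password": "********",
    "tenantId": "pb"
  },
  "refreshGrant": {
    "grant_type": "refresh_token",
    "refresh_token": "c4d5e8a9-0000-0000-0000-1f2e3d4c5b6a"
  }
}
```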
eGov Payment Gateway acts as a liaison between eGov apps and external payment gateways, facilitating payments, reconciliation of payments, and lookup of transaction statuses.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has pg service persister config path added in it
PSQL server is running and the database is created to store transaction data.
Create or initiate a transaction, to make a payment against a bill.
Make payment for multiple bill details [multi module] for a single consumer code at once.
A transaction is initiated with a call to the transaction/_create API; various validations are carried out to ensure the sanctity of the request.
The response includes a generated transaction id and a redirect URL to the payment gateway itself.
Various validations are carried out to verify the authenticity of the request and the status is updated accordingly. If the transaction is successful, a receipt is generated for the same.
Reconciliation is carried out by two jobs scheduled via a Quartz clustered scheduler.
The early reconciliation job is set to run every 15 minutes [configurable via app properties] and is aimed at reconciling transactions which were created 15-30 minutes ago and are in PENDING state.
The daily reconciliation job is set to run once per day and is aimed at reconciling all transactions in PENDING state, except those created within the last 30 minutes.
Axis, Phonepe and Paytm payment gateways are implemented.
Additional gateways can be added by implementing the Gateway interface. No changes required to the core packages.
The following properties in the application.properties file of egov-pg-service have to be added and set to default values after integrating with the new payment gateway. The table below shows the properties for the AXIS payment gateway; the same relevant properties need to be added for other payment gateways.
| Property | Description |
| --- | --- |
| axis.active | Boolean flag to set the payment gateway active/inactive |
| axis.currency | Currency representation for the merchant, default (INR) |
| axis.merchant.id | Payment merchant id |
| axis.merchant.secret.key | Secret key for the payment merchant |
| axis.merchant.user | Username to access the payment merchant for transactions |
| axis.merchant.pwd | Password of the user to access the payment merchant |
| axis.merchant.access.code | Access code |
| axis.merchant.vpc.command.pay | Pay command |
| axis.merchant.vpc.command.status | Status command |
| axis.url.debit | URL for making the payment |
| axis.url.status | URL to get the status of the transaction |
Deploy the latest version of egov-pg-service
Add pg service persister yaml path in persister configuration
The egov-pg-service acts as communication/contact between eGov apps and external payment gateways.
Record of every transaction against a bill.
Record of payment for multiple bill details for a single consumer code at once.
To integrate, the host of egov-pg-service should be overwritten in the helm chart.
/pg-service/transaction/v1/_create should be added in the module to initiate a new payment transaction on successful validation.
/pg-service/transaction/v1/_update should be added as the update endpoint to update an existing payment transaction. This endpoint is invoked only by payment gateways to update the status of payments; it verifies the authenticity of the request with the payment gateway and forwards all query params received from the payment gateway.
/pg-service/transaction/v1/_search should be added as the search endpoint for retrieving the current status of a payment in the system.
| Title | Link |
| --- | --- |
| Swagger API Contract | |

| Title | Link |
| --- | --- |
| /pg-service/transaction/v1/_create | |
| /pg-service/transaction/v1/_update | |
| /pg-service/transaction/v1/_search | |
| /pg-service/gateway/v1/_search | |

(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
Apportion service is used to apportion the amount paid against a bill among the different tax heads, based on the implemented algorithm. The default algorithm uses the order of the tax heads: the tax head with the lowest order is apportioned first, while the tax head with the highest order is apportioned last.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has apportion persister config path added in it
PSQL server is running and database is created to store apportion audit data
Apportion payment in tax heads of bill
Apportion advance amount in tax heads of demand during demand creation
Deploy the latest version of egov-apportion-service service
Add apportion persister yaml path in persister configuration
There is no separate configuration required; only the TaxHead master configured in the billing service is used.
Any payment service which wants to divide the paid amount into different tax head buckets can integrate with apportion service.
Apportions amount in tax heads
To integrate, the host of egov-apportion-service should be overwritten in helm chart
/apportion-service/v2/bill/_apportion should be called to apportion the bill
/apportion-service/v2/demand/_apportion should be called to apportion advance amount in demands
(Note: All the APIs are in the same Postman collection; therefore, the same link is added in each row.)
DIGIT is India's largest open-source platform for Urban Governance. It provides API based access to government functions enabling the government to provide facilities via integration with relevant service players. This document is aimed at System Integrators looking to provide bill collection facilities to their government customers using DIGIT as their governance platform. It outlines the integration approach with Billing and Collections services to enable fetching bill dues to Citizens and recording their payments into the system.
DIGIT is completely API driven and allows for data exchange with disparate systems using REST API calls. Most functional API are protected resources that can be accessed after proper authentication with the platform. The platform also checks for the right level of access for the given credentials. A bill collection flow consists of the following steps -
Authentication with DIGIT
Get Bill for the citizen using a service-specific query
Record the payment details against the bill
Optional - Get Payment API to fetch the details of the receipt
As the in-field team of the system integrator would already be making these calls to the integrator's own system (or a standard system like BBPS), integration with DIGIT is a server to server integration where the backend system of the integrator will make these calls to the DIGIT platform as per the need. The following diagram depicts the high-level flow of calls between On Field devices like PoS (On Field Device) to Backend of the Integrator (Integrator System) and from Backend of the Integrator to DIGIT (DIGIT Platform).
Note: The process of calling payment API results in a receipt creation.
DIGIT uses Swagger 2.0 as its API standard and all its APIs are documented in Swagger. Wherever needed this document will provide a link to our API documentation online. An example of typical request/response snippets necessary for integration is provided below in the respective sections. Please note that DIGIT being a multi-tenanted system, all APIs in DIGIT expect tenantid passed either in the query param or RequestBody (Please refer to detailed API documentation as indicated in sections below). The tenantid represents the modular operating unit for the operation of an API, e.g. in a municipal governance use case, a tenantid will represent one ULB. Your platform contact will help you access the configured list for your use case. Authentication API also expects tenantid (Your platform contact will help you with which one to use), however, based on the role as an integrator the OAUTH token in response can be used for unit/ULB level tenants in subsequent API calls (meaning you may not need one authentication per unit/ULB level tenant).
Authentication
To ensure data privacy and security, transactional APIs in DIGIT are protected by authentication. System integrators are requested to contact the respective state authority to get the necessary OAUTH tokens to access these APIs. Kindly note that apart from userid/password, the system may enforce IP-based access control, in which case the integrator may be required to share the IP or range of IPs from which requests will originate. To generate the access token based on the credentials provided, please use the following API. Given below is an example of the request and response; the OAuth token to be used from the response is highlighted in bold.
Request Snippet
Response Snippet
2. Fetching Bill
DIGIT allows integrators to fetch bills for citizens using the consumer number of the respective service (e.g. Water Charges, Property Tax, Trade License). Please note that different services may have different notions of consumer number, e.g. for Water Charges the consumer number signifies the "Connection number", while for Property it is the "Property Id". For some services, DIGIT also provides the facility to fetch bills by mobile number; note that a bill search by mobile number may return multiple bills across services and may not return bills from services that do not support mobile-number based search. To support the partial payment use case, each bill in the response of the fetch bill API indicates whether it is allowed to be partially paid and, if partial payment is allowed, any minimum amount. To fetch a bill from DIGIT, please ensure that the OAuth token is generated as per the Authentication section above. Post that, you can use the following API to fetch the bill -
Choose Billing Service from the dropdown
Go to the Bill section of BillingService
Go to the Bill tab
3. Make Payment
Once the bill is fetched from the DIGIT system, the system integrator is expected to relay it back to the field device. The integrator is expected to initiate and collect the payment based on the government preference indicated in the bill (whether it can be partially paid and, if so, the minimum amount etc.) and the citizen's preference of payment instrument. Once the payment is successfully done in the integrator's system, the integrator is expected to register the payment in DIGIT using the Payment Create API. Please note that a bill is considered unpaid/partially paid by DIGIT till appropriate receipts are created using this API - which means that a subsequent fetch of the bill, till this API is called, will return the original bill. DIGIT expects a receipt (the result of calling the payment API) to be created against the bill number returned in the fetch bill API; a receipt needs to be created for each bill. Therefore, if a total payment represents multiple bills, one receipt creation per bill is expected (DIGIT supports multiple receipt creation in a single call). To create a receipt in DIGIT, please ensure that the OAuth token is generated as per the Authentication section above. Post that, you can use the following API to create the receipt -
Choose Collection Service from the dropdown
Go to payment
Go to the make payment section
Migration details from v1 to v2
According to the new collection service, which follows the payment structure for storing the information about payments and payment details, it is necessary to migrate the old collection structure into the payment structure.
In the old collection service, the receipt number is generated at the bill detail level for every transaction; as a bill contains multiple bill details, each transaction is mapped to multiple receipt numbers. So after payment of a single bill, multiple receipt numbers are generated for it. This mapping of transactions to receipt numbers is changed in the new collection service.
In the new collection service, the receipt number is generated at the bill level, so for every transaction on a bill, one receipt number is generated. Each bill for a consumer code and business service thus has one receipt number.
The records from tables egcl_receiptheader, egcl_receiptdetails, egcl_instrument, egcl_instrumentheader need to be transferred into tables egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial, egcl_billaccountdetail.
For a smooth migration of data, the records from the old receipts have been mapped to the payment structure, so that the new payment response can be formed from receipt data.
The table below provides the mapping between receipt and payment structure with some remarks.
After the payment response is created from the receipt data, it is pushed to the Kafka topic “egov.collection.migration-batch”, and with the persister, the payment data is inserted into the tables egcl_payment, egcl_paymentdetail, egcl_bill, egcl_billdetial and egcl_billaccountdetail.
Indexer config for the legacy data index and new payments.
persister config -
Please get these promoted before initiating the migration process. Migration happens through an API call, add role-actions based on your requirement. Otherwise, port-forwarding should work.
Find the API details below:
```
Endpoint: /collection-services/payments/_migrate?batchSize=100&offset=

Body: {
  "RequestInfo": {
    "apiId": "Rainmaker",
    "action": "",
    "did": 1,
    "key": "",
    "msgId": "20170310130900|en_IN",
    "ts": 0,
    "ver": ".01",
    "authToken": "a6ad2a1b-821c-4688-a70e-4322f6c34e54"
  }
}
```
While restarting migration due to any failure, take the value of offset and tenantId printed in the logs and resume the migration process from where it ended.
/collection-services/payments/_migrate?batchSize=100&offset=200&tenantId='pb.tenantId'
Collection service build: collection-services-db:9-COLLECTION_MIGRATION-e9701c4
The following are the migration steps specific to the payment index.
Add index name dss-payment_v2 as below:
In Kibana dev tools, apply the below command.
Note: This name should be the same as the value present in the ingest es.index.name mapping.
The ingest pipeline application properties contain es.direct.push, which is supposed to be set to true for testing.
Note: After migration, ensure the dss-payment_v2 data has been populated and is available.
In Kibana dev tools, verify using the below command.
A decision support system (DSS) is a composite tool that collects, organizes and analyzes business data to facilitate quality decision-making for management, operations and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, personal knowledge from employees, management, executives and business models. DSS analysis helps organizations to identify and solve problems, and make decisions.
The Tech documentation is below
The swagger API for the backend is below
Swagger API for ingest
Target Upload File Template is below
V2 Technical Document for UI
This release for DSS focuses on improving user experience and ability given to the user to get deeper insights using drill through and comparison indicators in tables.
The release includes the following features:
Breadcrumbs for better navigation
Drill through options in tables and charts
Comparison indicators in Table
In addition to the left navigation panel, breadcrumbs are useful to provide a better sense of the current page. They are also very helpful for mobile navigation. The user can navigate using the breadcrumbs by clicking on the required parent menu.
Technical Implementation Details
It works based on the current route URL and the previous route URL.
File Details
DSS provides the ability to configure drill through for the required options in tables as well as charts. Drill through options are useful in configuring the required hierarchy of a data set, helping users go up to 'N' levels to get deeper insights.
Technical Implementation Details:
Drill down/drill through in tables is based on drillDownChartId and filter.
Here the chart id is used for the subsequent call to fetch the next table, along with the applied/selected filters; a sample configuration sketch is given below.
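For example, a table chart that drills down to a ward-level table could be configured as sketched below, reusing the drillChart parameter from the chart configuration reference; the codes and the empty query list are placeholders.

```json
{
  "collectionByUlb": {
    "chartName": "DSS_COLLECTION_BY_ULB",
    "chartType": "table",
    "valueType": "amount",
    "drillChart": "wardDrillDown",
    "aggregationPaths": ["ULBS"],
    "queries": []
  }
}
```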
File Details
Drill through in pie charts:
Similar to drill down in tables, drill through in pie charts is based on the drillDownChartId field in the parent pie chart.
File Details,
To provide better insights into the metric performance of different dimensions, a comparison indicator is required inside data tables, usually comparing with a different time range (last year/last month) and showing the percentage change over time.
Technical Implementation Details:
For comparing with the previous year's data in every table, the same request object is used with the time range changed to the previous year/month/week.
File Details
The following method, along with its parameters, is used to fetch the previous year's data.
After receiving last year's data, it is compared with the current year's data and the insight is shown; the comparison logic is present in uiTable.js. A sketch of such a request is given below.
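A sketch of the idea: the same chart request is replayed with the requestDate window shifted back one year, and the two responses are compared in uiTable.js. The request shape below is an assumption and the epoch millisecond values are placeholders.

```json
{
  "aggregationRequestDto": {
    "visualizationType": "TABLE",
    "visualizationCode": "collectionByUlb",
    "filters": {},
    "requestDate": {
      "startDate": 1554057000000,
      "endDate": 1585679399000,
      "interval": "month",
      "title": "previous-year"
    }
  },
  "headers": { "tenantId": "pb" }
}
```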
TimeFilter
The current time component is not very intuitive or user friendly, so the react-date-range library was used to enhance the time filter.
File Details
Event Duration Graphs
Ability to generate graphs showcasing time spent between multiple events like average turnaround time, complaint assigning time, etc.
A DSS_EVENT_DURATION_GRAPH was added in the PGR config
Code Git Repos:
State-Level Admin
Commissioner
Domain-Level Employee
There are three types of dashboards -
Home page (refer figure 1).
Overview page (refer figure 2).
Module level dashboard (refer figure 3).
The home page contains multiple cards, and each card is clickable.
There are two types of cards, i.e. the overview card and module-level cards.
Overview and module-level cards are differentiated by vizType:
Overview card: on click, it navigates to the overview page. The vizType for overview is collection.
Module-level card: on click, it navigates to the module-level dashboard. The vizType is module (i.e. Property Tax, Trade License etc.).
Request Payload for dashboardConfig:
auth-token: used to authenticate the request; it is fetched from the localStorage key “Employee.token”.
DashboardConfig API Response
roleName: the type of user.
visualizations: this key contains all the configuration for displaying the visualizations, like rows with charts etc.; please refer to figure 1.3.
In figure 1.3, the vizType key defines the module UI.
Collection chart and module chart: refer to figure 1.
Visualizations List
In the dashboardConfig response, the visualizations key contains all row and chart details (refer figure 1.3).
1. Each row contains the visual details like name, vizType, noUnit, isCollapsible, charts etc. (refer figure 1.3).
name - name of the visualization.
vizType - type of visualization, like COLLECTION, MODULE, METRIC-COLLECTION, PERFORMING-METRIC, CHART.
COLLECTION - In home page, contains the collection data (refer figure 1).
MODULE - In home page, contains the module level data (refer figure 1).
METRIC-COLLECTION - In Overview/Module Level Page, contains the collection data (refer figure 2.1).
PERFORMING-METRIC -In Overview/Module Level Page, contains the top/bottom performing data (refer figure 2.2).
CHART - In Overview/Module Level Page, contains the below visualizations (refer figure 2.3 to figure 2.7).
PIE CHART (refer figure 2.3)
LINE CHART (refer figure 2.4)
BAR CHART (refer figure 2.5)
HORIZONTAL BAR CHART (refer figure 2.6)
TABLE CHART (refer figure 2.7)
Figure : 2.1 - Metric-collection.
Figure : 2.2 - PERFORMING-METRIC
Figure : 2.3 - CHART - PIE
Figure : 2.4 - Chart - LINE
Figure : 2.5 - Chart - BAR
Figure: 2.6 - Chart - HORIZONTAL BAR
Figure: 2.7 - Chart - TABLE
Figure: 2.8 - GLOBAL FILTERS
Figure: 2.9 - DOWNLOAD & SHARE BUTTON
ULB Dashboard
The ULB dashboard has different filters, i.e. ULBs and Wards/Blocks. The data for these filters is loaded from the below MDMS API -
Each ULB dashboard, overview dashboard and module-level page contains different filters, identified by roleName in the configs API.
The Wards/Blocks filter is a dependent filter, which gets loaded on ULB selection.
In the ULB dashboard, the on-page ULB filter is applied across all the charts; for the performance chart, the default ULB filter is not applied.
The overview page and all module-level pages have a ULB dashboard.
GLOBAL Filters (refer to figure 2.8)
Filters will be loaded from MDMS API.
Filters are loaded on the basis of roleName:
Admin role: for the module-level page, the Date, DDR and ULB filters are loaded.
For the overview page, the Date, DDR, ULB and Service filters are loaded.
Commissioner role: for the module-level page, the Date, ULB and Wards/Blocks filters are loaded.
For the overview page, the Date, ULB and Service filters are loaded.
3. Denomination filter:
The denomination filter has three options to display amounts in a particular format:
Crore
Lakh
Unit
The denomination filter is not applied to percentage and text values (refer to figure 2.10). The type of data is identified by the symbol in the plots of the charts API.
Figure 2.10
Custom Date Filter
If the duration is < 15 days, it displays data day-wise.
If the duration is <= 30 days, it displays data week-wise.
If the duration is > 30 days, it displays data month-wise.
Tabs
Currently, the dashboard has two types of tabs:
Revenue (refer figure: 4.1).
Service (refer figure: 4.1).
Tabs are identified by name in visualizations of config API.
Table Chart with drill-down
In the table response, if the filter key and drillDownChartId hold values, it is a drill-down table.
Cards
Each card header is localized and has an info icon with a tooltip that displays the header and can display a description.
The number of cards in a row and on a page is driven by the backend; the backend provides the row number where each card should be displayed.
Each card contains an options icon with image download and image share options.
Image download and share use the id from vizArray to differentiate each card on a page.
Download and Share (refer to figure 2.9)
1. Download has two options to download the data, i.e. Image and PDF.
Share:
Share creates the Image/PDF, uploads it to S3 using the below API, and returns a file id.
Using the file id, the file is fetched using the below API.
Each S3 image link is shortened using the below API.
5. Configurations
BASE URL: endpoint of the REST API for the dashboard.
FILE Upload: endpoint of the REST API for file upload.
FETCH FILE: endpoint of the REST API for file fetch.
MDMS: endpoint of the REST API to fetch MDMS data.
SHORTEN URL: endpoint of the REST API to shorten URLs, used for sharing via email/WhatsApp.
CHART COLOR CODE: color code object for all charts.
MODULE LEVEL: for global filters; contains service names and filter keys.
SERVICES: for the global service filter.
6. Upload Localization keys:
code: pre-defined key for back-end.
message: message contains the value for the key.
module: rainmaker-dss
locale: contains the locale data.
(More details are to be documented by the eGov team.)
Module name: rainmaker-dss
NPM Module Used
Steps to set up DSS locally
Step 1: To run standalone, switch to the dss-dashboard folder.
Step 2: Get the below details from the environment website and update localStorage in the browser:
Employee.tenant-id, Employee.user-info, Employee.token, Employee.module, Employee.locale, localization_en_IN, locale
Step 3: Run yarn install and yarn start to start working on DSS in the local setup.
DSS Features Enhancements V2:
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Refer to the billing-service config for MDMS data; the amendment makes use of the same data set.
Refer to the MDMS data config from here.
Refer to the integration details and explanation.
See the full configuration in detail.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
The table chart visualization has the standard Material UI data table features like search, sort, etc.
Github link for config:
| Title | Link |
| --- | --- |
| Collection Service | |
| Billing Service | |
| API Swagger Documentation | |

| Title | Link |
| --- | --- |
| /apportion-service/v2/bill/_apportion | |
| /apportion-service/v2/demand/_apportion | |
| /amendment/_create, _update | |
| Field from Payments | Field from Receipts | Remark |
| --- | --- | --- |
| Payments.Id | --- | Set as UUID |
| Payments.tenantId | Receipt.tenantId | |
| Payments.totalDue | --- | Total due for the payment is calculated as totalAmount from the bill minus the amount from Receipt.instrument |
| Payments.totalAmountPaid | Receipt.instrument.amount | |
| Payments.transactionNumber | Receipt.instrument.transactionNumber | |
| Payments.transactionDate | Receipt.receiptDate | |
| Payments.paymentMode | Receipt.instrument.instrumentType.name | |
| Payments.instrumentDate | Receipt.instrument.instrumentDate | |
| Payments.instrumentNumber | Receipt.instrument.instrumentNumber | |
| Payments.instrumentStatus | Receipt.instrument.instrumentStatus | |
| Payments.ifscCode | Receipt.instrument.ifscCode | |
| Payments.additionalDetails | Receipt.Bill.additionalDetails | |
| Payments.paidBy | Receipt.Bill.paidBy | |
| Payments.mobileNumber | Receipt.Bill.mobileNumber | If mobileNumber from Receipt.Bill is null, it has to be set to some value, e.g. “NA”. Note: Payments.mobileNumber should not be null |
| Payments.payerName | Receipt.Bill.payerName | |
| Payments.payerAddress | Receipt.Bill.payerAddress | |
| Payments.payerEmail | Receipt.Bill.payerEmail | |
| Payments.payerId | Receipt.Bill.payerId | |
| Payments.paymentStatus | --- | Based on paymentMode, the paymentStatus is set. If paymentMode is ONLINE or CARD, then paymentStatus is set to DEPOSITED; otherwise it is set to NEW |
| Payments.auditDetails.createdBy | Receipt.auditDetails.createdBy | |
| Payments.auditDetails.createdTime | Receipt.auditDetails.createdTime | |
| Payments.auditDetails.lastModifiedBy | Receipt.auditDetails.lastModifiedBy | |
| Payments.auditDetails.lastModifiedTime | Receipt.auditDetails.lastModifiedTime | |
| Payments.paymentDetails.Id | --- | Set as UUID |
| Payments.paymentDetails.tenantId | Receipt.tenantId | |
| Payments.paymentDetails.totalDue | --- | Total due for paymentDetails is calculated as totalAmount from the bill minus the amount from Receipt.instrument |
| Payments.paymentDetails.totalAmountPaid | Receipt.instrument.amount | |
| Payments.paymentDetails.receiptNumber | Receipt.receiptNumber | |
| Payments.paymentDetails.manualReceiptNumber | Receipt.Bill.billDetails.manualReceiptNumber | |
| Payments.paymentDetails.manualReceiptDate | Receipt.Bill.billDetails.manualReceiptDate | |
| Payments.paymentDetails.receiptDate | Receipt.receiptDate | |
| Payments.paymentDetails.receiptType | Receipt.Bill.billDetails.receiptType | |
| Payments.paymentDetails.businessService | Receipt.Bill.billDetails.businessService | |
| Payments.paymentDetails.additionalDetail | Receipt.Bill.additionalDetail | |
| Payments.paymentDetails.auditDetail | --- | auditDetail for paymentDetail is the same as the payment auditDetail |
| Payments.paymentDetails.billId | --- | billId is extracted based on id in the egbs_billdetail_v1 table, where id in egbs_billdetail_v1 is Receipt.Bill.billDetails.billNumber |
| Payments.paymentDetails.bill | --- | Based on the billId, tenantId and service, the bill is searched by calling the Billing service API and set on Payments.paymentDetails.bill |
| Payments.paymentDetails.bill.billDetails.amountPaid | Receipt.instrument.amount | For each amountPaid in billDetails, its value is set from Receipt.instrument.amount |
| Property | Value | Remarks |
| --- | --- | --- |
| collection.receipts.search.paginate | true/false | When set to true, the receipt search results are returned in pages containing a certain number of records |
| is.payment.search.uri.modulename.mandatory | TRUE/FALSE | Makes the module name in the URI path mandatory |
| collection.receipts.search.default.size | A number (say 30) | Returns 30 records at a time; the next 30 results are on the next page |
| collection.is.user.create.enabled | true/false | When set to true, enables the creation of a user along with receipt creation |
| receiptnumber.idname | | Used for the creation of the receipt number using the ID-GEN service |
| receiptnumber.servicebased | true/false | If set to false, the default state-level format is used for the receipt number; if set to true, the format for the receipt number has to be mentioned in MDMS |
| receiptnumber.state.level.format | [cy:MM]/[fy:yyyy-yy]/[SEQ_COLL_RCPT_NUM] | Default state-level format for the receipt number |
| collection.payments.search.paginate | true/false | When set to true, the payment search results are returned in pages containing a certain number of records |
| egov.collection.payment-create | | The Kafka topic to which the record is pushed when a payment is created |
| egov.collection.payment-cancel | | The Kafka topic to which the record is pushed when a payment is cancelled |
| egov.collection.payment-update | | The Kafka topic to which the record is pushed when a payment is updated |
| Title | Link |
| --- | --- |
| Billing-service | |
| Id-Gen service | |
| url-shortening | |
| MDMS | |

| Title | Link |
| --- | --- |
| /payments/_create | |
| /payments/_update | |
| /payments/_workflow | |
```
PUT dss-payment_v2
{}  // add the mapping file content here (mapping.json, as attached below)
```
| Sno | Property name | Value | Description |
| --- | --- | --- | --- |
| 1 | es.direct.push | true | The transformed data is pushed to the ES index directly |
| 2 | es.direct.push | false | The transformed data remains on the egov-dss-ingest-enriched topic |
| SNo | Method | End Point / Command | Body |
| --- | --- | --- | --- |
| 1 | POST | {host}/dashboard-ingest/ingest/migrate/paymentsindex-v1/v2 | {"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}} |
| 2 | CURL | curl -X POST https://dev.digit.org/dashboard-ingest/ingest/migrate/paymentsindex-v1/v2 -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'postman-token: d83fc136-116d-265f-3b83-ea41e3d5bb57' -d '{"RequestInfo":{"authToken":"2ba70924-1bba-4a9b-b55d-2e9471bf3081"}}' | |
| S.No. | API | Action id | Roles |
| --- | --- | --- | --- |
| 1 | /localization/messages/v1/_search | 1531 | SUPERUSER, EMPLOYEE, CITIZEN, GRO, DGRO |
| 2 | /egov-mdms-service/v1/_search | 954 | LOA_CREATOR, SUPERUSER, WO_CREATOR, AE_CREATOR, WORKS_MASTER_CREATOR |
| 3 | /dashboard-analytics/dashboard/getDashboardConfig/propertytax | 1892 | STADMIN |
| 4 | /dashboard-analytics/dashboard/getDashboardConfig/home | 1889 | STADMIN |
| 5 | /dashboard-analytics/dashboard/getDashboardConfig/tradelicense | 1893 | STADMIN |
| 6 | /dashboard-analytics/dashboard/getDashboardConfig/pgr | 1894 | STADMIN |
| 7 | /dashboard-analytics/dashboard/getDashboardConfig/ws | 2010 | STADMIN |
| 8 | /dashboard-analytics/dashboard/getChartV2 | 1890 | STADMIN, EMPLOYEE |
| Property | Value | Description |
| --- | --- | --- |
| bs.businesscode.demand.updateurl | | Each module's application calculator should provide its own update URL; if not present, a new bill is generated without making any changes to the demand |
| bs.bill.billnumber.format | BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}] | IdGen format for the bill number |
| bs.amendment.idbs.bill.billnumber.format | BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}] | |
| is.amendment.workflow.enabled | true/false | Enable/disable the bill amendment workflow |
| Title | Link |
| --- | --- |
| Id-Gen service | |
| url-shortening | |
| MDMS | |

| Title | Link |
| --- | --- |
| /demand/_create, _update, _search | |
| /bill/_fetchbill, _search | |
| /amendment/_create, _update | |
| TaxHead | Amount | Order | Full Payment (2000) | Partial Payment 1 (1500) | Partial Payment 2 (750) | Partial Payment 2 with rebate (500) |
| --- | --- | --- | --- | --- | --- | --- |
| Pt_tax | 1000 | 6 | 1000 | 1000 | 750 | 750 |
| AdjustedAmt | | | 1000 | -250 | -750 | -750 |
| RemainingAMTfromPayableAMT | | | 0 | 0 | 0 | 0 |
| Penalty | 500 | 5 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 1000 | 250 | | |
| Interest | 500 | 4 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 1500 | 750 | | |
| Cess | 500 | 3 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 2000 | 1250 | | |
| Exm | -250 | 1 | -250 | -250 | | |
| AdjustedAmt | | | -250 | 250 | | |
| RemainingAMTfromPayableAMT | | | 2250 | 1750 | | |
| Rebate | -250 | 2 | -250 | -250 | | |
| AdjustedAmt | | | -250 | 250 | | |
| RemainingAMTfromPayableAMT | | | 2500 | 750 | | |
| TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose |
| --- | --- | --- | --- | --- | --- |
| Pt_tax | 1000 | 2014 | 2015 | 6 | Current |
| AdjustedAmt | 0 | | | | |
| Penalty | 500 | 2014 | 2015 | 5 | Current |
| AdjustedAmt | 0 | | | | |
| Interest | 500 | 2014 | 2015 | 4 | Current |
| AdjustedAmt | 0 | | | | |
| Cess | 500 | 2014 | 2015 | 3 | Current |
| AdjustedAmt | 0 | | | | |
| Exm | -250 | 2014 | 2015 | 1 | Current |
| AdjustedAmt | 0 | | | | |
| TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose |
| --- | --- | --- | --- | --- | --- |
| Pt_tax | 1000 | 2014 | 2015 | 6 | Arrear |
| AdjustedAmt | 0 | | | | |
| Pt_tax | 1500 | 2015 | 2016 | 6 | Current |
| AdjustedAmt | 0 | | | | |
| Penalty | 600 | 2014 | 2015 | 5 | Arrear |
| AdjustedAmt | 0 | | | | |
| Penalty | 500 | 2015 | 2016 | 5 | Current |
| AdjustedAmt | 0 | | | | |
| Interest | 500 | 2014 | | 4 | Arrear |
| AdjustedAmt | 0 | | | | |
| Cess | 500 | 2014 | | 3 | Arrear |
| AdjustedAmt | 0 | | | | |
| Exm | -250 | 2014 | | 1 | Arrear |
| AdjustedAmt | 0 | | | | |
| Parameter Name | Description |
| --- | --- |
| topic | Holds the name of the Kafka topic on which the data is received |
| dataContext | Context name which needs to be set for further actions in the pipeline |
| dataContextVersion | Version of the data structure, since there might be differently structured data at different points in time |

| Parameter Name | Description |
| --- | --- |
| id | Unique identifier for the configuration within the configuration document |
| businessType | Defines which kind of domain/service the data is related to. Based on this business type, the query and enhancements are decided |
| indexName | Based on the business type, the index name defines which index has to be queried to get the enhancements done |
| query | The query to execute to get the domain-level object |
| targetReferences, sourceReference | Fields which are variables used to get the domain-level objects; the variables and where their values have to be picked from are documented here |
| Parameter Name | Description |
| --- | --- |
| Key (e.g. totalApplication) | The visualization code. This key is referred to in further visualization configurations and is used by the client application to indicate which visualization is needed for display |
| chartName | The name of the chart, used as a label on the dashboard. The name of the chart is a detailed name; in this configuration it is the localization code that will be used by the client side |
| queries | Some visualizations are derived from a single data source, while others are derived from different data sources combined together to get a meaningful representation. The aggregation queries used to fetch the right data in the right aggregated format are configured here |
| queries.module | The module/domain level on which the query should be applied. Property Tax is PT, Trade License is TL. If the query applies across all modules, the module has to be defined as COMMON |
| queries.indexName | The name of the index upon which the query has to be executed |
| queries.aggrQuery | The aggregation query itself. Based on the module and the index name specified, this query is attached to the filter part of the complete search request and then executed against that index |
| queries.requestQueryMap | The client request carries certain fields to be filtered. The parameters specified in the client request differ from the parameters in the indexed documents; this mapping maps the request parameters to the Elasticsearch document parameters |
| queries.dateRefField | Each module has a separate index, and each index has its own date fields. When a date filter is applied to a visualization, it has to be applied against that index's own date reference field; this parameter records which date field belongs to which index |
| chartType | Defines the type of chart/visualization that this data should represent. Available chart types: metric (the aggregated amount/value for records filtered by the aggregate ES query); pie (aggregated data on grouping; can be used to represent a line graph, bar graph, pie chart or donut); line (data representation on date histograms or date groupings); perform (represents grouped data performance-wise); table (a form of plots and values with headers based on the grouping, and a list of key-value pairs); xtable (an advanced form of table with the additional capability of dynamically adding header values) |
| valueType | The values sent to plot might be a percentage, an amount or just a count. To represent them and differentiate numbers from amounts from percentages, this field indicates the type of value that the visualization will be sending |
| action | Some visualizations are not just aggregations on a data source; in some cases a post-aggregation computation has to be done. For example, for the Top 3 Performing ULBs, the target and total collection are obtained and then the percentage is calculated. The action to be performed on the obtained data is defined in this parameter |
| documentType | The type of document upon which the query has to be executed |
| drillChart | If there is a drill down on the visualization, the code of the drill-down visualization is added here. This is used by the client service to manage drill-downs |
| aggregationPaths | All queries have aggregation names in them. To fetch the value out of each aggregation response, the aggregation name in the query is used; these aggregation paths hold those names |
| _comment | Visualization information maintained in this field, used to display information on the “i” icon of each visualization |
| Parameter Name | Description |
| --- | --- |
| name | Name of the dashboard to be displayed as the page heading |
| id | Unique identifier of the dashboard, used later for querying each of its visualizations |
| isActive | Active indicator which can be used to quickly disable a dashboard if required |
| style | Style of the dashboard: whether it should be linear or tabbed |
| visualizations | The list of visualizations to be displayed in the dashboard |
| visualizations.row | The row identifier for each visualization |
| visualizations.name | The name of the individual visualization |
| visualizations.vizArray | The list of charts within the visualization |
| visualizations.vizArray.id | A group of charts is given an id to have a placement on the dashboard; this unique identifier is maintained in this field |
| visualizations.vizArray.name | A group of charts is given a name that can be displayed for the group in that row of the dashboard |
| visualizations.vizArray.dimensions | Each group of charts is given a dimension, based on which it is placed in a specific row in the dashboard |
| visualizations.vizArray.vizType | As multiple charts are grouped into one visualization, the type of visualization needs to be specified to indicate to the client application what goes inside each visualization and the charts inside it. vizTypes used for dashboards other than the home page: metric-collection (a single metric or a group of metric chart types), performing-metric (the perform chart type), chart (pie, donut, table, bar, horizontal bar, line). vizTypes used for the home page: collection (full-width UI style), module (specific-width UI style) |
| visualizations.vizArray.noUnit | The value types of these charts differ: some are numbers, some are amounts, some are percentages. For amounts there is a requirement to display in Lakhs, Crores and Units; this boolean indicates to the client application whether to display these units or not |
| visualizations.vizArray.isCollapsible | Boolean indicating whether the card/visualization is collapsible |
| visualizations.vizArray.ref | This object contains url (mandatory), logoUrl (optional) and type (optional) |
| visualizations.vizArray.charts | The list of individual charts inside a visualization group |
| visualizations.vizArray.charts.id | Individual chart number identifier to indicate the uniqueness of the chart |
| visualizations.vizArray.charts.name | Name of the chart, which can be a header label for charts within a visualization |
| visualizations.vizArray.charts.code | Code of the chart, which has to be sent to the server side to get the data for the visualization |
| visualizations.vizArray.charts.chartType | Type of chart to represent the resulting data: bar, horizontalBar, line, donut, pie, metric, table |
| visualizations.vizArray.charts.filters | Filters that can be applied to the visualization, and the fields which are filterable |
| visualizations.vizArray.charts.headers | In some cases there are headers which can be a title or additional information for the chart data; this field is kept open to accommodate information sent along with the chart data itself |
| Parameter Name | Description |
| --- | --- |
| roles | List of roles available in the system |
| roles._comment | Role description and the reason this role has an entry in this configuration; sums up what is to be enabled |
| roles.roleId | Unique identifier of the role for which access is being given |
| roles.roleName | Name of the role for which access is being given |
| roles.isSuper | Boolean flag defining whether the role is a super user |
| roles.orgId | Organization to which the role belongs |
| roles.dashboards | List of dashboards enabled for the role |
| roles.dashboards.name | Name of the individual dashboard which has been enabled |
| roles.dashboards.id | Identifier of the individual dashboard which has been enabled |
| Environment Variables | Description |
| --- | --- |
| egov.apportion.default.value.order | If set to true, the negative amounts are apportioned first, irrespective of tax head order |
The inbox service is an aggregation service that aggregates data of municipal services and workflow based on given complex search criteria, and returns applications and workflow data in a paginated manner. The service also returns the total count matching the search criteria.
This service allows searching both the module objects as well as the processInstance (workflow record) based on the provided criteria for any of the municipal services. For this, it uses a module-specific configuration stored in application.properties as a key-value map, where the key is the businessService name and the value is the configuration map. A sample configuration is sketched below -
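The sketch below shows such a configuration for one business service, using only the keys explained in the following paragraphs; the business service key and the search URL are illustrative placeholders.

```json
{
  "PT.CREATE": {
    "searchPath": "http://property-services.egov:8080/property-services/property/_search",
    "dataRoot": "Properties",
    "applNosParam": "acknowldgementNumber",
    "businessIdProperty": "acknowldgementNumber",
    "applsStatusParam": "status"
  }
}
```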
Here, the keys of the config map are the business services of the PT module for which the inbox has to be configured. Inside the search definition -
searchPath - Points to the search URL of the municipal module.
dataRoot - The search response key we get from the module search, e.g. in the Property module, the search response returns response objects inside the “Properties” key.
applNosParam - The parameter with which the workflow search is called once the module objects are retrieved from the module search. This parameter is the field on which the module table is joined with the workflow process instance table, e.g. in the case of the Property module it is “acknowldgementNumber”.
businessIdProperty - The parameter with which we search module objects in case of an empty moduleSearchCriteria, by performing the workflow search first. Again, this parameter is the field on which the module table and the workflow process instance table are joined, e.g. in the case of the Property module it is “acknowldgementNumber”.
applsStatusParam - The application status field name for the module upon which the search is being performed, e.g. in the case of the Property module, it is “status”.
To provide pagination and total count across multiple modules, the inbox service is integrated with the searcher. The searcher provides the list of ids and the total count of applications; based on those, further enrichment is done by the inbox service and the results are returned by the API. Sample configuration links for the PT and TL modules are attached below:
Details will be updated soon...
DIGIT offers key municipal services such as Public Grievance & Redressal, Trade License, Water & Sewerage, Property Tax, Fire NOC, and Building Plan Approval.
Configure billing services to allow bill amendments
Consumers sometimes need additional amounts (amendments) added to their bills due to reasons external to the system. The addition of amounts happens with respect to the consumer code of the entity in the product (PT, WS, etc.); any unpaid demand in the system is a candidate for amendments.
Amendment mainly works with two types of functionality as follows:
Amendment
Demand
The main objective of the Bill-Amendment module is to create Credit/ Debit Notes against the bills for consumers who need an additional amount to be added to their bill.
Create Amendment
Search Amendment
Update Demand
Update Amendment
Bill Amendment provides a separate flow to enable workflow and validation for the process of adding an additional amount to existing demands, which until this point could be done only through the respective modules. An amendment is allowed only when the reason arises outside the system to add or reduce the amount from the existing bill belonging to an entity. The allowed reasons are listed below -
Court case settlement
One time waiver
Write-offs
DCB correction (Old demands in paid status)
Remission for Property Tax
There are certain prerequisites to create an amendment:
Presence of demand in the billing system
One of the reasons listed above
Valid document proof for the reason
No other Amendment already in workflow
The process of adding Amendment is as follows
There are two scenarios on how an amendment will be completed which is based on the paid status of the existing demands in the system.
1. When the demand is unpaid/partially paid
Create a demand (or use an existing demand) with demand detail → DD1.
Do not pay the bill, or make a partial payment.
Create an amendment for the same consumer code (with demand detail → DD2).
Approve the amendment; the response should return an amendment with status CONSUMED.
Search the demand or fetch the bill for the consumer code; the demand/bill should contain the demand details of the demand and the amendment together, DD1 and DD2, in the same demand/bill.
2. When the demand is completely paid
Create a demand and make the complete payment, or choose a consumer code which is fully paid.
Create an amendment (with demand detail → DD1).
Approve the amendment; the response should be APPROVED this time.
Create a new demand for the consumer code (with demand detail → DD2); the demand response should contain two demand details, DD1 and DD2, saved to the demand.
Now the amendment search will return CONSUMED status after the demand is created.
Does not impact any other functionality other than adding demand details to demands on APPROVAL.
IMPACTED BY:
Existence of demands in the system.
The Reap Benefit system is one of the vendors that provides chatbot services, using Turn as the backend service to communicate with citizens through the chatbot. As part of the requirement, a complaint needs to be created in the DIGIT platform whenever a citizen raises a complaint through the Reap Benefit chatbot.
The turn-io-adapter service is a wrapper that transforms the Reap Benefit request format into the DIGIT PGR request format. This service exposes a _transform API, which constructs the required PGR request from the request message sent by the Reap Benefit system. The Reap Benefit system consumes the _transform API to communicate with the DIGIT PGR module.
In this process, once a complaint is created, a WhatsApp message with a tracking link is sent to the citizen. Whenever a ULB employee takes action on the complaint, a WhatsApp message is sent to the citizen.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Rainmaker-PGR service is running
Complaints can be created in the DIGIT platform from the Reap Benefit chatbot.
A WhatsApp message is sent to the citizen when an employee performs an action on the complaint.
Please deploy the following builds
rainmaker-pgr-db:v1.1.3-bb2961cf-13
turn-io-adapter:v1.1.3-bb2961cf-19
egov-searcher:v1.1.3-d43c421c-5
nlp-engine:v1.0.0-c3889d14-10
Note: Please refer to the following URL for the nlp-engine technical documentation.
Frontend commits
1) turn-io-adapter: "http://turn-io-adapter.egov:8080/" (In service host configuration)
2) Add /turn-io-adapter/_transform to the egov-mixed-mode-endpoints-whitelist configuration.
3) Once done with step 2, restart the zuul pod.
We need to add the name field to the complaint category master in PGR. Please find the below link for the data.
Push the localization data for all the locality data with the module as rainmaker-chatbot. A sample localization object is given below.

```json
{
  "code": "SC1",
  "message": "Azad Nagar - WARD_1",
  "module": "rainmaker-chatbot",
  "locale": "en_IN"
}
```
NA
This is the sample request for the _transform API to create a complaint.
Turn-io-adapter is integrated with the Rainmaker-PGR application. The turn-io-adapter application internally invokes the rainmaker-pgr service to generate the complaint.
The Turn.io application calls turn-io-adapter/_transform to generate the complaint and takes the data from PGR.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.