This documentation covers building a new dashboard in the DSS and defines the configurations required for the analytics service. The analytics microservice is responsible for building, fetching, aggregating, and computing the data in Elasticsearch into a consumable data response, which is later used for visualizations and graphical representations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of JSON
Prior Knowledge of Elasticsearch Query Language
Prior Knowledge of Kibana
DSS setup
Adding new roles for dashboards
Adding a new dashboard
Adding new visualizations in the existing dashboard
Adding new charts for visualizations
1. Adding new roles for dashboards
To add a new role, make changes in the RoleDashboardMappingsConf.json configuration file (roles node) as below. In the roles array, every JSON object is unique, based on its id. The name of the role is defined in the roleName attribute.
To assign a dashboard to a particular role, add the id and name of the dashboard in the dashboard array. The dashboard id is unique and references the dashboard configured in the MasterDashboardConfig.json file.
Any number of roles & dashboards can be added
Below is a sample for adding a new role object:
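A minimal sketch of such a role entry is shown below, assuming only the attributes described above (id, roleName, and a dashboards array holding the dashboard id and name); the values are placeholders and the exact key names should be checked against the existing file:

```json
{
  "roles": [
    {
      "id": 10,
      "roleName": "STATE ADMIN",
      "dashboards": [
        {
          "name": "State Dashboard",
          "id": "state-dashboard"
        }
      ]
    }
  ]
}
```

The dashboard id used here must match the id of a dashboard defined in MasterDashboardConfig.json.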
2. Adding a new Dashboard
To add a new dashboard, make changes in the MasterDashboardConfig.json (dashboards node) as below.
Add a new JSON object in the dashboards array. Add the dashboard name in the name attribute. The id should be unique, as it is used for assigning a role to the dashboard. Visualizations are covered below.
Add a new dashboard in the dashboards array as given below:
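A minimal sketch of a dashboard entry, using only the attributes mentioned above; the name and id values are placeholders:

```json
{
  "dashboards": [
    {
      "name": "DSS_FINANCE_DASHBOARD",
      "id": "finance",
      "vizArray": []
    }
  ]
}
```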
3. Adding new visualizations in the existing dashboard
To add new visualizations, make changes again in the MasterDashboardConfig.json (vizArray node) as below. Add all the visualizations in the vizArray array and set the visualization name in the name attribute. Each vizArray entry contains the name of the visualization, the vizType (visual type), noUnit, and charts.
The charts array contains the chart API configuration query details. The id references the corresponding key in the chartApiConf.json file and is used to fetch the required data from the Elasticsearch index, and the name attribute refers to the name of the chart in localization.
vizArray holds multiple visualizations, for example:
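A minimal sketch of one vizArray entry, using only the attributes described above; the names, vizType value, and chart id are placeholders:

```json
{
  "vizArray": [
    {
      "name": "DSS_TOTAL_COLLECTION",
      "vizType": "metric-collection",
      "noUnit": true,
      "charts": [
        {
          "id": "totalCollection",
          "name": "DSS_TOTAL_COLLECTION"
        }
      ]
    }
  ]
}
```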
4. Adding new charts for visualizations
To add a new chart, chartApiConf.json has to be modified as shown below. A new chartid (the key of the JSON object) has to be added to the chart node object. The chartid object contains the chart name, chart type, valueType, documentType, aggregationPaths, and queries attributes.
Types of the chart: Metric, Pie, Line, Table, and xtable
aggregationPaths: the paths from which the query result is taken.
valueType: determines how the result is shown in the UI. The supported value types are Amount, percentage, and number.
queries: this array contains the module, requestQueryMap (request params of the API), dateRefField (the field on which the date filter is applied), indexName, and aggrQuery. Multiple module queries can be added to a single chart.
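A minimal sketch of one chart entry, assuming the attribute names above; the chart key, index name, field paths, the exact nesting, and the stringified aggregation query are placeholders and should follow the existing file:

```json
{
  "totalCollection": {
    "chartName": "DSS_TOTAL_COLLECTION",
    "chartType": "metric",
    "valueType": "Amount",
    "documentType": "_doc",
    "aggregationPaths": [
      "Total Collection"
    ],
    "queries": [
      {
        "module": "COMMON",
        "indexName": "dss-collection_v2",
        "requestQueryMap": "{\"tenantId\":\"dataObject.tenantId\"}",
        "dateRefField": "dataObject.paymentDetails.receiptDate",
        "aggrQuery": "{\"aggs\":{\"Total Collection\":{\"sum\":{\"field\":\"dataObject.totalAmount\"}}}}"
      }
    ]
  }
}
```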
For more information please refer to the reference documents listed below.
A decision support system (DSS) is a composite tool that collects, organizes, and analyzes business data to facilitate quality decision-making for management, operations, and planning. A well-designed DSS aids decision-makers in compiling a variety of data from many sources: raw data, documents, and personal knowledge from employees, management, executives, and business models. DSS analysis helps organizations identify and solve problems, and make decisions.
This document explains the steps on how to define the configurations & set up the new dashboard in the DSS.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Spring boot
Prior knowledge of Kafka
Prior knowledge of Elasticsearch
Prior knowledge of Kibana
Prior knowledge of EQL (Elastic Query Language)
Prior knowledge of JSON
Creating a DSS dashboard schema
DSS ingest service APIs
Ingest service configurations
Creating a Kafka sync connector to push the data to Elasticsearch
1. Creating a DSS dashboard schema
Before indexing into the DSS collection v2 index, create the schema in Elasticsearch using the Kibana query given in the file below.
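The referenced file is not reproduced here; purely for illustration, a schema-creation request run from Kibana Dev Tools might look like the sketch below, where the index name, settings, and mappings are placeholders to be replaced with the actual DSS collection v2 schema:

```json
PUT /dss-collection_v2
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "dataObject": { "type": "object", "dynamic": true },
      "domainObject": { "type": "object", "dynamic": true }
    }
  }
}
```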
2. DSS ingest service API
3. Ingest service configurations
Transform collection schema for V2
This transform collection v1 configuration file is used to map the incoming data. The mapped data is placed inside the data object of the DSS collection v2 index.
Here, $i is a variable that is incremented for each record in paymentDetails, and $j is incremented for each record in billDetails.
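The actual mapping file is not reproduced here; purely as an illustration of the idea, a JSONPath-style source-to-target mapping using the $i and $j placeholders could look like the sketch below, with hypothetical field names:

```json
{
  "dataObject.tenantId": "$.tenantId",
  "dataObject.paidAmount": "$.paymentDetails[$i].totalAmountPaid",
  "dataObject.billAmount": "$.paymentDetails[$i].bill.billDetails[$j].amount"
}
```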
Enrichment Domain Configuration
This configuration defines and directs the Enrichment Process that the data goes through.
For example, if the incoming data belongs to the Collection module, the Collection domain config is picked. Based on the Business Type specified in the data, the right config is chosen and the final enriched data is placed inside the domain object.
In order to enrich the Collection data, the domain index specified in the configuration is queried with the right arguments and the response data is obtained, transformed and set.
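The actual configuration file is not reproduced here; the snippet below is only an illustrative sketch, assuming an entry that names the domain, the business types it covers, and the domain index queried for enrichment. All attribute names and values are hypothetical:

```json
{
  "domain": "COLLECTION",
  "businessTypes": ["PT", "TL", "WS"],
  "enrichment": {
    "indexName": "property-services",
    "lookupField": "dataObject.consumerCode",
    "targetPath": "domainObject"
  }
}
```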
Topic Context Configuration
Topic Context Configuration is an outline to define which data is received on which Kafka Topic.
Indexer Service and many other services send out data on different Kafka topics. If the Ingest Service is expected to receive that data and pass it through the pipeline, the context and the version of the data being received have to be set. This configuration identifies which Kafka topic the data was consumed from and what the mapping for that data is.
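For illustration, such a topic-to-context mapping could look like the sketch below; the topic name, context, and attribute names are assumptions:

```json
[
  {
    "topic": "egov.collection.payment-create",
    "dataContext": "collection",
    "dataContextVersion": "v1"
  }
]
```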
JOLT Domain Transformation Schema
JOLT is a JSON-to-JSON transformation library. It is used to change the structure of the data and transform it in a generic way.
Transformation schemas are written for each data context, and the incoming data is transformed against the corresponding schema to obtain the transformed data.
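For illustration, a minimal JOLT shift spec of the kind used here is shown below; the field names and output paths are placeholders, not the actual DSS schema:

```json
[
  {
    "operation": "shift",
    "spec": {
      "tenantId": "dataObject.tenantId",
      "paymentDetails": {
        "*": {
          "totalAmountPaid": "dataObject.paymentDetails[&1].totalAmountPaid"
        }
      }
    }
  }
]
```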
Validator Schema
Validator Schema is a JSON Schema validation library from Everit. By validating the data against this schema, the service ensures that the data abides by the rules and requirements of the schema which has been defined.
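For illustration, a minimal JSON Schema of the kind Everit validates against is shown below; the field names are placeholders:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "tenantId": { "type": "string" },
    "totalAmount": { "type": "number" }
  },
  "required": ["tenantId"]
}
```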
For Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties or in the environment configuration.
es.push.direct=false
If the DSS collection index data is indexed directly into Elasticsearch through the ingest pipeline (without the Kafka connector), direct push must be enabled in the application properties or in the environment configuration.
es.push.direct=true
4. Creating a Kafka sync connector to push the data to the Elasticsearch
Configure the Kafka topics in the environments or Ingest pipeline application properties as shown below.
To start the indexing, we will create a connector that takes data from the topic and pushes it to the index mentioned in "transforms.TopicNameRouter.replacement". For the Elasticsearch host, the host URL has to be mentioned in "connection.url".
To create the Kafka connector run the below curl command inside the playground pod:
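The exact command is environment-specific; the sketch below assumes the standard Kafka Connect REST API and the Confluent Elasticsearch sink connector, with placeholder hosts, topic, connector name, and index that must be replaced with your environment's values:

```bash
# Placeholder hosts, topic and index - replace with the values for your environment
curl -X POST http://kafka-connect.kafka-cluster:8083/connectors/ \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "dss-collection-es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "connection.url": "http://elasticsearch-data-v1.es-cluster:9200",
      "topics": "egov-dss-ingest-enriched",
      "key.ignore": "true",
      "schema.ignore": "true",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter.schemas.enable": "false",
      "transforms": "TopicNameRouter",
      "transforms.TopicNameRouter.type": "org.apache.kafka.connect.transforms.RegexRouter",
      "transforms.TopicNameRouter.regex": ".*",
      "transforms.TopicNameRouter.replacement": "dss-collection_v2"
    }
  }'
```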
The ingest service is a microservice that runs as a pipeline; it validates, transforms, and enriches the incoming data and pushes it to the Elasticsearch index. The ingest service fetches the data from the index (paymentsindex-v1) that is specified in the indexing service API, and reads the configuration files available with v1.
Description | Link |
---|---|
DSS Backend Configuration Manual | |
DSS Dashboard - Technical Document for UI | |
DSS Technical Documentation | |

All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.