APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do's and don'ts for following those principles.
Always define the YAML for your APIs first, using the OpenAPI 3 specification (https://swagger.io/specification/)
API paths should be standardised as follows (a minimal controller sketch illustrating these conventions follows this list):
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to update an existing entity
/{service}/{entity}/{version}/_search: This endpoint should be used to search for entities based on given criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to return a count of entities that match given search criteria
Always use POST for each of the endpoints
Take most search parameters in the POST body only
If query params need to be supported for search, make sure the same parameters are also accepted in the POST body; the POST body should take priority over query params
Provide additionalDetails objects in the _create and _update APIs so that custom requirements can use these fields
Each API should have a RequestInfo object in the request body at the top level
Each API should have a ResponseInfo object in the response body at the top level
Keep mandatory fields to a minimum in the APIs
minLength and maxLength should be defined for each attribute
Read-only fields should be called out
Use common models already available in the platform in your APIs (e.g. the RequestInfo and ResponseInfo objects mentioned above)
For receiving files in an API, don’t use binary file data. Instead, accept the file store ids
If only one file is to be uploaded, no persistence is needed, and no additional JSON data is to be posted, you can consider using direct file upload instead of a filestore id
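Taken together, these conventions map naturally onto a REST controller. The sketch below is a minimal illustration only, assuming a hypothetical tl-services service with a license entity; the wrapper and entity classes are simplified stand-ins for the common models (RequestInfo, ResponseInfo, etc.) the platform already provides.

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Stand-ins for the platform's common models; fields elided.
class RequestInfo { /* apiId, ver, ts, userInfo, ... */ }
class ResponseInfo { /* apiId, ver, ts, status, ... */ }

class TradeLicense {
    public String id;
    public String tradeName;
    public Object additionalDetails; // free-form object for custom requirements
}

// RequestInfo sits at the top level of every request body
class LicenseRequest {
    public RequestInfo requestInfo;
    public TradeLicense license;
}

class LicenseSearchRequest {
    public RequestInfo requestInfo;
    public List<String> ids; // search criteria travel in the POST body
}

// ResponseInfo sits at the top level of every response body
class LicenseResponse {
    public ResponseInfo responseInfo;
    public List<TradeLicense> licenses = new ArrayList<>();
}

@RestController
public class TradeLicenseController {

    // /{service}/{entity}/{version}/_create -- always a POST
    @PostMapping("/tl-services/license/v1/_create")
    public LicenseResponse create(@RequestBody LicenseRequest request) {
        LicenseResponse response = new LicenseResponse();
        response.licenses.add(request.license); // persist the entity here
        return response;
    }

    // /{service}/{entity}/{version}/_search -- criteria in the POST body;
    // if query params are also supported, the body takes priority over them
    @PostMapping("/tl-services/license/v1/_search")
    public LicenseResponse search(@RequestBody LicenseSearchRequest criteria) {
        return new LicenseResponse(); // look up matching entities here
    }
}
```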
Details coming soon...
A Kafka producer publishes messages on a given topic. A Kafka consumer is a program that consumes the published messages by subscribing to the topic; a single consumer can subscribe to multiple topics. Whenever the topic receives a new message, the consumer can process it by calling defined functions. The following snippet defines a class called TradeLicenseConsumer containing a function listen(), which is subscribed to the save-tl-tradelicense topic and calls a function to generate a notification whenever the consumer receives a new message.
```java
import java.util.HashMap;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class TradeLicenseConsumer {

    private TLNotificationService notificationService;

    @Autowired
    public TradeLicenseConsumer(TLNotificationService notificationService) {
        this.notificationService = notificationService;
    }

    @KafkaListener(topics = {"save-tl-tradelicense"})
    public void listen(final HashMap<String, Object> record,
                       @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        notificationService.sendNotification(record);
    }
}
```
The @KafkaListener annotation is used to create a consumer. Whenever a function has this annotation, it acts as a Kafka consumer, and each message goes through the flow defined inside that function. The topic name should be picked up from the application.properties file. This can be done as shown below:
```java
@KafkaListener(topics = {"${persister.update.tradelicense.topic}"})
```
where persister.update.tradelicense.topic is the key for the topic name in application.properties.
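For example, application.properties would then contain an entry like the one below (the topic name here is illustrative):

```properties
persister.update.tradelicense.topic=update-tl-tradelicense
```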
Whenever a new message is published on this topic, it is consumed by the listen() function, which calls sendNotification() with the message as the argument. Deserialization is controlled by the following two properties in application.properties:
```properties
spring.kafka.consumer.value-deserializer
spring.kafka.consumer.key-deserializer
```
The first property sets the deserializer for the value, while the second one sets it for the key. Depending on the deserializer we have set, we can expect the argument in that format in our consumer function. For example, if we set the value deserializer to HashMapDeserializer and the key deserializer to StringDeserializer like below:
```properties
spring.kafka.consumer.value-deserializer=org.egov.tracer.kafka.deserializer.HashMapDeserializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```
Then we can write our consumer function expecting a HashMap as an argument, like below:
```java
public void listen(final HashMap<String, Object> record) {...}
```
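For completeness, the producing side mentioned at the start of this section could look like the sketch below, using Spring Kafka's KafkaTemplate; the class and method names here are hypothetical:

```java
import java.util.HashMap;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical counterpart to the consumer above: whatever is sent here is
// what listen() receives once the configured deserializers have run.
// Serialization on this side is configured via the mirror-image properties
// spring.kafka.producer.value-serializer / spring.kafka.producer.key-serializer.
@Service
public class TradeLicensePublisher {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    public void push(String topic, HashMap<String, Object> tradeLicense) {
        kafkaTemplate.send(topic, tradeLicense);
    }
}
```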
This section contains details on how to customize the DIGIT user interface and services to meet local government or user requirements effectively.
Details coming soon...