Apache Kafka should be used directly in inventory-storage to implement the domain event pattern, which is needed both for the remote-storage integration and for the new search module. This direct use of Apache Kafka should be considered a special case, as it was for data-import. It is also a special case that a storage module uses more than one data store; however, Kafka is used here not as a data store but as a transport for events exchanged between modules.
- On every create, a message should be sent with the id of the created entity as the partition key, the action name ("CREATE") in the message header, and the entity state in "new"
- On every update, a similar message should be sent, but with the action name "UPDATE" and both the "old" and "new" state of the entity (see below)
- On every delete, a similar message should be sent, but with the action name "DELETE", the JSON of the deleted entity in "old", and an empty "new"
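The event envelope described above can be sketched as follows. This is a minimal illustration, not the module's actual implementation: the exact field names besides "type", "old", and "new" are assumptions, and the real code would use a JSON library and the Kafka producer API rather than string formatting. The notes mention the action name both as a message header and as a "type" value; this sketch places it in the body.

```java
// Sketch: building the domain-event body for create/update/delete.
// "old" holds the entity state before the change, "new" the state after;
// for CREATE "old" is null, for DELETE "new" is null.
public class DomainEvent {
    static String envelope(String type, String oldJson, String newJson) {
        return String.format("{\"type\":\"%s\",\"old\":%s,\"new\":%s}",
                type, oldJson, newJson);
    }

    public static void main(String[] args) {
        System.out.println(envelope("CREATE", "null", "{\"id\":\"abc\"}"));
        System.out.println(envelope("UPDATE",
                "{\"id\":\"abc\",\"v\":1}", "{\"id\":\"abc\",\"v\":2}"));
        System.out.println(envelope("DELETE", "{\"id\":\"abc\"}", "null"));
    }
}
```

The entity id would be supplied separately as the Kafka partition key, so all events for one entity land in the same partition and are consumed in order.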
The Kafka integration should be done according to the POC made by Taras_Spashchenko.
The name of the topic should be inventory.<entity_type> (e.g. inventory.instance).
The action name is sent as the "type" value, e.g. "type": "UPDATE".
NB: for each item, the instance id should be sent as well; it is taken from the corresponding holding (retrieval of the holding is already in place).
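Since an item references a holding rather than an instance directly, the item event has to be enriched with the instance id resolved via that holding. A minimal sketch of this step, with illustrative names only (the class, the map-based lookup, and the field names `holdingsRecordId`/`instanceId` are assumptions; the real module already has the holding retrieval in place):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: attaching the instance id to an item event before it is sent.
public class ItemEventEnricher {
    // Hypothetical stand-in for the existing holdings lookup:
    // holdingsRecordId -> instanceId.
    static final Map<String, String> holdingToInstance = new HashMap<>();

    static Map<String, String> enrich(Map<String, String> item) {
        Map<String, String> enriched = new HashMap<>(item);
        // The item only carries a holdings record id, so the instance id
        // is resolved through the corresponding holding.
        enriched.put("instanceId",
                holdingToInstance.get(item.get("holdingsRecordId")));
        return enriched;
    }

    public static void main(String[] args) {
        holdingToInstance.put("h1", "inst-42");
        Map<String, String> item = new HashMap<>();
        item.put("id", "item-1");
        item.put("holdingsRecordId", "h1");
        System.out.println(enrich(item).get("instanceId")); // inst-42
    }
}
```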