
Is it correct to copy data between microservices?

by
3 votes
Hi all.
Our project uses a microservice architecture (at least we think so). We have applications that manage various entities; let's take a movie list as an example. Communication between the applications is both synchronous (HTTP, RPC) and asynchronous, via events and commands.
Suppose we have microservices that perform the following functionality:
Movie management API
Payment
Video stream output
Recommendations

Each microservice needs different information about the movie.
I was considering putting the movie information into a cache (Redis, Memcached) and having the services read it from there. The problem here is that you lose flexibility: if the storage structure of the movie information changes, you have to update every consumer of that data.
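One way to soften the flexibility problem the question mentions is to embed a schema version in the cached value, so consumers can detect a layout change instead of misreading it. Below is a minimal sketch of that idea; a plain dict stands in for Redis/Memcached, and the key format and field names are invented for illustration:

```python
import json

# In-memory dict standing in for Redis/Memcached (illustration only).
cache = {}

SCHEMA_VERSION = 2  # bumped whenever the movie payload layout changes

def put_movie(movie_id, payload):
    """Store the movie payload together with its schema version."""
    cache[f"movie:{movie_id}"] = json.dumps(
        {"schema": SCHEMA_VERSION, "data": payload}
    )

def get_movie(movie_id):
    """Return the payload only if we understand its schema version."""
    raw = cache.get(f"movie:{movie_id}")
    if raw is None:
        return None  # cache miss: fall back to the movie-management API
    envelope = json.loads(raw)
    if envelope["schema"] != SCHEMA_VERSION:
        return None  # treat an unknown version as a miss and re-fetch
    return envelope["data"]

put_movie(42, {"title": "Heat", "price": 4.99})
print(get_movie(42))  # {'title': 'Heat', 'price': 4.99}
```

Consumers that don't recognize the version simply fall back to the source of truth, so a schema change degrades to extra API calls rather than wrong data.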

The third option I see is to synchronously query the movie management API.

What other options are there for properly organizing this kind of data management?

2 Comments

What other options are there for properly organizing this kind of data management?
Are you sure you need microservices?

3 Answers

by
0 votes
Each microservice needs different information about the movie.
Perhaps it's enough to send all the information each service needs through the message broker. Then no service has to call another one for data.
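This is often called a "fat" event (event-carried state transfer): the producer includes everything the downstream services need, so none of them has to call back for details. A minimal sketch, with event and field names invented for illustration:

```python
# "Fat" event: the producer embeds the slice each consumer needs,
# so no consumer has to query the movie-management API afterwards.
movie_published = {
    "event": "MoviePublished",
    "movie_id": 42,
    "payment": {"price": 4.99, "currency": "USD"},
    "streaming": {"manifest_url": "/hls/42/master.m3u8"},
    "recommendations": {"genres": ["crime", "thriller"], "year": 1995},
}

def handle_in_payment_service(event):
    # The payment service stores only the slice it cares about.
    return {event["movie_id"]: event["payment"]}

local_prices = handle_in_payment_service(movie_published)
print(local_prices)  # {42: {'price': 4.99, 'currency': 'USD'}}
```

The trade-off is a coupling of a different kind: the event schema now has to satisfy every consumer at once.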

"They take the necessary data, save it to a local database, and work with it. This can cause a lot of problems in the form of data inconsistency, etc."
We need to clarify the causes of the inconsistency and the other problems. Otherwise, how can anyone help without knowing the actual problem?

The Event Sourcing pattern says that all changes in the state of the application must be represented as a sequence of events.
I don't know which message broker you use, but to be able to reconstruct the whole sequence of changes from scratch in each microservice, you need to store the events in a central event log from the start. Apache Kafka is well suited for this.
If not Kafka, you need to ensure that the same event can be delivered through multiple channels, so that each microservice gets all the information it needs.
Changes to the DBMS must be applied atomically to avoid data inconsistency.

Event Sourcing pattern:
https://martinfowler.com/eaaDev/EventSourcing.html
https://microservices.io/patterns/data/event-sourc...
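The pattern described in this answer can be sketched in a few lines: state is never stored directly, only derived by replaying an append-only log (Kafka would play the log's role in production; a Python list stands in for it here, and the event names are made up for illustration):

```python
# Minimal event-sourcing sketch: current state is rebuilt by replaying
# an append-only event log from the beginning.
event_log = []

def append(event):
    event_log.append(event)

def rebuild_price(movie_id):
    """Replay every event in order to derive the movie's current price."""
    price = None
    for event in event_log:
        if event["type"] == "PriceChanged" and event["movie_id"] == movie_id:
            price = event["price"]
    return price

append({"type": "PriceChanged", "movie_id": 42, "price": 5.99})
append({"type": "PriceChanged", "movie_id": 42, "price": 4.99})
print(rebuild_price(42))  # 4.99
```

Because every service replays the same log, they all converge on the same state, which is exactly what the answer means by reconstructing the sequence of changes in each microservice.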

2 Comments

Ruslan Fedoseenko Without knowing the details, I can't help.
The broker is RabbitMQ.
Well, our events are automatically routed to all listeners.
The inconsistency arises mostly under load. Roughly speaking: the movie's price was changed, the copy of the data hadn't been updated yet, and a subscriber paid the old price.
by
0 votes
There should be a single database for writes; reads can go to replicas.
With this scheme, each service can work relatively independently.

A news site, for example, may have:
Users
Articles
Comments
Banners
Any of these lists can be split further into additional read/write services. In short, you should be able to survive the failure of the master (write) server of any module as painlessly as possible.
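The "one database for writes, replicas for reads" scheme this answer describes can be sketched as follows; a dict stands in for each database, and replication is modeled as an explicit copy step (in a real DBMS it runs continuously and asynchronously):

```python
# Sketch of a single-writer setup with read replicas.
master = {}            # only the write path touches this
replicas = [{}, {}]    # read-only copies, one per read-heavy service

def write(key, value):
    master[key] = value

def replicate():
    # Stand-in for the DBMS's own asynchronous replication.
    for replica in replicas:
        replica.clear()
        replica.update(master)

def read(key, replica_index=0):
    return replicas[replica_index].get(key)

write("article:1", "Hello")
replicate()
print(read("article:1"))  # 'Hello'
```

The gap between `write` and `replicate` is exactly the replication-lag window where a reader can still see stale data, which is the trade-off this design accepts.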
by
0 votes
1. All data is stored in one database.
2. The database sits behind a management layer with an API. If the schema changes, the API's internals change, but not its external interface.
3. All data consumers and producers use this common API, and use it flexibly, for example through GraphQL. (Personally, I found its flexibility lacking and wrote my own.) If a microservice changes, the query is rewritten, not the API.
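The key idea in point 2 is that the API layer absorbs schema changes so its external contract stays stable. A minimal sketch of that translation, with all field names invented for illustration (a hypothetical v2 schema renamed `name` to `title`):

```python
# The API's external contract never changes: it always returns
# {"name": ..., "year": ...}, no matter how the row is stored internally.

def get_movie_api(row):
    """Translate whichever internal schema the row uses (hypothetical
    v1: name/year, hypothetical v2: title/release_year) into the
    stable external shape."""
    return {
        "name": row.get("title", row.get("name")),
        "year": row.get("release_year", row.get("year")),
    }

db_row_v2 = {"title": "Heat", "release_year": 1995}
print(get_movie_api(db_row_v2))  # {'name': 'Heat', 'year': 1995}
```

Consumers written against the external shape keep working across the schema migration; only this translation layer had to change.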

2 Comments

Roman Mirr, are you talking about functional microservices or storage microservices?
All data is stored in one database.
This is true for monoliths, but not for microservices.