The example I am stating is admittedly a rare scenario, but I would like to understand how it should be handled (or should such designs be avoided entirely?).
Consider two application servers (AS1 and AS2), each with its own database (DB1 and DB2). An HTTP call to AS1 internally makes an HTTP call to AS2. Once AS2 has validated the payload from AS1, it writes to DB2 and returns a hash, which AS1 stores in DB1.
This looks very simple to build, but how do we handle the following scenario? Say AS1's internal HTTP call to AS2 has a specific timeout (let's assume 15 seconds). Within that window, AS2 writes to DB2 but doesn't respond before the timeout, so AS1 never receives the hash, and it cannot simply call AS2 again without causing a duplicate write.
How should this be handled? I know the scenario may look vague to some of you, but I'm happy to discuss it further if needed. Thanks.
Top comments (5)
Thanks Eric. I know I didn't provide a complete picture here, but after discussing your suggestions with my teammates, we felt the second approach you suggested fits our current scenario well! Thanks again Eric :))
For processes that you know specifically have a long processing time, you can use a combination of synchronous and asynchronous messaging and polling.
For example, when you complete a git merge in Bitbucket, the client (website) sends a request to the API server to perform the merge.
The API server replies immediately, telling the client that it has received the request and is processing it.
In the meantime, the client continuously polls a different API with a processID returned in the prior response, each time asking whether the job is done yet. Once it receives an affirmative answer, it updates the view.
This kind of scenario can be done server to server just as easily.
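A minimal sketch of that request-then-poll pattern, condensed into one process. Everything here (`start_job`, `poll_status`, the in-memory job store) is illustrative, not Bitbucket's real API; a background thread stands in for the slow server-side job:

```python
import threading
import time
import uuid

# In-memory job store standing in for the API server's state.
_jobs = {}

def start_job(payload):
    """Synchronous part: accept the request, return a process ID at once."""
    process_id = str(uuid.uuid4())
    _jobs[process_id] = {"status": "processing", "result": None}

    def worker():
        time.sleep(0.1)  # stand-in for the slow merge / DB write
        _jobs[process_id] = {"status": "done", "result": hash(payload)}

    threading.Thread(target=worker, daemon=True).start()
    return process_id

def poll_status(process_id):
    """Asynchronous part: the client asks 'is the job done yet?'"""
    return _jobs[process_id]

# Client side: fire the request, then poll until the job reports done.
pid = start_job("merge branch feature/x")
while poll_status(pid)["status"] != "done":
    time.sleep(0.02)
print(poll_status(pid)["result"])
```

The key point for the original question: the initial call never blocks on the slow work, so there is no long HTTP timeout to lose the hash in. The hash is fetched later, keyed by the process ID.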
I will also vouch for the message queue solution Eric suggested as a great alternative.
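For illustration, here is a single-process sketch of the queue idea using Python's `queue.Queue` as a stand-in for a durable broker (RabbitMQ, SQS, etc.); all names (`as2_consumer`, `results`) are made up for the example:

```python
import queue
import threading

# Stand-in for a durable message broker between AS1 and AS2.
work_queue = queue.Queue()
results = {}

def as2_consumer():
    """AS2-side worker: pulls requests off the queue, does the DB2
    write (simulated here as computing a hash), records the result."""
    while True:
        request_id, payload = work_queue.get()
        results[request_id] = hash(payload)
        work_queue.task_done()

threading.Thread(target=as2_consumer, daemon=True).start()

# AS1-side: enqueue and move on. A slow consumer can't cause an HTTP
# timeout, and a real broker would keep the message if AS2 were down.
work_queue.put(("req-1", "validated payload"))
work_queue.join()  # block here only so the demo can read the result
print(results["req-1"])
```

With a real broker, AS1 would later look up the hash by `request_id` (or receive it on a reply queue) rather than calling `join()`.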
Thanks Brandin. This is something new that I learnt today.
I’ve seen processes fork and then use a Linux signal alarm (SIGALRM) to enforce a timeout on the child process. Whatever third-party function you want to run becomes the forked process. This could be a bad idea depending on things like mutations and shared state.
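A rough, Unix-only sketch of that fork-plus-alarm pattern (the helper name `run_with_timeout` and the timings are made up; the alarm's default action simply kills the child when the deadline passes):

```python
import os
import signal
import time

def run_with_timeout(work, timeout_seconds):
    """Fork a child that arms a SIGALRM; if `work` runs past the
    deadline, the default SIGALRM action terminates the child.
    Returns the child's raw wait status. Sketch only -- beware of
    mutations and state shared with the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: arm the alarm, run the bounded work, exit cleanly.
        signal.alarm(timeout_seconds)
        work()
        signal.alarm(0)  # cancel the alarm on success
        os._exit(0)
    # Parent: reap the child and report how it ended.
    _, status = os.waitpid(pid, 0)
    return status

fast = run_with_timeout(lambda: None, 2)            # finishes in time
slow = run_with_timeout(lambda: time.sleep(5), 1)   # killed by the alarm
print(os.WIFEXITED(fast), os.WIFSIGNALED(slow))
```

`os.WIFEXITED` is true for the fast child (clean exit) while `os.WIFSIGNALED` is true for the slow one, whose terminating signal is the alarm, which is exactly the caveat above: any work the child completed before the signal (DB writes, file mutations) is not rolled back.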
Mine is an uneducated opinion, but I like your second approach. It is very intuitive, the kind of logic I would expect from what appears to be a loosely coupled architecture.