
I am working on a Java-based use case (the target is to build a microservice for it) in which concurrent debits and credits occur on a bank account. I want to make this operation thread safe with the lowest possible SLA. In Java there are three common ways to achieve this: synchronized blocks, ReadWriteLock, and StampedLock. But with all of these options, threads have to wait for the lock to become available, so if the number of threads increases in the future, the SLA of the balance-update API will increase too.
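For reference, here is a minimal sketch of the lock-based version I am describing (using StampedLock, one of the three options; the Account class and cent-based amounts are just illustrative):

```java
import java.util.concurrent.locks.StampedLock;

// Illustrative only: a balance guarded by a StampedLock.
// Every writer still has to wait for the write lock.
public class Account {
    private final StampedLock lock = new StampedLock();
    private long balanceInCents;

    public void credit(long amountInCents) {
        long stamp = lock.writeLock();   // blocks until the lock is free
        try {
            balanceInCents += amountInCents;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public void debit(long amountInCents) {
        long stamp = lock.writeLock();   // same contention point as credit()
        try {
            balanceInCents -= amountInCents;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public long balance() {
        long stamp = lock.tryOptimisticRead();  // reads usually need not block
        long current = balanceInCents;
        if (!lock.validate(stamp)) {            // a write intervened; fall back to a read lock
            stamp = lock.readLock();
            try {
                current = balanceInCents;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return current;
    }
}
```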

Is there any way to remove this lock dependency entirely? I have thought of one design based on Kafka. Before I proceed further with it, I want the view of the experts here on my approach. Please comment/suggest whether it will be an effective solution or not.

Here is my approach:

(1) The payment producer posts a Payment (say a credit of $100) as a topic to Kafka (properly replicated and configured to ensure no data loss)

(2) On successful registration of the topic in Kafka, debit the money (−$100) from the sender's account and send a successful-transaction confirmation to the payment producer.

(3) The payment consumer then reads the Payment topic (credit of $100) and credits the money (+$100) to the receiver's account.

In this approach producers and consumers do not wait for any lock, so I believe the balance-update operation will stay efficient even as the number of producer threads grows. A sketch of the producer side is below.
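To make step (1) concrete, here is a minimal sketch using the standard kafka-clients producer API (the topic name Payment, the JSON payload, and the account/transaction IDs are my assumptions):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PaymentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "No data loss" settings: wait for all in-sync replicas and
        // retry safely without producing duplicates.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by account id so all payments for one account land on the
            // same partition, and are therefore consumed in order.
            String accountId = "sender-account-42";                   // illustrative
            String payment = "{\"txId\":\"tx-1001\",\"amount\":100}"; // illustrative payload
            producer.send(new ProducerRecord<>("Payment", accountId, payment),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // Step (2): only after the broker acks do we debit
                            // the sender and confirm the transaction.
                            System.out.println("Acked at offset " + metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    });
        }
    }
}
```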

Please comment on which option is better to use for this use case.

DistributedAI
  • When the payment consumer reads the Payment topic, it first has to acquire some lock anyway. – Alexei Kaigorodov Jan 09 '19 at 15:11
  • "Payment producer posts Payment (say credit of $100) as Topic" are you planning to create a new topic for every Payment? – Karan Khanna Jan 09 '19 at 16:00
  • I would say you should use a single transaction for the credit and the debit so that the operation is atomic: either it completes as a whole or it fails as a whole, whatever the reason. – Karan Khanna Jan 09 '19 at 16:05
  • You can use ZooKeeper for locks; I'm not sure what Kafka has to do with that. You might want to see a few examples of modeling payment events in this book: https://www.confluent.io/stream-processing/ – OneCricketeer Jan 09 '19 at 19:18
  • @KaranKhanna Thanks for your input. No, all transactions will use the same topic (say Payment) with different transaction IDs, so the consumer can identify them uniquely and process them. – DistributedAI Jan 10 '19 at 08:57
  • @KaranKhanna Basically the problem is that I want to reduce the SLA of the payment-processing flow/method. We have found that when payment volume is very high the API slows down and customers switch to other funding instruments for payments (so it is a business loss for us). If I handle it with a single transaction, the problem will stay the same as payment volume grows significantly. Basically I thought that by using Kafka I could avoid the database call for the latest balance in the account. – DistributedAI Jan 10 '19 at 09:04
  • @Rohit I get the problem, but if you segregate the two you might land in a situation where you have irregularities between credit and debit, since the two are independent processes. I would suggest looking into the DB operations for optimizations. – Karan Khanna Jan 10 '19 at 10:18
  • When it comes to transactional operations (involving a debit and a credit in asynchronous fashion), your application should be ACID-compliant: atomic in nature, consistent by all means, isolated in its operations, and of course durable. Maybe you should consider sharding your data layer as a way of optimizing your reads and writes; scaling horizontally is another option. I would go for sharding. – Amos Kosgei Jan 25 '19 at 10:09

1 Answer


My view on this is rather different: asynchronous processing is not what you want when you need consistency in your records. Deferred writes could help, but then there is a possibility of losing accuracy in your data. I would propose a lock mechanism at the data layer instead, i.e. an optimistic lock on writes, or the equivalent of a SELECT ... FOR UPDATE in your implementation. This not only ensures data integrity, but is also a neat way of doing it without a lot of boilerplate code and, in my view, without unnecessary use of variant technologies to try to solve the problem you have. When it comes to money, optimistic or pessimistic locks are your best bets.
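For illustration, here is a minimal sketch of the optimistic-lock approach with plain JDBC (the accounts table with balance and version columns is an assumed schema; with JPA the same effect falls out of a @Version column):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountRepository {

    // Optimistic locking: read the current balance and version, then update
    // only if the version is unchanged. Zero rows updated means another
    // transaction won the race, so the caller retries.
    // Assumed schema: accounts(id BIGINT PK, balance BIGINT, version BIGINT)
    public boolean debit(Connection conn, long accountId, long amount) throws SQLException {
        long balance;
        long version;
        try (PreparedStatement select = conn.prepareStatement(
                "SELECT balance, version FROM accounts WHERE id = ?")) {
            select.setLong(1, accountId);
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) throw new SQLException("No such account: " + accountId);
                balance = rs.getLong("balance");
                version = rs.getLong("version");
            }
        }
        if (balance < amount) throw new SQLException("Insufficient funds");

        try (PreparedStatement update = conn.prepareStatement(
                "UPDATE accounts SET balance = ?, version = version + 1 "
              + "WHERE id = ? AND version = ?")) {
            update.setLong(1, balance - amount);
            update.setLong(2, accountId);
            update.setLong(3, version);
            return update.executeUpdate() == 1;  // false => conflict, retry
        }
    }
}
```

A caller would retry debit(...) in a small bounded loop until it returns true; under contention this trades blocked threads for cheap retries.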

Kafka, like any other queuing mechanism, mainly serves to transfer load off the application, allowing work to be queued somewhere other than the application memory that would otherwise hog heap space. Kafka and AMQP-style brokers are good when you want speed and don't necessarily care which thread operates on which record. For your use case I am not certain this will fix your problem; on the contrary, it may introduce unnecessary complexity to your application.

Amos Kosgei