
We currently have an API and database in Australia, and are attempting to reduce latency in other countries.

Coming to grips with the CAP theorem for synchronized databases is probably a little out of scope at present, but we're looking into horizontal scaling across several regions (e.g., servers in the US/EU/Asia).

Now, where I'm scratching my head is: would this approach yield any latency benefit? There are obvious benefits to having a server nearer to the user, but in exchange, the database (still in Australia) is now much further away.

I hope this all makes sense; I'm pretty new to this DevOps kind of stuff.

MitchEff

2 Answers


It mainly depends on your use case. For example, the ratio of database writes to reads is often steeper than 1:10, and read endpoints typically see far more traffic than create/update/delete endpoints. In that case you can cache certain queries locally at each regional API server to reduce response time.
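To make that concrete, here is a minimal sketch of per-query caching inside a regional API server. It assumes a simple in-memory TTL cache, and `fetchProductFromDb` is a hypothetical stand-in for whatever your real data access call is (which would still hit the primary in Australia on a cache miss):

```typescript
// Minimal sketch: a TTL cache in front of a hypothetical read query.

type CacheEntry<T> = { value: T; expiresAt: number };

const cache = new Map<string, CacheEntry<unknown>>();
const TTL_MS = 60_000; // how long a cached read stays valid

async function cachedQuery<T>(key: string, loader: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // served locally, no round trip to Australia
  }
  const value = await loader(); // cache miss: pay the cross-region latency once
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Hypothetical usage inside a regional API server:
async function getProduct(id: string) {
  return cachedQuery(`product:${id}`, () => fetchProductFromDb(id));
}

// Placeholder for the real database call (still goes to the primary).
async function fetchProductFromDb(id: string): Promise<{ id: string; name: string }> {
  return { id, name: "example" };
}
```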

Also, it's quite easy to set up multi-region read replicas for your database. This is supported by many hosted databases (MongoDB Atlas, Amazon RDS, ...).
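As a rough illustration of how reads and writes usually get split once replicas exist, here is a routing sketch assuming a primary in Australia and one read replica per API region. The connection strings and region names are made up, not from any particular provider:

```typescript
// Sketch of read/write routing across a primary and regional read replicas.
// All URLs below are hypothetical placeholders.

const PRIMARY_URL = "postgres://primary.ap-southeast-2.example.com/app";
const REPLICA_URLS: Record<string, string> = {
  "us-east-1": "postgres://replica.us-east-1.example.com/app",
  "eu-west-1": "postgres://replica.eu-west-1.example.com/app",
  "ap-northeast-1": "postgres://replica.ap-northeast-1.example.com/app",
};

// Writes always go to the primary; reads go to the replica in the API
// server's own region, falling back to the primary if there isn't one.
function pickDatabaseUrl(kind: "read" | "write", region: string): string {
  if (kind === "write") return PRIMARY_URL;
  return REPLICA_URLS[region] ?? PRIMARY_URL;
}
```

The trade-off, of course, is that replicas lag slightly behind the primary, so this only suits reads that tolerate a little staleness.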


I believe some context is missing from the question.

I assume you mean that you would like to set up API servers in those regions (US/EU/Asia).

This very much depends on what the API is actually doing.

If you can cache at the API server, then you can get much better latency.
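For example (a sketch only, assuming a plain Node server with no framework), GET responses that tolerate a little staleness can carry standard Cache-Control headers, so the regional server or a CDN in front of it can answer repeat requests without touching the database in Australia:

```typescript
// Sketch: marking a cacheable GET response with standard HTTP cache headers.
// The /catalog route and its payload are hypothetical.

import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/catalog") {
    // Allow this response to be cached near the user for 60 seconds.
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Cache-Control": "public, max-age=60",
    });
    res.end(JSON.stringify({ items: [] }));
    return;
  }
  res.writeHead(404).end();
});

server.listen(3000);
```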

g_bor