
I am working on an API layer that serves requests from a backend database. The requirements are:

  1. Repopulate whole tables without downtime for the API service: a main requirement is that we should be able to re-populate the tables (2 to 3 tables of structured, CSV-like data) in the backend database periodically (bi-weekly or monthly) without the API service going down.
  2. Low latency globally, on the order of hundreds of milliseconds.
  3. Scalability in requests per second.
  4. Rate limiting of clients.
  5. The ability to switch back to previous versions of the tables in case of issues.

My questions are about which AWS database and which other AWS components I can use to achieve the above goals.

user2715182

1 Answer


If you want a secure, low-latency global API, I would go with an edge-optimized API Gateway API.

See the API GW limits documentation regarding the maximum requests per second.

You can rate limit clients using API GW. You can also have different stages in API GW that correspond to different aliases in Lambda. Lambda would be your serverless compute layer that handles your API GW requests and in turn queries your database. Using versioning and aliasing in Lambda would allow you to switch between different database tables.

Given that you are planning to use CSV-like data, you could go with RDS and use the Aurora engine, which is compatible with MySQL and PostgreSQL and is an extremely cost-effective option.
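For the rate-limiting part, API GW usage plans let you set a steady-state rate and a burst limit per plan. Here is a minimal sketch of the parameters you would pass to boto3's `create_usage_plan`; the API id, stage name, and limit values are placeholders for your own deployment:

```python
# Sketch: building the parameters for an API Gateway usage plan that
# throttles clients. The API id and stage name are placeholders.

def usage_plan_params(api_id, stage, rate_limit, burst_limit):
    """Build the kwargs for boto3's apigateway create_usage_plan call."""
    return {
        "name": f"{stage}-plan",
        "throttle": {
            "rateLimit": float(rate_limit),   # steady-state requests per second
            "burstLimit": int(burst_limit),   # maximum request burst
        },
        "apiStages": [{"apiId": api_id, "stage": stage}],
    }

# With live AWS credentials you would then call:
#   import boto3
#   boto3.client("apigateway").create_usage_plan(
#       **usage_plan_params("a1b2c3", "prod", 100, 200))
params = usage_plan_params("a1b2c3", "prod", rate_limit=100, burst_limit=200)
```

Clients are then associated with a plan via API keys, so different consumers can get different limits.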

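The table-switching idea could look roughly like this inside the Lambda function: when invoked through an alias, the alias name appears as the last segment of the invoked ARN, and the handler can map it to a table version. The table names and `v1`/`v2` aliases below are hypothetical:

```python
# Sketch: picking the backing table from the Lambda alias that was invoked.
# Table names and the "v1"/"v2" aliases are hypothetical; each API GW stage
# would be pointed at a different alias.

TABLE_BY_ALIAS = {
    "v1": "products_2024_01",   # previous data load
    "v2": "products_2024_02",   # current data load
}

def table_for_invocation(invoked_function_arn, default="products_2024_02"):
    """Derive the table name from the alias suffix of the invoked ARN."""
    parts = invoked_function_arn.split(":")
    # An alias invocation looks like ...:function:my-func:v2 (8 segments);
    # an unqualified invocation has no trailing alias segment.
    alias = parts[7] if len(parts) == 8 else None
    return TABLE_BY_ALIAS.get(alias, default)

arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api:v1"
print(table_for_invocation(arn))  # products_2024_01
```

Rolling back to a previous data load is then just repointing the alias (or the stage) rather than redeploying code.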
As some additional information, you should use Lambda proxy integration between your API GW APIs and your Lambda functions. This also lets you enable Identity and Access Management (IAM) authorization for your APIs.
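With proxy integration, API Gateway passes the whole request to the handler as `event` and expects a specific response shape back (`statusCode`, `headers`, a string `body`). A minimal sketch, where the query logic is a placeholder for your actual database lookup:

```python
import json

# Sketch: a handler wired via Lambda proxy integration. API Gateway passes
# the whole request as `event` and expects this response shape back.
# The lookup below is a placeholder; real code would query your database.

def handler(event, context=None):
    item_id = (event.get("queryStringParameters") or {}).get("id", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": item_id}),   # body must be a string
        "isBase64Encoded": False,
    }

resp = handler({"queryStringParameters": {"id": "42"}})
print(resp["statusCode"])  # 200
```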

Documentation on Lambda proxy integration: Lambda proxy integration

Here is some documentation on Lambda: AWS Lambda versioning and aliases

Here is some documentation on RDS Aurora: AWS RDS Aurora

Chris D'Englere