
Warning: I'm new to application architecture and, officially speaking, this is the first time I've designed something this big. It's also my own application, so I have full authority to change things.

I'm building a serverless application: an on-demand application-streaming platform.

Customers who want to try a specific application (usually a large, expensive one such as Photoshop or SolidWorks) can launch it directly in their browser, while the application runs on infrastructure similar to their own machine.

I'd use CI/CD pipelines and IaC to build the EC2 infrastructure that hosts these applications, and use those same technologies to destroy that infrastructure, since it's volatile.

To create and destroy that EC2 infrastructure, I call the GitLab API.

I've therefore decided to go with AWS Lambda and GitLab for now.
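To make the Lambda → GitLab hand-off concrete, here is a minimal sketch using only the standard library and GitLab's pipeline-trigger endpoint. The project ID, trigger token, environment-variable names (`PROJECT_ID`, `TRIGGER_TOKEN`), and pipeline variables are placeholders, not values from my actual setup:

```python
import json
import os
import urllib.parse
import urllib.request
from typing import Optional

GITLAB_API = "https://gitlab.com/api/v4"

def build_trigger_request(project_id: str, token: str, ref: str = "main",
                          variables: Optional[dict] = None) -> urllib.request.Request:
    """Build the POST request for GitLab's pipeline-trigger endpoint."""
    payload = {"token": token, "ref": ref}
    for key, value in (variables or {}).items():
        # GitLab expects pipeline variables as variables[NAME]=value form fields
        payload[f"variables[{key}]"] = value
    data = urllib.parse.urlencode(payload).encode()
    url = f"{GITLAB_API}/projects/{project_id}/trigger/pipeline"
    return urllib.request.Request(url, data=data, method="POST")

def lambda_handler(event, context):
    # PROJECT_ID and TRIGGER_TOKEN are hypothetical env vars set on the Lambda
    req = build_trigger_request(
        project_id=os.environ["PROJECT_ID"],
        token=os.environ["TRIGGER_TOKEN"],
        variables={"ACTION": "create", "APP_NAME": event.get("app", "")},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same handler (or a sibling one) can pass `"ACTION": "destroy"` to run the teardown pipeline, so the create/destroy distinction lives in a pipeline variable rather than in separate projects.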

Now the architecture questions :

  1. Is it better to have one serverless function that handles everything, or several smaller functions?
  2. I'm planning to destroy the EC2 infrastructure after a certain amount of time (10-15 minutes). How should I schedule the HTTP call that triggers the teardown? Should I use a queue like SQS? Or should I store a timestamp in a database and poll every minute?
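One option worth noting for question 2: SQS supports a per-message `DelaySeconds` of up to 900 seconds (15 minutes), which happens to match the lifetime above exactly, so no polling database is needed. The create-side Lambda enqueues a delayed "destroy" message, and a second Lambda subscribed to the queue calls the GitLab API when the message becomes visible. A sketch (the queue URL, message fields, and tag name are hypothetical; for lifetimes beyond 15 minutes you'd need something like EventBridge Scheduler or a Step Functions wait state instead):

```python
import json
from datetime import datetime, timezone

MAX_SQS_DELAY_SECONDS = 900  # SQS caps per-message delay at 15 minutes

def teardown_delay_seconds(minutes: int) -> int:
    """Clamp the requested lifetime to SQS's maximum per-message delay."""
    return min(minutes * 60, MAX_SQS_DELAY_SECONDS)

def schedule_teardown(queue_url: str, instance_tag: str, minutes: int = 15):
    # boto3 ships in the Lambda runtime; imported lazily so the pure
    # helper above stays testable without AWS credentials
    import boto3
    sqs = boto3.client("sqs")
    body = json.dumps({
        "action": "destroy",
        "instance_tag": instance_tag,
        "requested_at": datetime.now(timezone.utc).isoformat(),
    })
    # the message only becomes visible to the teardown Lambda after the delay
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=body,
        DelaySeconds=teardown_delay_seconds(minutes),
    )
```

This keeps the scheduling inside AWS's managed services: no cron, no per-minute database scan, and the teardown Lambda stays a thin wrapper around the GitLab API call.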

Again, thanks a lot for your wisdom!

Edit: clarified a few points.

Fares
  • Not sure I follow your question. You say you want to create a serverless architecture, but then you mention an EC2 infrastructure. Also, could you explain your 2nd question a bit better? – Andre.IDK Sep 11 '20 at 14:53
  • Sure @Andre.IDK, the serverless application itself is, well, serverless. The EC2 infrastructure I'm mentioning is the infrastructure hosting all the applications customers want to try. To create or destroy that EC2 infrastructure, I use the GitLab API to trigger CI/CD pipelines on GitLab that execute Terraform code. I was wondering about the best way to schedule those HTTP requests. – Fares Sep 11 '20 at 14:56
  • Common practice in this sense is to use components/technologies that subscribe to certain git hooks. An option could be to have a Jenkins pipeline that subscribes to certain events (i.e. merge/push) and is executed every time any of these events happens on GitLab. – Andre.IDK Sep 11 '20 at 14:59
  • I already do that. To give access to that EC2 infrastructure I need the DNS links, and I get those via webhooks at the end of the CI/CD pipeline runs. What I'm trying to ask is how to schedule (on my side) HTTP requests to the GitLab API that trigger the removal of the EC2 infrastructure after a specific amount of time. – Fares Sep 11 '20 at 15:01
  • If you are looking for 'opinions', it would be better to ask at: https://www.reddit.com/r/aws – John Rotenstein Sep 11 '20 at 23:23

0 Answers