I'm working on a Datalake project composed of many services: one VPC (plus subnets, security groups, internet gateway, ...), S3 buckets, an EMR cluster, Redshift, Elasticsearch, some Lambda functions, API Gateway, and RDS.
We can say that some resources are "static": they will be created only once and will not change in the future, such as the VPC + subnets and the S3 buckets.
The other resources will change during the development and production lifecycle of the project.
My question is: what's the best way to manage the structure of the project?
I first started this way:
modules/
    rds/
        main.tf
        variables.tf
        output.tf
    emr/
    redshift/
    s3/
    vpc/
    elasticsearch/
    lambda/
    apigateway/
main.tf
variables.tf
This way I only have to run a single terraform apply at the root and it deploys all the services.
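To make that concrete, my root main.tf is roughly like the sketch below (Terraform 0.12+ syntax; the variable and output names are just examples, not my real ones):

    # main.tf at the project root -- wires all the service modules together
    provider "aws" {
      region = var.region
    }

    module "vpc" {
      source     = "./modules/vpc"
      cidr_block = var.vpc_cidr
    }

    module "s3" {
      source      = "./modules/s3"
      bucket_name = var.datalake_bucket_name
    }

    module "emr" {
      source    = "./modules/emr"
      subnet_id = module.vpc.private_subnet_id   # output exposed by the vpc module
    }

    # ... same pattern for redshift, elasticsearch, lambda, apigateway and rds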
The second option (I've seen some developers use it) is to put each service in a separate folder with its own state; to deploy a service, you go into its folder and run terraform apply there.
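If I understand that approach correctly, a service that depends on another one would then read its outputs through a terraform_remote_state data source, something like this sketch (bucket, key, region and output names are placeholders):

    # emr/main.tf -- reads the VPC outputs from the separately applied vpc stack
    data "terraform_remote_state" "vpc" {
      backend = "s3"
      config = {
        bucket = "my-terraform-state"        # placeholder state bucket
        key    = "vpc/terraform.tfstate"
        region = "eu-west-1"                 # placeholder region
      }
    }

    module "emr" {
      source    = "../modules/emr"
      subnet_id = data.terraform_remote_state.vpc.outputs.private_subnet_id
    }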
We will be 2 to 4 developers on this project, and some of us will only work on a specific subset of the resources.
What strategy would you advise me to follow? Or do you have another idea or best practice?
Thanks for your help.