I have a pretty low-spec testing machine for the data pipelines I develop in Spark: a single AWS t2.large instance, which has only 2 CPUs and 8 GB of RAM.
I need to run 2 Spark Streaming jobs on it, while leaving some memory and CPU power for occasionally testing batch jobs.
So I have a master and one worker, both on the same machine.
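For reference, this is roughly how I launch the standalone cluster on that instance (a sketch; the worker limits are my guesses for leaving some headroom for the OS and the driver JVMs):

```
# conf/spark-env.sh
SPARK_WORKER_CORES=2     # offer both CPUs to the single worker
SPARK_WORKER_MEMORY=6g   # guess: leave ~2 GB for the OS, master and drivers

# then:
#   sbin/start-master.sh
#   sbin/start-slave.sh spark://localhost:7077   (start-worker.sh in newer Spark releases)
```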
I have some general questions:

1) How many executors can run per worker? I know the default is one, but does it make sense to change this?
2) Can one executor run multiple applications, or is an executor dedicated to a single application?
3) Is the way to make this work to set the memory each application can use, either in a configuration file or when I create the SparkContext? (I sketched below what I have in mind.)
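To make question 3 concrete, this is roughly what I imagine when creating the context for one of the streaming jobs (a sketch in Scala; the property names are standard Spark settings, but the app name and the sizes are just my guesses for this machine):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Cap this app at 1 core and 2 GB so that two streaming apps plus an
// occasional batch job fit on a 2-CPU / 8 GB box (sizes are guesses).
val conf = new SparkConf()
  .setAppName("streaming-job-1")        // hypothetical name
  .setMaster("spark://localhost:7077")  // the standalone master from above
  .set("spark.executor.memory", "2g")   // heap per executor
  .set("spark.cores.max", "1")          // total cores this app may claim

val ssc = new StreamingContext(conf, Seconds(10))
```

I assume the same settings could instead go into conf/spark-defaults.conf or be passed as --conf flags to spark-submit, if that is the more usual place for them.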
Thank you