
I am currently running my Laravel application on AWS Elastic Beanstalk with a load balancer configured. The setup allows a minimum of 1 and a maximum of 2 instances. So far, everything works as expected: the load balancer adds a second instance when the load on the first one becomes too high.

However, my concern arises when the second instance gets terminated. Supervisor is set up through my .platform configuration, so the second instance also starts running the queue workers. What happens to workers that are still processing jobs when their instance gets terminated? Would it be better to separate the workers onto their own instance?

Unfortunately, I have not been able to test or replicate this situation. However, my assumption is that the system should wait for the supervisor to complete its processes before terminating the instance.
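
For reference, a typical Supervisor program section for Laravel queue workers looks something like this (a sketch, not my exact config; the program name, paths, and user are placeholders). Laravel's queue:work traps SIGTERM and finishes the current job before exiting, and stopwaitsecs controls how long Supervisor waits for that:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work --sleep=3 --tries=3 --timeout=90
autostart=true
autorestart=true
user=webapp
numprocs=2
stopsignal=TERM
; stopwaitsecs should exceed the worker's --timeout
stopwaitsecs=120
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker.log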

  • This is a "very" opinionated answer honestly, but I will still share my experience. You should have one EC2 instance for web traffic (not API), another for API traffic, and another for workers (jobs/listeners). If you have a lot of scheduled tasks in the background that could saturate another instance, create one just for those as well. Then decide how many of each you want: for example, web and API traffic get 1 instance each behind an LB with a max of 5; for jobs, say you have 100 workers in total, I would maybe have 2 EC2 instances with 50 each (depends on resources), etc. – matiaslauriti Jul 24 '23 at 14:26
  • Okay, I thought of your solution too, but there are some things I have to figure out with your approach. Beanstalkd + CodePipeline was completely managing my deployment process; with this solution I have to set up a worker instance and keep it updated automatically. – oralunal Jul 24 '23 at 20:28
  • I have never used Beanstalkd and CodePipeline; I know what they are, but have never used them. – matiaslauriti Jul 24 '23 at 21:57

1 Answer


I have implemented a solution for the problem at hand. However, I'm not completely certain if this is the best approach.

Initially, I created two Elastic Beanstalk environments: one is used as the HTTP server and the other as the worker environment. Then, in the environment settings of each, I set up two "Environment Properties". These properties determine whether the current deployment is live or a release (test) environment, and whether the instance is an HTTP server or a worker.
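
For example, these properties can also be set from the command line with the EB CLI instead of the console (the environment names here are hypothetical):

# Hypothetical environment names
eb setenv SERVER_VER=LIVE SERVER_TYPE=HTTP -e myapp-live-http
eb setenv SERVER_VER=LIVE SERVER_TYPE=WORKER -e myapp-live-worker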


Depending on these properties, I've created bash scripts that control what runs on each instance.

Here's a snippet from the script I prepared for prebuild:

SERVER_VER=$(/opt/elasticbeanstalk/bin/get-config environment -k SERVER_VER); # RELEASE, LIVE
SERVER_TYPE=$(/opt/elasticbeanstalk/bin/get-config environment -k SERVER_TYPE); # WORKER, HTTP

if [[ $SERVER_TYPE == "WORKER" ]]; then
    # Check whether supervisor is already installed
    # (yum reports "installed" in the Repo field when it is)
    APP=$(yum info supervisor 2>/dev/null | grep Repo | awk '{ print $3 }')

    if [[ $APP == "installed" ]]; then
        sudo supervisorctl stop all # Stop all supervisor processes
    else
        # Install supervisor from EPEL and enable it as a service
        sudo amazon-linux-extras enable epel
        sudo yum install -y epel-release
        sudo yum -y update
        sudo yum -y install supervisor
        sudo systemctl start supervisord
        sudo systemctl enable supervisord
    fi

    # Supervisor config
    sudo cp .platform/files/supervisor.ini /etc/supervisord.d/laravel.ini
    sudo supervisorctl reread
    sudo supervisorctl update

    # Keep the workers stopped while the new release is being deployed;
    # they are started again after deployment
    sudo supervisorctl stop all
fi
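
Since the prebuild script leaves every Supervisor process stopped, something along these lines as a postdeploy hook can start the workers again once the new release is in place (a sketch, not my exact script; the filename is just an example, assuming the standard .platform/hooks/postdeploy layout):

#!/bin/bash
# .platform/hooks/postdeploy/01_start_workers.sh (example filename)

SERVER_TYPE=$(/opt/elasticbeanstalk/bin/get-config environment -k SERVER_TYPE); # WORKER, HTTP

if [[ $SERVER_TYPE == "WORKER" ]]; then
    # Pick up any config changes and start the workers
    # against the newly deployed release
    sudo supervisorctl reread
    sudo supervisorctl update
    sudo supervisorctl start all
fi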

You can retrieve the Environment Properties you have defined with the /opt/elasticbeanstalk/bin/get-config environment command. If you want to get a specific one, you can use the /opt/elasticbeanstalk/bin/get-config environment -k SERVER_VER command.
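
For example, on a worker instance the second command prints the raw value (sample output, assuming the properties above are set):

$ /opt/elasticbeanstalk/bin/get-config environment -k SERVER_TYPE
WORKER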
