
I have an email list of people I have to send out emails to every morning. The list is about 1000 addresses at the moment, and it is on my own Linux VPS.

The problem is that when the job is running, the websites I have on the server seem to hang and time out quite a lot.

I thought adding a sleep of a couple of seconds between each iteration of the loop would help, but a PHP developer at my work just told me that this wouldn't do anything to help Apache as regards memory, and that it would be better to run the script without any sleep at all.

I have read articles here and on other sites where people recommend adding a sleep between loop iterations, but if that just means the Apache process consumes more memory and causes my site to hang, then I don't want that to happen.

I am calling this job on my domain from a CRON job, with a secret hash in the URL. The reason I am calling it externally through my site, e.g. http://www.example.com/sendemails.php?secretcode=_--1033-449, is that I often need to debug it to see what is going on, and if the job hasn't run I can run it manually and read the debug output on the page.
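(The secret code is just a simple guard at the top of the script before anything else runs, roughly like this; the constant name is made up for the example:)

```php
<?php
// sendemails.php - bail out unless the request carries the expected secret code
define('SEND_SECRET', '_--1033-449');   // illustrative; store/compare however suits you

if (!isset($_GET['secretcode']) || $_GET['secretcode'] !== SEND_SECRET) {
    header('HTTP/1.0 403 Forbidden');
    exit('Forbidden');
}

// ... the email-sending loop follows here ...
```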

Is there an issue with sleep and Apache / website timeouts, or should I do this another way, e.g. some sort of batching where the script starts every 10 minutes and sends out 100 emails, updating a flag in the DB against each person so I know they have had an email sent (True/False), and then starting from that point in the next batch?

Then at night, after a certain time, e.g. 6pm, when I know no emails should be going out, I could easily reset all the flags to False ready for tomorrow (while ensuring, of course, that no emails are actually sent after 6pm).
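To make that idea concrete, each 10-minute run would do something roughly like the sketch below. The table and column names (subscribers, email_sent), the connection details, and the sendEmailTo() wrapper are all made up for the example:

```php
<?php
// One batch: pick the next 100 people who haven't had today's email,
// send to them, and flag each one so the next run carries on from there.
$db  = new mysqli('localhost', 'user', 'pass', 'mydb');
$res = $db->query("SELECT id, name, email FROM subscribers WHERE email_sent = 0 LIMIT 100");

while ($row = $res->fetch_object()) {
    if (sendEmailTo($row->email, $row->name)) {   // placeholder for the existing PHPMailer send
        $db->query("UPDATE subscribers SET email_sent = 1 WHERE id = " . (int)$row->id);
    }
}

// Nightly reset after 6pm, ready for tomorrow:
// $db->query("UPDATE subscribers SET email_sent = 0");
```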

What is the preferred way of doing things like this, as it does seem that the website hangs/times out during the time the job runs?

Thanks

Rob

  • I can't even send 10 emails if I don't put a `sleep(1)` between them. It may not be the best solution, but this way I'm able to send 2k+ emails at once – gbestard May 19 '15 at 13:17
  • Are the contents of the emails plain text? Or are they the result of one or more DB queries that need to run, filter a result set, and then send dynamic or meta-data information? Emails usually send very fast unless something else is bottlenecking the process. – Adam T May 19 '15 at 13:19
  • You could use array chunking http://php.net/manual/en/function.array-chunk.php if you work it into your code. – Adam T May 19 '15 at 13:21
  • @AdamT In my case they were dynamic emails – gbestard May 19 '15 at 13:21
  • @MonkeyMagix I would then take a look at how long each dynamic process takes. Or, for a test, create a plain-text email and send it to the same list of recipients (of course making it somewhat of value to the recipients) and see if the performance increases. – Adam T May 19 '15 at 13:25
  • Another possible approach is to do all the DB querying in one shot, storing the results in an associative array, and then using the key/values of that array as the front-end feeder to your emailing functionality. It would save performance, since the read actions on the DB are done only once instead of once per email recipient. – Adam T May 19 '15 at 13:26
  • The emails are HTML. I obtain one post from the DB and use it to build a template string of HTML with ##NAME## placeholders, which I store in a variable (so I only get this once from the DB). Then I loop through a list of people from another table, obtaining just their name, email and unsubscribe hash URL. I then replace the ##NAME## placeholders in my string and send the email with PHPMailer. I didn't know the MySQL I used would affect performance; currently I'm using while($row2 = mysql_fetch_object($res2)) and $name = $row2->name; Should I change this? – MonkeyMagix May 19 '15 at 14:14
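A quick sketch of the one-query-then-array approach suggested in the comments above (with array_chunk to split the list into batches); the table/column names and the sendEmailTo() helper are invented for the example:

```php
<?php
// Read everything from the DB once; the send loop then works purely off the in-memory array.
$db  = new mysqli('localhost', 'user', 'pass', 'mydb');
$res = $db->query("SELECT name, email, unsubscribe_hash FROM subscribers");

$recipients = [];
while ($row = $res->fetch_assoc()) {
    $recipients[] = $row;
}

// Optionally split into batches of 100, per the array_chunk suggestion
foreach (array_chunk($recipients, 100) as $batch) {
    foreach ($batch as $person) {
        sendEmailTo($person['email'], $person['name'], $person['unsubscribe_hash']); // placeholder
    }
}
```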

1 Answer


It seems that, in my case anyway, having NO SLEEP between the loop iterations works a whole lot better than having a 1-second sleep.

I have warning emails set up on my VPS, and yesterday, during the sending of the batch, I received one saying timeouts were occurring from specific locations (Rackspace tests from London, Chicago and Dallas). Two of the locations returned times of over 25 seconds to connect to my server; not an HTTP check, just a normal connection PING.

I send success emails at the end of the batch job to tell me what happened: when the job started and finished, and how many emails were sent out.

This was yesterday's email, with a 1-second sleep between each iteration of the loop:

Date of Job Starting: 2015-May-19 11:15:02
We successfully sent out 945 emails to subscribers
Date of Job Finishing: 2015-May-19 11:50:54

So with a 1-second delay, and calling the PHP script via an Apache process, it took about 35 minutes to send out the emails.

Plus I got website / server issues during the job.

Checking my debug output showed that there were gaps longer than 1 second between the sending of a lot of the emails, so the job was obviously causing issues on the server.

However, today I had no delay between the loop iterations and the job completed within a minute.

I sent all the emails out successfully with no warnings from my VPS. The monitoring does both connection PINGs and HTTP pings (HEAD requests) to my server.

Date of Job Starting: 2015-May-20 11:16:01
We successfully sent out 945 emails to subscribers
Date of Job Finishing: 2015-May-20 11:16:54

I guess this shows that you can send 1,000 emails in less than a minute.

This is with a standard MySQL loop to get each person's email and name from the database, and using an Apache process (i.e. a cron call to the web page that holds the script) rather than an internal call to the PHP file.
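Simplified, the loop is along these lines; this is not my exact code (I've used mysqli here, trimmed the PHPMailer setup to the essentials, and getEmailTemplateHtml() stands in for the code that builds the ##NAME## template from the post):

```php
<?php
require 'PHPMailer/PHPMailerAutoload.php';   // path depends on your PHPMailer install

$template = getEmailTemplateHtml();          // placeholder: one DB read builds the HTML with ##NAME## markers

$db  = new mysqli('localhost', 'user', 'pass', 'mydb');
$res = $db->query("SELECT name, email, unsubscribe_hash FROM subscribers");

while ($row = $res->fetch_object()) {
    $html = str_replace('##NAME##', $row->name, $template);

    $mail = new PHPMailer();
    $mail->isHTML(true);
    $mail->setFrom('newsletter@example.com', 'Example Newsletter');
    $mail->addAddress($row->email, $row->name);
    $mail->Subject = 'Daily update';
    $mail->Body    = $html;
    $mail->send();                            // no sleep() between sends - see the timings above
}
```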

The script is configurable, so I can set the sleep above 0 to add a wait, and all debug messages are stored in an array during the job and then written out in one file_put_contents call to my debug file at the end, rather than constantly opening and closing the debug file, which I have found is always a performance killer.
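i.e. something along these lines (the log path is just for the example):

```php
<?php
// Collect debug lines in memory during the run...
$debug   = [];
$debug[] = date('Y-m-d H:i:s') . ' Job started';

// ... more $debug[] = ... lines inside the send loop ...

$debug[] = date('Y-m-d H:i:s') . ' Job finished';

// ...then write the whole lot in a single call at the end,
// instead of opening/closing the log file once per message.
file_put_contents('/path/to/debug.log', implode(PHP_EOL, $debug) . PHP_EOL, FILE_APPEND);
```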

So I guess the answer, FOR ME at least, is to remove the SLEEP and just get the job done as quickly as possible, so there is no build-up of Apache connections waiting to use the server while the script runs.

If I do get to the stage where I have issues, I am going to move the cron job to an internal cron call to the PHP script (via the CLI) so that all Apache processes are free for use. However, speed seems to be the key issue.
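For reference, that change is just swapping the cron entry from a web call to a direct PHP call, something like this (the times and paths are illustrative, not my actual setup):

```
# current: goes through Apache
15 11 * * * curl -s "http://www.example.com/sendemails.php?secretcode=_--1033-449" > /dev/null 2>&1

# internal: runs via the PHP CLI, so no Apache process is tied up
15 11 * * * /usr/bin/php /var/www/example.com/sendemails.php > /dev/null 2>&1
```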

Thanks for your ideas though.
