
We have a WordPress site hosted on an Amazon Web Services EC2 instance behind Elastic Load Balancing. I'm running into a frustrating problem: every time I refresh the browser, sometimes it shows the updated site and sometimes it shows an older, seemingly cached version. That goes for content edits made on the front page, changes to wp-config.php, and changes to .htaccess and httpd.conf.

Example 1: I added a subdomain in httpd.conf. Sometimes I'll go to subdomain.example.com and it shows the test file I put into the subdomain folder, but other times I'm redirected to the main website (DNS has a wildcard subdomain pointing to the ELB's A record).
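For context, the vhost I added is along these lines (the names and paths here are placeholders, not the real ones):

    # Virtual host added to httpd.conf for the subdomain (placeholder values)
    <VirtualHost *:80>
        ServerName subdomain.example.com
        DocumentRoot /var/www/subdomain
    </VirtualHost>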

Example 2: I added a force-www redirect in .htaccess and sometimes it redirects, sometimes it doesn't.
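For reference, it's the usual mod_rewrite pattern, something like this sketch (example.com standing in for our domain):

    # Redirect bare-domain requests to the www hostname (placeholder domain)
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]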

Example 3: The front page sometimes shows the three new main images we switched the homepage to, but sometimes shows the three main images from last month.

Example 4: I changed wp-config.php to set site-url="www.site.com" and home="www.site.com" (previously site-url="site.com" and home="site.com"), but when I go to WP's settings page, sometimes it shows my changes and other times it shows the old configuration.
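A minimal sketch of what that change looks like, assuming the constants being set are WP_SITEURL and WP_HOME (site.com is a placeholder for the real domain):

    // Override the siteurl/home values stored in the wp_options table
    // (site.com is a placeholder for the real domain)
    define('WP_SITEURL', 'http://www.site.com');
    define('WP_HOME', 'http://www.site.com');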

I've tried everything from clearing the browser cache and cookies, using different browsers, and using different computers in different locations, to restarting httpd, but the problem persists. I have a feeling it has something to do with the ELB and several instances caching?

Thanks in advance!

  • Are you updating all the instances or just one? Running a WordPress site on multiple EC2 instances behind a load balancer requires a number of specific configurations, including a shared filesystem of some sort for the uploads folder, a shared database, etc. Any changes you make to files need to be made on all of the instances. – ceejayoz Jul 15 '16 at 16:51
  • @ceejayoz I'm just updating one EC2 instance. The person before me set up this system so I'm still trying to work everything out. It seems like we have an RDS database but all the WordPress files are just in that one EC2 instance... – user365379 Jul 15 '16 at 18:12
  • I'm confused, then. You mention "several instances" in the question. The behavior you describe is consistent with multiple instances behind the ELB. – ceejayoz Jul 15 '16 at 18:17
  • @ceejayoz There are multiple instances that get deployed and terminated throughout the day (stretches from 1 instance to 10) and one main instance that's labeled with a "do not delete" - that's the instance that I've been working from. Sorry, I wasn't familiar with AWS at all prior to this, so I'm figuring things out. It seems like WP wasn't installed correctly for this type of setup? – user365379 Jul 15 '16 at 18:37
  • None of us can speak to that. Ask the person who built it. Maybe they've got some every-15-minutes rsync setup or something else convoluted. – ceejayoz Jul 15 '16 at 18:39
  • @ceejayoz Your first comment got me looking at how the different instances are launched and I realized I needed to create a new AMI from the instance I was working on and launch the new instances from that AMI. I think it's working now, thanks! I unfortunately can't get in touch with the person who put it together. – user365379 Jul 15 '16 at 19:51

1 Answer


As you've worked out, you have a load balancer and auto scaling launching new instances of the server from an AMI, which means some of them are serving old files. Since you're using RDS, your posts and other database-stored settings are shared across instances, but anything on the local filesystem (wp-config.php, .htaccess, httpd.conf, theme files, uploads) is not. That's why your changes appear and disappear depending on which instance the ELB routes you to.

Creating a new AMI on every change isn't practical. I suggest you look into Amazon Elastic File System (EFS): you could probably store the entire WordPress install on it, or just the wp-content directory. You'd then create one new AMI that mounts that EFS filesystem at boot, and you'd never have to rebuild it for routine changes.
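As a sketch, a single /etc/fstab line in the AMI will mount EFS over wp-content at boot. The filesystem ID fs-12345678 and the region are placeholders for your own values; the mount options are the standard NFSv4.1 ones AWS documents for EFS:

    # Mount the shared EFS filesystem over wp-content at boot (placeholder ID/region)
    fs-12345678.efs.us-east-1.amazonaws.com:/  /var/www/html/wp-content  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0  0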

If you don't know AWS well enough, there are plenty of people who do that you could hire; it's not a particularly difficult task.

Tim