
I have a very scary problem. The last couple of times I have pushed code to production, SilverStripe has marked a table or two as obsolete, even when the changes made are unrelated to that class. When I run a build a second time the table is back, but with no rows.

The really odd thing is that this only seems to happen on our production environment (of course).

On staging and production we run `sake dev/build` in a post-deployment hook through Beanstalk, which is when the obsolete tables are being created.

I read in another question that this can happen when the class doesn't have `$db` defined or doesn't have a `$has_one` relationship. But that is not the case for us; the page has both set, and more.
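To illustrate, the affected page types declare both. Here is a minimal sketch of the shape such a class takes in SilverStripe 3.1 — the class and field names are invented placeholders, not our actual code, and the `Page` stub only stands in for the CMS base class so the snippet parses on its own:

```php
<?php
// Stub standing in for the SilverStripe CMS Page class,
// so this sketch is self-contained. Placeholder code only.
class Page {}

// In SilverStripe 3.1, dev/build creates a dedicated table for a
// subclass only when it declares its own fields or relations; a
// subclass with neither gets no table of its own.
class LandingPage extends Page
{
    // Each $db entry becomes a column on the LandingPage table.
    private static $db = array(
        'Subtitle'  => 'Varchar(255)',
        'IntroText' => 'HTMLText',
    );

    // Each $has_one entry becomes a <Name>ID foreign-key column,
    // e.g. HeroImageID here.
    private static $has_one = array(
        'HeroImage' => 'Image',
    );
}
```

Our real classes define entries like these and more, which is why the missing-`$db` explanation doesn't seem to fit.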

Server configuration:

  • SilverStripe 3.1 (up to date)
  • PHP: dev 5.6.16, staging 5.5.14, production 5.5.28
  • MySQL: dev 5.6.27, staging 5.1.73, production 5.1.73

It sounds to me like it could be a config cache issue of some sort.

I am not sure what other information is needed to diagnose this; just let me know and I will provide it.

nickspiel
  • Is there anything special about production such as load balancing? – irogue Dec 10 '15 at 21:58
  • Nah nothing like that. – nickspiel Dec 10 '15 at 22:01
  • Is it always the same few page types that are being obsoleted? – irogue Dec 10 '15 at 22:32
  • Yeah, there are two pages; they have no relationship to each other, but both are reasonably new. – nickspiel Dec 10 '15 at 22:36
  • 1
    For me this sounds as a casing issue on the filesystem or the db level. E.g it cant map the table name to a class so it removes it and on the next build it creates it back again. I'm not on my work computer to confirm this. Is there production and stating both using case sensitive filesystem or something minute differencies in the mysql configurations? – Olli Tyynelä Dec 12 '15 at 14:19
  • Also, I'd compare the filesystems of the servers: are there any differences in these particular files? – Olli Tyynelä Dec 12 '15 at 14:29
  • 1
    If those don't help then id suggest on cloning if possible the staging server and just start removing files until you get it to build without issues. If you cant clone it easily get this https://github.com/BetterBrief/vagrant-skeleton/releases/tag/v1.0.0 and making it match the staging server as close as possible. But don't run the ss from the shared folder. The share is case insensitive so you might not be able to replicate the issue. E.g change the vhost to map some other folder inside the vagrant box. – Olli Tyynelä Dec 12 '15 at 14:35
  • We are about to do this. Will let you know when we nut out what the issue is. – nickspiel Dec 14 '15 at 21:22
  • So it looks like it may have been a combination of things. Beanstalk was not clearing out old files, and there were inconsistencies all over the place. We have deployed from scratch and things are working as expected. I will monitor it for a couple of deployments to make sure. Thanks for your help on this, guys. – nickspiel Dec 16 '15 at 06:06
  • @nickspiel You should definitely write up a follow-up answer and mark it as the one that closes this question. The answer could imho be short and sweet: "dev/build had issues due to duplicate files caused by a failed automated deployment". – Olli Tyynelä Dec 28 '15 at 16:05

1 Answer


I am not sure exactly what was causing this, but it looks like our automated deployment process had left behind a couple of stray directories and files. We have moved to Composer and redeployed the affected projects from scratch, and everything is behaving now.

nickspiel