
Working on a deploy script, I ran into issues that I cannot explain. We have a production server (Ubuntu server 10.04), a repository (Gitlab), and a development server (Ubuntu 14.04). I work on the development server, and from there run the deploy script which, after removing everything else, does only this:

git archive --format=tgz $MASTER | ssh $HOST "mkdir $NEW && tar xzf - -C $NEW || echo Could not create folder"
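Incidentally, the `A && B || C` chain in that one-liner means the "Could not create folder" message also fires when `tar` fails, not only when `mkdir` fails, which can hide the real cause of a failed deploy. A minimal local demonstration of that operator behavior (the `/tmp` path is illustrative, no ssh involved):

```shell
# In "A && B || C", C runs whenever the last command that ran failed.
# Here mkdir -p succeeds and false fails, so the fallback fires even
# though the "mkdir" step was fine.
mkdir -p /tmp/prec-demo && false || echo "fallback ran"   # prints "fallback ran"

# Separating the steps makes each failure attributable:
mkdir -p /tmp/prec-demo || { echo "mkdir failed" >&2; exit 1; }
echo "mkdir ok"
```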

The folder gets created properly. The files seem correct. But Apache spits various errors, usually relating to undeclared classes or variables, but different at each attempt. Moving the new folder out of the way and moving the old folder back into place works fine. Moving the old folder out of the way and moving the new folder back into place breaks. There is no noticeable difference between the two folders.

Then things get interesting:

- If I use the new folder, it breaks. If I move various subfolders from the old folder into the new folder, it eventually works. But no single folder is sufficient, and which folders help is not consistent. It seems to be some sort of "critical mass": once enough of the old files are there, it works. Running `ls -la` shows the same files with the same sizes, ownerships, and permissions. Running `diff -r` shows no difference between the folders or files.
- If I use the old folder, it works. If I then move some subfolders or files from the new folder into the old one, it eventually breaks. The behavior mirrors the previous point.
- If I move both old and new out of the way and use `cp -a` to make an exact duplicate of the folder, it works fine. (EDIT: Not always. I just ran a test where the original failed, the copy failed differently, and a copy of the copy worked.)
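To go beyond `diff -r`, one can checksum every file in both trees and compare the sorted lists; this gives a persistent record of exactly which paths differ. A sketch (directory names under `/tmp` are illustrative, the two trees are built here just to show the technique):

```shell
# Build two small trees that differ by a single byte in one file.
mkdir -p /tmp/old/sub /tmp/new/sub
printf 'hello\n' > /tmp/old/sub/a.php
printf 'hellO\n' > /tmp/new/sub/a.php

# Checksum every file relative to its root; sort for a stable order.
(cd /tmp/old && find . -type f -exec sha256sum {} + | sort) > /tmp/old.sums
(cd /tmp/new && find . -type f -exec sha256sum {} + | sort) > /tmp/new.sums

# diff exits non-zero and names the mismatching entries.
diff /tmp/old.sums /tmp/new.sums || echo "trees differ"
```

`md5deep -r`, mentioned in the comments below, produces an equivalent recursive checksum list in one command where it is installed.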

So the files are identical, yet somehow bad. I have also seen cases where saving a file with `vi` changed the behavior, i.e. seemingly "cleaned" the file. I don't understand what could be wrong with the files that is invisible to the eye, yet perfectly fine according to all software besides Apache/PHP.

What could possibly be wrong with the files that only Apache would pick up? What commands could reveal it?
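Since `diff -r` already compares file contents byte for byte, any remaining difference would presumably live in metadata rather than content. These are the kinds of commands I could imagine surfacing it (a sketch on an illustrative `/tmp` file; `lsattr` needs e2fsprogs and `getfattr` the attr package, which may not be installed everywhere):

```shell
# Create a throwaway file to inspect.
f=/tmp/meta-demo.txt
printf 'x\n' > "$f"

# Exact size, inode, mode, owner, and nanosecond timestamps.
stat "$f"

# ext2/3/4 attribute flags (immutable, append-only, ...).
lsattr "$f" 2>/dev/null || echo "lsattr not available here"

# Extended attributes: ACLs, SELinux labels, user xattrs.
getfattr -d -m - "$f" 2>/dev/null || echo "getfattr not available here"
```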

Franklin Piat
  • What happens if you archive to a file, copy the file manually, and then unpack it? – user9517 Mar 13 '15 at 06:47
  • You mean tar the new folder into an archive then extract it? The extracted folder seems to have the same behavior: seemingly random errors, mostly related to undeclared classes or functions. – Guilhem Malfre Mar 13 '15 at 07:00
  • No, instead of piping it through ssh, get git to write the archive to a file – user9517 Mar 13 '15 at 07:01
  • Thanks for looking into this. If I replace the tar line with something along the lines of `git archive --format=tar --prefix=$PROJECT.new/ $MASTER | gzip > ~/deploy.$PROJECT.tar.gz; scp -q ~/deploy.$PROJECT.tar.gz $HOST:~; ssh $HOST << EOF tar -xzf ~/deploy.$PROJECT.tar.gz -C $NEW; rm -f ~/deploy.$PROJECT.tar.gz; EOF` then it works. But it adds several steps to the deploy script, takes more disk space, potentially overwrites files, and does not explain the behavior described above. – Guilhem Malfre Mar 13 '15 at 07:36
  • It was just the first step in diagnosing the problem. If you untar the archive on the source system and then use the failing process to send the same stuff to the destination, then use `md5deep -r` on both to get a list of files and checksums to see what's changed. – user9517 Mar 13 '15 at 09:10
  • Using md5deep rather than diff to check for differences is an interesting idea, although if it found a difference it would not point out what it is. I will try that when I get a chance. – Guilhem Malfre Mar 13 '15 at 10:24

0 Answers