
I followed this Kubernetes example to create a WordPress and MySQL deployment with persistent data.

I followed everything in the tutorial, from creation of the disks through deployment, and on the first try the deletion steps as well.

1st try

https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-25-33.png

Problem: the persistent volumes do not bind to the persistent volume claims. Both the pod and the volume claims remain in Pending status, and the volumes remain in Released state.
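For context, a PVC only binds to a PV whose capacity and access modes satisfy the claim, and a PV left in Released state keeps its old `claimRef` and will not bind to a new claim until that reference is cleared. A minimal matching pair looks roughly like this (the disk name `wordpress-disk` and the sizes are assumptions, not from the tutorial):

```yaml
# Sketch of a matching PV/PVC pair (names and sizes are assumptions).
# The PVC stays Pending unless some PV's capacity and accessModes satisfy it;
# a Released PV retains its old claimRef and will not rebind automatically.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: wordpress-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```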

I had to delete everything as described in the example and try again. This time I mounted the created volumes on an instance in the cluster, formatted the disks with an ext4 filesystem, then unmounted them.
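The format-and-unmount step can be sketched as follows, assuming GCE, an instance named `node-1`, a disk named `wordpress-disk`, and the disk appearing as `/dev/sdb` (all of these names are assumptions for illustration):

```shell
# Sketch: attach, format, and detach a GCE persistent disk.
# Instance/disk names and the /dev/sdb device path are assumptions.
gcloud compute instances attach-disk node-1 --disk wordpress-disk
gcloud compute ssh node-1
sudo mkfs.ext4 -F /dev/sdb                      # format once; destroys any existing data
sudo mount /dev/sdb /mnt && sudo umount /mnt    # optional sanity check, then exit the instance
gcloud compute instances detach-disk node-1 --disk wordpress-disk
```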

2nd try

https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-26-21.png

Problem: after formatting the volumes, they are now bound to the claims, yay! Unfortunately the mysql pod doesn't run, with status CrashLoopBackOff. Eventually the wordpress pod crashed as well.

https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-27-22.png

Did anyone else experience this? I'm wondering if I did something wrong, or if something has changed since the example was written that made it break. How do I go about fixing it?

Any help is appreciated.

1 Answer


Get the logs for the pods:

kubectl logs pod-name
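For a pod stuck in CrashLoopBackOff, the current container may have no log output yet; the `--previous` flag retrieves the output of the last terminated container, and `kubectl describe pod` shows the pod's recent events and last container state:

```shell
# Fetch output of the previous (crashed) container instance:
kubectl logs pod-name --previous

# Show pod details, including last state and recent events:
kubectl describe pod pod-name
```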

If the logs indicate the pods are not even starting (CrashLoopBackOff), investigate the events in k8s:

kubectl get events

The event log indicates the node is running out of memory (OOM):

    LASTSEEN   FIRSTSEEN   COUNT   NAME                                     KIND   SUBOBJECT   TYPE      REASON      SOURCE                                                     MESSAGE
    1m         7d          1555    gke-hostgeniuscom-au-default-pool-xxxh   Node               Warning   SystemOOM   {kubelet gke-hostgeniuscom-au-default-pool-xxxxxf-qmjh}    System OOM encountered

Trying a larger instance size should solve the issue.
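In addition to a larger instance, setting a memory request and limit on the containers helps the scheduler avoid placing pods on nodes without enough free memory, and caps any one pod so it cannot OOM the whole node. A sketch for the mysql container (the image tag and the values are assumptions, not from the example):

```yaml
# Sketch: memory request/limit on the mysql container (values are assumptions).
# The request steers scheduling; the limit bounds the container's memory use.
containers:
  - name: mysql
    image: mysql:5.6
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"
```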

Oswin Noetzelmann
  • Here is the most recent log for both pods; it doesn't look like I was able to retrieve the log for mysql: https://gist.github.com/dorwardv/3e316bb50745e46d83e0133415aebeee However, I did get a log from before it crashed, while the database was initializing: https://gist.github.com/dorwardv/7aa0bb21986acb3a3ff338b8d81aeb0f I hope this helps. Thanks Oswin – Dorward Villaruz Apr 03 '17 at 01:44
  • The wordpress log indicates that it fails because of a missing sql connection. So we need to find out why the mysql pod did not even start. Possibly still an error with attaching the storage. Can you run kubectl get events and post the output? – Oswin Noetzelmann Apr 03 '17 at 02:04
  • https://gist.github.com/dorwardv/c13ce62617009649a616bd39a8b786c5 It looks like OOMs. Are my instances not big enough to cater to the pods? They are f1-micros. Seems strange, as tutum/wordpress is able to run a mysql/apache service on an f1-micro just fine. – Dorward Villaruz Apr 03 '17 at 02:27
  • I will try with beefier instances in a cluster and will post back here – Dorward Villaruz Apr 03 '17 at 02:51
  • Thanks Oswin I was able to make it work using a g1-small instance. – Dorward Villaruz Apr 04 '17 at 05:42