
In OpenShift 3 I am hosting a Java application. I had created MySQL, then deleted it, and now when I try to create MySQL again it shows the error "The container MySQL is crashing frequently". I have attached images for reference: "SQLCrash_Image", "ConsoleOutput_Image". New images: "Monitoring&Events_Image", "Logs_image", "Monitoring&Events_Image2". Can anyone help get this resolved?

LOGS:

D:\Openshift ocCommands>oc get pods
NAME             READY     STATUS             RESTARTS   AGE
mysql-2-deploy   0/1       Error              0          9h
mysql-3-9rmt3    0/1       CrashLoopBackOff   4          2m
mysql-3-deploy   1/1       Running            0          2m

D:\Openshift ocCommands>oc logs mysql-3-9rmt3
error: Invalid MySQL username
You must either specify the following environment variables:
MYSQL_USER (regex: '^[a-zA-Z0-9_]+$')
MYSQL_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
MYSQL_DATABASE (regex: '^[a-zA-Z0-9_]+$')
Or the following environment variable:
MYSQL_ROOT_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
Or both.
Optional Settings:
MYSQL_LOWER_CASE_TABLE_NAMES (default: 0)
MYSQL_LOG_QUERIES_ENABLED (default: 0)
MYSQL_MAX_CONNECTIONS (default: 151)
MYSQL_FT_MIN_WORD_LEN (default: 4)
MYSQL_FT_MAX_WORD_LEN (default: 20)
MYSQL_AIO (default: 1)
MYSQL_KEY_BUFFER_SIZE (default: 32M or 10% of available memory)
MYSQL_MAX_ALLOWED_PACKET (default: 200M)
MYSQL_TABLE_OPEN_CACHE (default: 400)
MYSQL_SORT_BUFFER_SIZE (default: 256K)
MYSQL_READ_BUFFER_SIZE (default: 8M or 5% of available memory)
MYSQL_INNODB_BUFFER_POOL_SIZE (default: 32M or 50% of available memory)
MYSQL_INNODB_LOG_FILE_SIZE (default: 8M or 15% of available memory)
MYSQL_INNODB_LOG_BUFFER_SIZE (default: 8M or 15% of available memory)

 For more information, see https://github.com/sclorg/mysql-container
Karthik
  • Go to the logs for the pod and see what the error is, if it is coming from MySQL. Also look at the events under Monitoring to see if they show a more detailed error, in case it relates to something outside of MySQL. – Graham Dumpleton Dec 09 '17 at 08:58
  • Hi Graham, thanks for being in touch. I have added two more images to the question above, "Monitoring&Events_Image" and "Logs_image", for more detailed information. – Karthik Dec 10 '17 at 18:25
  • Please don't attach images. None of those images is really useful. You want the logs for the pod, not the deployment. Use 'oc logs' as Dave said to, not the web console. For the events, you needed to go into 'View events' to see details properly. Or use 'oc get events' on the command line and find those which are the error events. – Graham Dumpleton Dec 10 '17 at 20:06
  • As @GrahamDumpleton mentioned please include text output of logs, I've updated the answer with a very detailed guide of how to get the required logs. If you can follow the steps and let me know how it goes and update the question that'd be great! – Dave Kerr Dec 11 '17 at 02:56
  • Hi @GrahamDumpleton, I have updated my question with the logs, you can see it now. – Karthik Dec 11 '17 at 04:26
  • And you haven't been able to work out what your problem is from that? You provided an invalid user name: whatever you gave it had characters which aren't allowed. Edit the deployment configuration for the database and change the username value to something else. It should be safe to do this without needing to delete the app and start over, as the database hasn't been set up yet. – Graham Dumpleton Dec 11 '17 at 04:44
  • Thanks @karthik I've added the solution under 'step 5', hope that helps! Please upvote and mark as the answer if this works for you :) – Dave Kerr Dec 11 '17 at 04:58
  • If MySQL is deployed from the service catalog in the web console, which uses a template, it shouldn't have been possible not to provide the environment variables for user, password and database name. This begs the question of whether you are trying to use a third-party MySQL image from Docker Hub, which likely will not work for other reasons as well, as images from there don't usually follow best practices and expect to run as root rather than as an arbitrary user, as OpenShift requires. You should deploy it from the service catalog. – Graham Dumpleton Dec 11 '17 at 05:06

1 Answer


How to diagnose crash loops

Here are the steps I would suggest following. If you can provide the output from each step, that'll help us see the issue.

Step 1: Install the OC client

There is only so far you can go through the UI; you'll need the oc client for deeper troubleshooting.

First, log into your cluster through the web interface. Choose the question mark at the top right of the screen and select 'Command Line Tools':

Screenshot of CLI Instructions for OpenShift

Follow the instructions to download and install the client.
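
Once the client is installed, log in from the command line. The exact command, with your real token and cluster URL, is shown on the same 'Command Line Tools' page; the values below are only placeholders to show the shape of it:

# copy the real login command from the web console; these values are placeholders
oc login https://your-cluster.example.com:8443 --token=<token>
oc project <your-project-name>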

Step 2: Show the current pods

Once you've got the client and logged in, run:

oc get pods

This should show a list of the pod names. Please paste the output into your question (as text, not an image!).
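
If there are a lot of pods, you can optionally filter for the unhappy ones; this is just a plain grep over the output, nothing OpenShift-specific:

# show only pods that are failing
oc get pods | grep -E 'Error|CrashLoopBackOff'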

Step 3: Find the crashing pod and get its logs

You'll have a pod which is crashing; it will be called something like mysql-2-6c009. We'll need the logs from it. Paste the output of:

oc logs mysql-2-6c009
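
If the container has already been restarted and the current logs come back empty, the logs from the previous attempt can sometimes help (on older OpenShift versions this flag was needed for a pod in CrashLoopBackOff; newer versions are usually smarter about it):

oc logs --previous mysql-2-6c009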

Step 4: If you cannot find the pod, redeploy it

If you cannot see the pod any more because the deployment has failed, try running:

oc rollout latest mysql

Then run oc get pods again until you see the crashing pod.
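
You can also follow the rollout directly rather than polling oc get pods; assuming the deployment config is called mysql (as in your output), this waits until the latest deployment finishes or fails:

oc rollout status dc/mysql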

Step 5: Solving the problem!

The logs show the issue: you are not specifying the environment variables required to set the database up properly. We can see the same if we check the docs on [OpenShift - MySQL](https://docs.openshift.com/enterprise/3.0/using_images/db_images/mysql.html#environment-variables):

You must specify the user name, password, and database name. If you do not specify all three, the pod will fail to start and OpenShift will continuously try to restart it.

To set the values, try this:

oc set env dc/mysql MYSQL_USER=user MYSQL_PASSWORD=P@ssw0rd MYSQL_DATABASE=db1

This will update your deployment config with the variables. It should re-deploy automatically if you have configured it to update when the configuration changes; if it doesn't, run:

oc rollout latest dc/mysql
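
To double-check what ended up in the deployment config, you can list its environment variables; this only reads the config, it changes nothing:

oc set env dc/mysql --list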

In the future, you can create the app with the environment variables set in the first place like this:

oc new-app -e \
MYSQL_USER=<username>,MYSQL_PASSWORD=<password>,MYSQL_DATABASE=<database_name> \
registry.access.redhat.com/openshift3/mysql-55-rhel7

See these docs for details.
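
Alternatively, as Graham points out in the comments, deploying MySQL from the service catalog in the web console uses a template that forces you to supply the user, password and database name. A rough command-line equivalent would be something like the following; the template and parameter names here are assumptions and can vary between OpenShift versions, so verify them with oc describe template mysql-persistent -n openshift first:

# template and parameter names are assumptions - verify against your cluster
oc new-app --template=mysql-persistent \
  -p MYSQL_USER=user -p MYSQL_PASSWORD=P@ssw0rd -p MYSQL_DATABASE=db1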


Tips and tricks

How pod names work

The pods have names which give some detail. Here's how they work:

mysql-2-deploy

This means it is the second deployment of the mysql service. This is the pod which orchestrates that specific deployment.

mysql-2-6c009

This means it is the mysql service, deployed during the second deployment. The random characters at the end come from the pod ID; they have to be there because you could deploy many instances of a service across many pods.
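
If you just want the pod names without the other columns (handy when scripting around oc), a couple of variants Graham mentions in the comments work well; the second form drops the pod/ prefix:

# names with a pod/ prefix
oc get pods -o name
# bare names, no prefix
oc get pods -o custom-columns=name:.metadata.name --no-headers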

Looking at pods

As you get more familiar with the command-line tool, you might find yourself running oc get pods and similar commands a lot. If you are on Linux, you can use the watch tool to help (on a Mac, just do brew install watch). Then run:

watch -n 1 -d oc get pods

This command will show you a live view of the pods, updated every second:

watch       # run the following command repeatedly, showing the output
-n 1        # run every second (this is optional, the default is 2s)
-d          # show a diff, highlighting the changes as they happen
oc get pods # the command to watch

This command is super useful and you'll use it a lot!
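
If you don't have the UNIX watch tool available, oc can do the watching itself, as Graham notes in the comments:

# stream pod status changes as they happen
oc get pods --watch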

Quickly get logs for a pod

Try this bash function:

# podlogs <pod-name-fragment> <duration>, e.g. podlogs mysql 5m
function podlogs() {
  echo "Getting logs for $1 for the last $2"
  # follow the logs of the running, non-deploy pod whose name matches $1
  oc logs -f --since="$2" "$(oc get pods | grep "$1" | grep 'Running' | grep -Ev 'deploy' | awk '{print $1}')"
}

It'll let you run a command like this:

# get all logs for containers which match 'mysql' for the last 5 mins
podlogs mysql 5m

Kudos to my buddy Praba for the last tip.

Please update the question with the relevant logs and we can take it from there!

Dave Kerr
  • AFAIK, there hasn't been a strict need for ``--previous`` for a number of OpenShift versions. It was indeed needed previously when you had a ``CrashLoopBackOff``, but it should be a bit more intelligent about it now. If you are finding you still need this, I would be interested to hear more about the exact situation. – Graham Dumpleton Dec 10 '17 at 06:41
  • Hi Dave, thanks for being in touch. I have added two more images to the question above, "Monitoring&Events_Image" and "Logs_image", for more detailed information. @DaveKerr – Karthik Dec 10 '17 at 18:26
  • Hi @Karthik, I'll update the answer with the steps to diagnose this problem; there'll need to be a little bit more. – Dave Kerr Dec 11 '17 at 01:40
  • You can use ``oc get pods --watch`` to monitor pods over time. Don't need to use UNIX ``watch`` command. – Graham Dumpleton Dec 11 '17 at 03:29
  • To get just name of pods without all the other fields, use ``oc get pods -o name``. That includes a ``pod/`` prefix though. So can also use ``oc get pods -o custom-columns=name:.metadata.name --no-headers`` to get name without prefix. You might want to use a ``--selector`` with label to qualify it to set of pods for an application. – Graham Dumpleton Dec 11 '17 at 03:30
  • Hi @DaveKerr, I have updated my question with the logs, you can see it now. – Karthik Dec 11 '17 at 04:25
  • Thanks @GrahamDumpleton nice tip on the name with no prefix! I know about the `--watch` option just find myself defaulting to the UNIX tool as I use it for lots of other things too! – Dave Kerr Dec 11 '17 at 05:00
  • It is resolved. Thank you so much, both of you. @DaveKerr – Karthik Dec 11 '17 at 15:22
  • The problem is resolved. Thank you so much, both of you. @GrahamDumpleton – Karthik Dec 11 '17 at 15:22
  • Great news @Karthik! Would you mind accepting the answer so the question shows as completed? – Dave Kerr Dec 12 '17 at 03:12