
We are using an NGINX + Gunicorn + Django + RQ architecture, with several rq workers. We use the basic Django logging settings (below), with rollover and a max file size, BUT:

  1. The created files are sometimes very small: a few bytes instead of the defined 2 MB.
  2. The number of rollover files is different from what was defined.

Questions:
a. Any idea why the actual number and size of the created files differ from what was defined?
b. Is it possible to have each Django rq worker log to a different file, with its own rollover and max-file-size policy?

Our logging settings:

LOG_FILE_MAX_SIZE_MB = int(os.environ.get('log_file_max_size_mb', 1))
LOG_FILES_ROTATE_NUM = int(os.environ.get('log_files_rotate_num', 8))

log_file_dir = os.path.dirname(LOG_FILE_FULL_PATH)
if not os.path.exists(log_file_dir):
    os.makedirs(log_file_dir, 0o777)  # 0777 is Python 2-only syntax; 0o777 works in both

DATE_TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
VERBOSE_LINE_FORMAT = '%(asctime)s - %(levelname)s - %(process)d - %(thread)d     - %(filename)s - %(message)s'
SIMPLE_LINE_FORMAT = '[%(levelname)-7s] %(asctime)s - %(message)s'

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': VERBOSE_LINE_FORMAT,
            'datefmt': DATE_TIME_FORMAT
        },
        'simple': {
            'format': SIMPLE_LINE_FORMAT,
            'datefmt': DATE_TIME_FORMAT
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose'
        },
        'fat_app_logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'maxBytes': 1024 * 1024 * LOG_FILE_MAX_SIZE_MB,
            'backupCount': LOG_FILES_ROTATE_NUM,
            'filename': LOG_FILE_FULL_PATH,
            'formatter': 'verbose'
        },
        'rq_app_logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'maxBytes': 1024 * 1024 * LOG_FILE_MAX_SIZE_MB,
            'backupCount': LOG_FILES_ROTATE_NUM,
            'filename': LOG_FILE_FULL_PATH,
            'formatter': 'verbose'
        },
    },
    'loggers': {
        'MainLogger': {
            'handlers': ['console', 'fat_app_logfile'],
            'propagate': True,
            'level': 'DEBUG',
        },
        'rq_scheduler': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

Thank you


1 Answer


The issue comes from multiple processes writing through the standard logging.handlers.RotatingFileHandler, which does not support concurrent writers: when one process rotates the file, the others keep writing through their now-stale file descriptors. Switching to cloghandler.ConcurrentRotatingFileHandler (from the ConcurrentLogHandler package, which serializes rotation across processes via a lock file) solves the issue.
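The symptom in question 1 (tiny rotated files) can be reproduced deterministically on POSIX without any workers at all: two RotatingFileHandler instances standing in for two worker processes open the same file, one rotates it, and the other keeps writing through its stale descriptor into the renamed backup. A minimal sketch (paths and sizes are illustrative, not from the question):

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "app.log")

# Two handlers stand in for two worker processes sharing one log file.
a = logging.handlers.RotatingFileHandler(path, maxBytes=200, backupCount=2)
b = logging.handlers.RotatingFileHandler(path, maxBytes=200, backupCount=2)

def record(msg):
    return logging.LogRecord("demo", logging.INFO, __file__, 0, msg, None, None)

a.emit(record("x" * 300))  # over maxBytes: "a" renames app.log -> app.log.1
b.emit(record("y" * 50))   # "b" still holds the old descriptor, so its bytes
                           # land in app.log.1: a rotated file of a few bytes

print(os.path.getsize(path), os.path.getsize(path + ".1"))
```

The rotated app.log.1 ends up holding only handler b's short write instead of a full maxBytes worth of log, which is exactly the "few bytes instead of 2MB" behavior described above.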

For example, in Django's settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    # 'filters': {
    #     'require_debug_false': {
    #         '()': 'django.utils.log.RequireDebugFalse'
    #     }
    # },
    'formatters': {
        'verbose': {
            'format': VERBOSE_LINE_FORMAT,
            'datefmt': DATE_TIME_FORMAT
        },
        'simple': {
            'format': SIMPLE_LINE_FORMAT,
            'datefmt': DATE_TIME_FORMAT
        },
    },
    'handlers': {
        'fat_app_logfile': {
            'level': 'DEBUG',
            'class': 'cloghandler.ConcurrentRotatingFileHandler',
            'maxBytes': 1024 * 1024 * LOG_FILE_MAX_SIZE_MB,
            'backupCount': LOG_FILES_ROTATE_NUM,
            'filename': LOG_FILE_FULL_PATH,
            'formatter': 'verbose'
        },
        # ... remaining handlers and loggers unchanged ...
    },
}
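As for question (b), an alternative to a shared concurrent handler is to give each worker process its own file, which keeps the plain RotatingFileHandler safe because each file has a single writer. A hedged sketch (the helper name, directory, and format string are my own, not from the question): build a handler whose filename embeds the worker's PID.

```python
import logging
import logging.handlers
import os

def make_worker_handler(log_dir, max_mb=2, backups=8):
    """Per-process rotating handler: the PID in the filename keeps each
    rq worker on its own file, so rollover is single-writer and safe."""
    os.makedirs(log_dir, exist_ok=True)
    filename = os.path.join(log_dir, "rq_worker_%d.log" % os.getpid())
    handler = logging.handlers.RotatingFileHandler(
        filename, maxBytes=max_mb * 1024 * 1024, backupCount=backups)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(levelname)s - %(process)d - %(message)s"))
    return handler
```

Attach it after the worker process starts, e.g. logging.getLogger('MainLogger').addHandler(make_worker_handler('/var/log/myapp')); if the handler is created before forking, children inherit the parent's descriptor and the rotation race returns.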

