
We are using MongoDB version 3 in an AWS environment with Linux AMIs.

Initially mongo was logging the entire document. We then lowered the verbosity in the YAML config. That seemed to stop most (99%) of the documents from being logged. However, we still find that it occasionally logs a record: it seems to do a WRITE and then a COMMAND, and both contain the entire record.

Is there any way to ensure the document never gets written to the log while still having useful logging?

Thanks

systemLog:
  quiet: true
  destination: file
  path: /var/log/mongodb.log
  logAppend: true
  logRotate: rename
  traceAllExceptions: false
  timeStampFormat: iso8601-utc
  verbosity: 1 # This will be inherited by any component with verbosity -1
  component:
    accessControl:
      verbosity: -1 # NOTE: Negative one (-1) means "inherit"
    command:
      verbosity: 0 # MUST BE ZERO!!! Otherwise, inserted/updated records (all the data) will get logged.
    control:
      verbosity: -1
    geo:
      verbosity: 0
    index:
      verbosity: -1
    network:
      verbosity: -1
    query:
      verbosity: -1
    replication:
      verbosity: -1
    sharding:
      verbosity: 0
    storage:
      verbosity: -1
    write:
      verbosity: 0 # MUST BE ZERO!!! Otherwise, inserted/updated records (all the data) will get logged.
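
For reference, the same component verbosity settings can be checked and changed at runtime from the mongo shell (runtime changes do not persist across restarts):

// Show the effective verbosity for every log component
db.getLogComponents()

// Example: set the write component verbosity to 0 at runtime
db.setLogLevel(0, "write")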

The version and logs look like this. Please note I typed the data in by hand, so any invalid JSON or typos are due to me, not mongo.

Version 3.0.6

TIMESTAMP I WRITE [conn0001] insert project.collection query: { <insert our json document here> }
ninserted:1
keyUpdates:0
writeConflicts:0
numYields:0
locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 2 }, acquireWaitCount: { w: 2 },
timeAcquiringMicros: { w: 119326 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { W: 1 } }, oplog: { acquireCount: { w: 1 } } } 119ms



TIMESTAMP I COMMAND [conn0001] insert project.$cmd command: insert { <insert our json document here> }
ninserted:1
keyUpdates:0
writeConflicts:0
numYields:0
reslen: 80
locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 2 }, acquireWaitCount: { w: 2 },
timeAcquiringMicros: { w: 119326 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { W: 1 } }, oplog: { acquireCount: { w: 1 } } } 119ms
user5524xx

1 Answer


Your insert queries are being logged because they are considered slow queries: they take longer than the default operationProfiling.slowOpThresholdMs value of 100ms. As of MongoDB 3.2, there isn't any configuration for which details are logged with slow queries, as this context is useful for understanding why the query is slow.

You can avoid logging slow inserts/commands by increasing the slowOpThresholdMs in your mongod configuration file. For example, setting a higher slowOpThresholdMs of 250ms might be enough to ensure most inserts aren't logged (although truly slow ones may still be):

operationProfiling:
    slowOpThresholdMs: 250

If you want to ensure slow operations are never logged you could set a much higher value, but this may suppress details that would be relevant to your deployment's performance.
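
If you want to experiment before editing the configuration file, the threshold can also be checked and adjusted at runtime from the mongo shell. A profiling level of 0 keeps the profiler off while the slowms value still controls the slow operation logging threshold:

// Check the current profiling level and slowms threshold
db.getProfilingStatus()   // e.g. { "was" : 0, "slowms" : 100 }

// Raise the slow operation threshold to 250ms (does not persist across restarts)
db.setProfilingLevel(0, 250)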

Is there any way to ensure the document never gets written to the log while still having useful logging?

Generally, useful logging for troubleshooting includes details of slow queries as well as connection/replication/authentication information (which you have suppressed with quiet:true).

Without logging those details you may have difficulty tuning and supporting a production environment.

If your concern is about access to private information in mongod log files, I would ensure you properly limit access to the log files via O/S and filesystem permissions, and either encrypt your backups or exclude sensitive log files from them. Viewing the mongod server logs requires more access than just logging in via the mongo shell, and anyone with permission to view the server logs presumably also has access to copy the data files.

Since your deployment is on AWS you could consider Amazon EBS Encryption which will encrypt data at rest inside the volume, data moving between the volume and the instance, and all snapshots created from the volume.

Another option to consider would be encrypting sensitive fields in your application so they are never transmitted, logged, or saved in cleartext.
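
As a rough illustration of that approach, here is a minimal sketch assuming a Node.js application and AES-256-GCM via Node's built-in crypto module; the key handling and field names are illustrative only:

var crypto = require('crypto');

// Minimal sketch: encrypt a sensitive field before it is sent to MongoDB.
// Assumes a 32-byte key managed outside the application (e.g. a KMS).
function encryptField(plaintext, key) {
  var iv = crypto.randomBytes(12); // unique IV per encrypted value
  var cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  var ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store the IV and auth tag alongside the ciphertext so the value can be decrypted later
  return {
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64'),
    data: ciphertext.toString('base64')
  };
}

// The document mongod sees (and could write to its log) only ever contains ciphertext:
// db.collection.insert({ name: 'public value', ssn: encryptField('123-45-6789', key) });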

For more information on securing your deployment, see the MongoDB Security Checklist.

Stennie
  • Thanks for the response. I am running a job now and cannot stop it. Once it is done I will test out the operationProfiling options. Thanks – user5524xx Jun 09 '16 at 12:14
  • @user5524xx FYI, you can also change the `slowms` at runtime (but the setting won't persist) using: `db.setProfilingLevel(0, slowms)`. For example, try setting to 250ms: `db.setProfilingLevel(0, 250)`. – Stennie Jun 09 '16 at 13:42
  • I set db.setProfilingLevel(0, 250) and got no logs, and then I set it back with db.setProfilingLevel(0) and still no logs. I ran this on a much smaller set of data. I will have to wait until next week, when I run a larger set of documents through, to see what the outcome is. -- thanks – user5524xx Jun 09 '16 at 17:52
  • Hi Stennie. Well I ran 7/8 of my data through it with it set to 500ms and no leakage. I then changed the setting back to 100ms and I immediately got leakage. So it seems that is a solution. – user5524xx Jun 15 '16 at 17:47