
Our application logs a large amount of data, so once per week we have to archive it and move it off the production machine. Right now this is a manual process, but I am automating it. Basically, I run 'mongodump', compress the output, move it into the cloud, and then delete the logged data on the production machine.

My question is: how do I ensure mongodump was successful before I delete all of the documents in the database? Basic pseudocode below:

if (mongodumpIsSuccessful) {
    // delete all documents in log collection
} else {
    // handle failed mongodump
}

I have looked through the documentation but I can't seem to find anything. Is there a better way to accomplish what I am trying to do that doesn't use mongodump? Thanks.

jteezy14

1 Answer


I'm using a bash script for the dump portion and then, if successful, doing the delete via a Java application (because in our use case, we need to delete from several related collections).

mongodump --db "${DB}" --host "${HOST}" --collection "$1" \
    --out "${ARCHIVE_PATH}" \
    --query <query> \
    >> "$LOG" 2>&1

if [ $? -ne 0 ]; then
    echo "mongodump failed for collection $1, exiting" >> "$LOG"
    mail -s "$HOST/$DB mongo archive & delete" -r "${FROM}" "${MAILTO}" < "$LOG"
    exit 1
fi

java -cp ".:lib/*" DeleteBeforeDate "${DATE}" "${DB}" "${HOST}" >> "$LOG" 2>&1
RESULT=$?

if [ $RESULT -eq 0 ]; then
    rm "$LOG"
else
    mail -s "$HOST/$DB mongo archive & delete" -r "${FROM}" "${MAILTO}" < "$LOG"
    rm "$LOG"
    exit ${RESULT}
fi

exit 0
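One extra safeguard worth considering before the delete step: the exit status tells you mongodump ran to completion, but it doesn't prove the archive actually landed on disk. A cheap sanity check is to verify the dumped `.bson` file exists and is non-empty. The path convention here is an assumption based on how `mongodump --out` lays files out (`<out>/<db>/<collection>.bson`), and the demo uses a temp file standing in for the real dump:

```shell
#!/usr/bin/env bash

# Returns 0 only if the given dump file exists and has non-zero size.
verify_dump() {
    local bson_file="$1"
    [ -s "$bson_file" ]
}

# Demo: a temp file stands in for ${ARCHIVE_PATH}/${DB}/<collection>.bson
tmp=$(mktemp)
printf 'fake bson bytes' > "$tmp"

if verify_dump "$tmp"; then
    check="non-empty"       # proceed to the delete step
else
    check="empty-or-missing"  # treat as a failed dump: alert, don't delete
fi
echo "$check"
rm -f "$tmp"
```

In the script above you would call something like `verify_dump "${ARCHIVE_PATH}/${DB}/$1.bson"` right after the mongodump exit-status check, and bail out the same way if it fails.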
Kyrstellaine