I am trying to read from one Mongo collection and then take all of that data and save it to a new collection. I am using the same schema, but changing the collection name in two different models:

UserInfo.js:

const mongoose = require('mongoose')

const userSchema = mongoose.Schema({
    userID : String,
    userName : String
})

let userInfo = mongoose.model('UserInfo', userSchema)
let backUpUserInfo = mongoose.model('BackUpUserInfo', userSchema)

module.exports = {
    userInfo : userInfo,
    backUpUserInfo : backUpUserInfo
}

I inserted data into the userinfos collection, and now I am trying to read it back and insert it into the backupuserinfos collection:

backup.js:

const UserInfo = require('../Schema/UserInfo').userInfo;
const BackUpUserInfo = require('../Schema/UserInfo').backUpUserInfo;

async function backUp(){ 
    UserInfo.find({}, async function(err1, userInfo){   
        if (err1) return console.log(err1) 
        for(let i in userInfo){
            try{
                let backUpUser = new BackUpUserInfo(userInfo[i])
                await backUpUser.save(function (err2, user) {
                    if (err2) return console.error(err2)
                    console.log("collection updated")
                })
            }
            catch(err){
                console.log(err)
            }
        }
    })
}

I am getting an error that seems to indicate a duplicate is being found when the save runs, even though the backup collection is empty. That makes me think it is writing to the user info collection instead of the backup one. I create the document from the backup model and call save on that instance, so I would expect the write to go to the backup collection. Am I doing something wrong here?

Here is the error:

VersionError: No matching document found for id

csean11
  • What is your motivation for doing this? If you're trying to create a backup for security reasons, then the proper way to do this is by running a mongodump command against your mongodb process. This will create a compressed backup of all your data that you can then store somewhere. Not only is this more efficient, it's also a much better safety measure. If something happens and your database gets corrupted somehow, your backup collection will get destroyed. The dump files are on your hard drive and wouldn't be affected in that scenario. – Charles Desbiens Aug 17 '20 at 20:21
  • I am trying to set up a daily process that will take the current collection and throw that into the back up collection. The back up collection will have a total of 6 days worth of data at all times with the time to live set up to kill it once it reaches a week old. With mongodump am I able to do that? – csean11 Aug 17 '20 at 20:58
  • What is your budget for this project? There are a lot of complicated issues that can arise from trying to set up a custom backup strategy. You could save yourself a lot of headaches by moving over to a cloud database-as-a-service provider like MongoDB Atlas. Even the cheap plans come with pretty robust managed backups that are almost certainly going to be better than whatever custom solution you create. On top of that, Atlas comes with a ton of other useful features. – Charles Desbiens Aug 17 '20 at 21:16
  • To answer your question though: I have done exactly this with a bash script. I set up a cron job on my server, and once per day it mongodumps the database. It's not complicated. You can do it in 2 lines. – Charles Desbiens Aug 17 '20 at 21:19
  • I am using CosmosDB with Azure, but I am very new to it and was unaware if this functionality is provided there. You are saying that this auto backup can be done code free through Azure? – csean11 Aug 17 '20 at 21:28
  • I don't know anything about Azure CosmosDB, but I would be surprised if they didn't have a backup service. DBaaS solutions pretty much always have them. I know Atlas does. The way it works is you can tell them how often you want the data to be backed up, and how long you want them to keep the backups. No code, it's just a setting in your dashboard. – Charles Desbiens Aug 17 '20 at 22:27
  • Thanks so much for the help Charles. I will go forward using the Azure functionality. – csean11 Aug 18 '20 at 17:08

1 Answer


The easiest way in Mongoose would be:

UserInfo.aggregate([ { $match: {} }, { $out: "BackupUserInfo" } ])
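
For illustration, here is a minimal sketch of wiring that into the question's backUp() function (assuming the models exported from UserInfo.js above; note that $out takes a raw collection name, so the sketch reads it off the backup model rather than hard-coding it, since Mongoose pluralizes and lowercases model names):

// a minimal sketch, assuming the models exported from UserInfo.js above
const { userInfo: UserInfo, backUpUserInfo: BackUpUserInfo } = require('../Schema/UserInfo')

async function backUp() {
    // $out replaces the target collection with the result of the pipeline;
    // the collection name is taken from the backup model rather than
    // hard-coded, because Mongoose maps 'BackUpUserInfo' to 'backupuserinfos'
    await UserInfo.aggregate([
        { $match: {} },
        { $out: BackUpUserInfo.collection.name }
    ])
    console.log('backup collection written')
}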

Otherwise you can use a plain MongoDB shell query or copyTo():

db.myoriginal.aggregate([ { $match: {} }, { $out: "mycopy" } ])

//OR

db.source.copyTo("target"); 

But copyTo() has been deprecated since MongoDB 3.0 and was removed in 4.2.
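
If the backup collection should keep its existing documents (see the comment below about $out replacing the target), one alternative is a plain read-and-insert. This is a sketch only, against the question's models, using a hypothetical backUpAppend() helper; dropping _id and the __v version key lets MongoDB assign fresh ids on each run:

// a sketch only, assuming the same models from UserInfo.js
const { userInfo: UserInfo, backUpUserInfo: BackUpUserInfo } = require('../Schema/UserInfo')

async function backUpAppend() {
    // lean() returns plain objects, so no document state (isNew,
    // version tracking) is carried over into the copies
    const users = await UserInfo.find({}).lean()

    // strip _id and __v so the insert gets fresh ids; reusing the old
    // _id values would throw duplicate-key errors on repeated backups
    const copies = users.map(({ _id, __v, ...rest }) => rest)

    if (copies.length) {
        await BackUpUserInfo.insertMany(copies)
        console.log(`backed up ${copies.length} documents`)
    }
}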

Radical Edward
  • I looked up the aggregate documentation, but I am unsure what it does in this scenario. I am trying to take a current collection and place all the contents into a back up collection. Would aggregate help in that scenario? – csean11 Aug 17 '20 at 21:01
  • The problem with using the $out command is that it replaces the target collection. It's not made for regular backups; it's meant to store the result of a query. Every time you run this, you will lose your old data. – Charles Desbiens Aug 17 '20 at 21:22
  • I definitely want to prevent that, thanks for pointing that out. – csean11 Aug 17 '20 at 21:28
  • @csean11 do you need to copy the document `_id` as well? – Radical Edward Aug 18 '20 at 06:28