11

I have a collection with a field called "contact_id". The collection contains duplicate documents with the same value for this key.

How can I remove the duplicates, so that only one document remains per contact_id?

I already tried:

db.PersonDuplicate.ensureIndex({"contact_id": 1}, {unique: true, dropDups: true}) 

But it did not work, because the dropDups option is no longer available in MongoDB 3.x.

I'm using version 3.2.

Xavier Guihot
Jhonathan

5 Answers

29

Yes, dropDups is gone for good. But you can definitely achieve your goal with a little effort.

You first need to find all the duplicate documents and then remove all of them except the first.

db.dups.aggregate([
  { $group: { _id: "$contact_id", dups: { $push: "$_id" }, count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
]).forEach(function(doc) {
  doc.dups.shift();                            // keep the first _id of each group
  db.dups.remove({ _id: { $in: doc.dups } });  // delete the remaining duplicates
});

As you can see, doc.dups.shift() removes the first _id from the array, so the subsequent remove() only deletes the documents whose _ids remain in the dups array.

The script above removes every duplicate document while keeping one per contact_id.
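For larger collections, a minimal variant of the same idea (a sketch, assuming the same dups collection and contact_id key; the batch size of 1000 is arbitrary) streams the cursor with allowDiskUse and batches the deletes with bulkWrite, which is available from shell version 3.2:

// Sketch for large collections: same approach, with allowDiskUse and batched deletes
var ops = [];
db.dups.aggregate(
  [
    { $group: { _id: "$contact_id", dups: { $push: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
  ],
  { allowDiskUse: true } // let $group spill to disk instead of hitting the 100MB limit
).forEach(function(doc) {
  doc.dups.shift(); // keep the first _id of each group
  ops.push({ deleteMany: { filter: { _id: { $in: doc.dups } } } });
  if (ops.length >= 1000) { // flush in batches to keep memory usage bounded
    db.dups.bulkWrite(ops);
    ops = [];
  }
});
if (ops.length > 0) db.dups.bulkWrite(ops);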

Saleem
  • Hi. It partially worked. On a small collection it works fine, but when I run it on a big collection the database "locks" and other queries time out. – Jhonathan Mar 02 '16 at 12:44
  • Well, this is expected, especially on very large collections. The MongoDB aggregation pipeline is limited to 100MB of RAM usage; see https://docs.mongodb.org/manual/aggregation/. Also, your question didn't mention anything about your collection size. This solution shows how to approach the problem and doesn't cover every edge case, so you'll need to put in some effort too. – Saleem Mar 02 '16 at 12:49
  • Thanks for the help. I'll look into this. – Jhonathan Mar 02 '16 at 12:51
  • For large collections, you may need to use the [Bulk Write Operations](https://docs.mongodb.com/v3.0/core/bulk-write-operations/) feature. – Vince Bowdren Aug 18 '16 at 09:50
  • I just want to add that you can group by multiple keys by listing them, and that you can push the whole document (not only the keys you group by) for more context. This is how: [{$group: {_id: ["$key1", "$key12", "$key42"], dups: {$push: "$$ROOT"}, count: {$sum: 1}}}, {$match: {count: {$gt: 1}}}] – meridius Jul 20 '23 at 11:01
5

This is a good pattern for MongoDB 3+ that also ensures you will not run out of memory, which can happen with really big collections. You can save this to a dedup.js file, customize it, and run it against your desired database with: mongo localhost:27017/YOURDB dedup.js

var duplicates = [];

db.runCommand({
  aggregate: "YOURCOLLECTION",
  pipeline: [
    // collect the _ids of all documents sharing the same DUPEFIELD value
    { $group: { _id: { DUPEFIELD: "$DUPEFIELD" }, dups: { "$addToSet": "$_id" }, count: { "$sum": 1 } } },
    { $match: { count: { "$gt": 1 } } }
  ],
  allowDiskUse: true // let $group spill to disk rather than fail at the 100MB limit
})
.result
.forEach(function(doc) {
  doc.dups.shift(); // keep the first _id of each group
  doc.dups.forEach(function(dupId) { duplicates.push(dupId); });
});
printjson(duplicates); // optional: print the list of duplicates to be removed

db.YOURCOLLECTION.remove({ _id: { $in: duplicates } });
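One caveat, as a sketch: if duplicates grows very large, the single $in query document can become unwieldy (BSON documents are capped at 16MB), so you may want to delete in slices instead; the slice size of 10,000 here is an arbitrary choice:

// Sketch: delete in slices of 10,000 _ids to keep each query document small
var batchSize = 10000;
for (var i = 0; i < duplicates.length; i += batchSize) {
  db.YOURCOLLECTION.remove({ _id: { $in: duplicates.slice(i, i + batchSize) } });
}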
steveinatorx
2

We can also use an $out stage to remove duplicates from a collection by replacing the content of the collection with only one occurrence per duplicate.

For instance, to only keep one element per value of x:

// > db.collection.find()
//     { "x" : "a", "y" : 27 }
//     { "x" : "a", "y" : 4  }
//     { "x" : "b", "y" : 12 }
db.collection.aggregate(
  { $group: { _id: "$x", onlyOne: { $first: "$$ROOT" } } },
  { $replaceWith: "$onlyOne" }, // prior to 4.2: { $replaceRoot: { newRoot: "$onlyOne" } }
  { $out: "collection" }
)
// > db.collection.find()
//     { "x" : "a", "y" : 27 }
//     { "x" : "b", "y" : 12 }

This:

  • $groups documents by the field defining what a duplicate is (here x) and accumulates grouped documents by only keeping one (the $first found) and giving it the value $$ROOT, which is the document itself. At the end of this stage, we have something like:

    { "_id" : "a", "onlyOne" : { "x" : "a", "y" : 27 } }
    { "_id" : "b", "onlyOne" : { "x" : "b", "y" : 12 } }
    
  • $replaceWith replaces all existing fields in the input document with the content of the onlyOne field we've created in the $group stage, in order to get the original format back. At the end of this stage, we have something like:

    { "x" : "a", "y" : 27 }
    { "x" : "b", "y" : 12 }
    

    $replaceWith is only available starting in Mongo 4.2. With prior versions, we can use $replaceRoot instead:

    { $replaceRoot: { newRoot: "$onlyOne" } }
    
  • $out inserts the result of the aggregation pipeline in the same collection. Note that $out conveniently replaces the content of the specified collection, making this solution possible.

Xavier Guihot
0

Maybe a good first attempt is to create a tmp collection, create a unique index on it, copy the data over from the source, and as a last step swap the names? A minimal sketch of that idea is below.
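A sketch in the mongo shell, assuming the PersonDuplicate collection and contact_id key from the question; the PersonDuplicate_tmp name is an assumption:

// Sketch: copy into a temp collection guarded by a unique index, then swap names
db.PersonDuplicate_tmp.drop();
db.PersonDuplicate_tmp.createIndex({ contact_id: 1 }, { unique: true });
db.PersonDuplicate.find().forEach(function(doc) {
  // the unique index rejects any later document with an already-seen contact_id;
  // the resulting write error is simply ignored
  db.PersonDuplicate_tmp.insert(doc);
});
// swap: drop the original and rename the deduplicated copy into its place
db.PersonDuplicate_tmp.renameCollection("PersonDuplicate", true); // dropTarget: true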

Another idea I had is to gather the duplicated keys into an array (using aggregation) and then loop through it, calling the remove() method with the justOne parameter set to true.

var itemsToDelete = db.PersonDuplicate.aggregate([
  { $group: { _id: "$contact_id", count: { $sum: 1 } } }, // count per contact_id, not per _id
  { $match: { count: { $gt: 1 } } },
  { $group: { _id: 1, ids: { $addToSet: "$_id" } } }
]);

Then loop through the ids array, as sketched below. Does this make sense for you?
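A sketch of that loop, assuming the aggregation above and the contact_id key; a key with n copies needs n - 1 justOne removals:

// Sketch: for each duplicated key, remove documents one at a time until one remains
itemsToDelete.forEach(function(result) {
  result.ids.forEach(function(key) {
    var extras = db.PersonDuplicate.count({ contact_id: key }) - 1;
    for (var i = 0; i < extras; i++) {
      db.PersonDuplicate.remove({ contact_id: key }, { justOne: true });
    }
  });
});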

profesor79
0

I have used this approach:

  1. Take a mongodump of the particular collection.
  2. Clear that collection.
  3. Add a unique index on the key.
  4. Restore the dump using mongorestore (the commands are sketched below).
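A sketch of those steps from the command line, assuming a database named mydb and the PersonDuplicate collection from the question; mongorestore reports duplicate-key errors for the extra copies but keeps going by default, which is what performs the deduplication:

# 1. dump the collection
mongodump --db mydb --collection PersonDuplicate --out ./dump
# 2. clear it and 3. add the unique index
mongo mydb --eval 'db.PersonDuplicate.drop(); db.PersonDuplicate.createIndex({contact_id: 1}, {unique: true})'
# 4. restore: documents violating the unique index are skipped
mongorestore --db mydb --collection PersonDuplicate ./dump/mydb/PersonDuplicate.bson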
Rajesh Patel