
I created an index as shown below and added a createdAt field to each new record inserted into the db. The records should be auto-deleted after 24 hours; however, I have waited days and nothing has been deleted.

db_connect.collection("records").createIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )

Adding a record to the database:

// Add info to db to search
    console.log("Adding info to database..");
    const otherAddress = e.target.address.value;
    const newRecord = {
        holderAddress: this.props.account,
        otherAddress: otherAddress,
        date: new Date(),
        data: encryptedData
    }
    await fetch("http://localhost:420/record/add", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify(newRecord),
    }).catch(error => {
        window.alert(error);
    });

Endpoint it calls:

recordRoutes.route("/record/add").post(function (req, response) {
    let db_connect = dbo.getDb();
    let myobj = {
        holderAddress: req.body.holderAddress,
        otherAddress: req.body.otherAddress,
        data: req.body.data,
        createdAt: req.body.date
    };
    db_connect.collection("records").insertOne(myobj, function (err, res) {
        if (err) throw err;
        response.json(res);
    });
});

Below is a screenshot from the MongoDB website confirming that the index exists.


Output from: db_connect.collection("records").stats().then(r => { console.log(r)});

 createdAt_1: {
  metadata: [Object],
  creationString: 'access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none',
  type: 'file',
  uri: 'statistics:table:index-3854-9052224765860348301',
  LSM: [Object],
  'block-manager': [Object],
  btree: [Object],
  cache: [Object],
  cache_walk: [Object],
  'checkpoint-cleanup': [Object],
  compression: [Object],
  cursor: [Object],
  reconciliation: [Object],
  session: [Object],
  transaction: [Object]
}

Any help is greatly appreciated!

  • Can you view the index parameters from the GUI? Is `ttl` there? – rickhg12hs Mar 11 '22 at 10:38
  • I cant see anything more about the index via the GUI, however I have updated the question with the info I have :) – Amber Johnson Mar 11 '22 at 11:20
  • What does `db.records.getIndexes()` show? – rickhg12hs Mar 11 '22 at 14:05
  • ... and this too. `db.getCollectionInfos( { name: "records" } )`. Is this a time series or a capped collection? – rickhg12hs Mar 11 '22 at 14:32
  • [ { name: 'records', type: 'collection', options: {}, info: { readOnly: false, uuid: UUID("9c90e662-bb88-4e6a-9a4c-8556abfa1a48") }, idIndex: { v: 2, key: { _id: 1 }, name: '_id_' } } ] – Amber Johnson Mar 11 '22 at 15:30
  • [ { v: 2, key: { _id: 1 }, name: '_id_' }, { v: 2, key: { createdAt: 1 }, name: 'createdAt_1', expireAfterSeconds: 120 } ] – Amber Johnson Mar 11 '22 at 15:30
  • No it is not a capped collection – Amber Johnson Mar 11 '22 at 15:30
  • Seems the docs should expire after 2 minutes. :-/ Are the `createdAt` values in the documents what you expect? I.e., they're not set into the future somehow or not a real Date Object? – rickhg12hs Mar 11 '22 at 15:52
  • Sadly not :/ this is an example entry: createdAt: "2022-03-09T17:13:11.497Z" data: "33586b52b46c60887906f1b9c6eded6a4f34b78c3f7105f3d8846bf6039ec2e12ayIk9K0+OXJhB5SHDeqAgJ3WYPr9bExsxB7h16CT3akrc8zMHALz2+Gk1A==d1298595cbec55cf911e3ed77f45b7e8ac684f87dfa0e5d43a71952ad646dbeb" expireAfterSeconds: 120 holderAddress: "0x647DD1F1Ae4F2127A9b9FBb513e39b22a8551Db5" otherAddress: "0x647DD1F1Ae4F2127A9b9FBb513e39b22a8551Db5" _id: "6228e027a10f84480e1191d6" – Amber Johnson Mar 11 '22 at 16:26
  • Your `createdAt` isn't a string, is it? When I look at my dates in `mongosh` they look like `createdAt: ISODate("2022-03-11T20:04:42.098Z")`. – rickhg12hs Mar 11 '22 at 20:11
  • Hmm it looks like they could be as when I send my HTTP req to add to the db, the body containing the parameters including createdAt is JSON.stringify 'ed – Amber Johnson Mar 13 '22 at 12:37
  • This has fixed my issue, thank you so much! – Amber Johnson Mar 13 '22 at 13:04

1 Answer


Expiration of data requires that the indexed field value be a BSON date, or an array of BSON dates.

MongoDB Reference Manual

Expiration of Data
TTL indexes expire documents after the specified number of seconds
has passed since the indexed field value; i.e. the expiration threshold
is the indexed field value plus the specified number of seconds.

If the field is an array, and there are multiple date values in the index,
MongoDB uses lowest (i.e. earliest) date value in the array to calculate
the expiration threshold.

If the indexed field in a document is not a date or an array that holds
one or more date values, the document will not expire.

If a document does not contain the indexed field, the document will not
expire.
rickhg12hs