
We are currently working with Node.js and LokiJS. Since our application deals with real-time data, communicating with an external NoSQL/relational database would cause latency problems.

So we decided to use an in-memory database, i.e. LokiJS.

LokiJS works well when we are dealing with a collection that holds 500-1000 documents, but when it comes to frequent updates combined with parallel reads, it performs poorly.

To explain: one of our vendors publishes a Kafka endpoint; we consume the feed from it and serve it on to an external service. From the Kafka topic we receive 100-200 events per second, and on every event we update the existing collection. Because these delta updates are so frequent, the LokiJS collection updates are not applied cleanly, and parallel reads return inconsistent results.
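
For context, the consumption loop looks roughly like this (a sketch only: it assumes the kafkajs client, and the broker address, topic name, and message shape are placeholders; `update` is the function shown further down):

    const { Kafka } = require('kafkajs');

    const kafka = new Kafka({ clientId: 'feed-consumer', brokers: ['localhost:9092'] });
    const consumer = kafka.consumer({ groupId: 'realtime-feed' });

    async function consume(db, collection) {
        await consumer.connect();
        await consumer.subscribe({ topic: 'vendor-feed' });      // placeholder topic name
        await consumer.run({
            // invoked once per event, i.e. 100-200 times per second
            eachMessage: async ({ message }) => {
                const event = JSON.parse(message.value.toString());  // assumed JSON payload
                update(db, collection, event, event.id);             // delta update per event
            },
        });
    }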

Here is my collection creation snippet.

    let db = new loki('c:\\products.json', {   // backslash must be escaped in a JS string
        autoload: true,
        autosave: true,            // persist periodically in the background
        autosaveInterval: 4000     // every 4 seconds
    });
    this.collection = db.addCollection('REALTIMEFEED', {
        indices: ["id", "type"],        // binary indices for fast lookups
        adaptiveBinaryIndices: true,    // keep indices current on every insert/update
        autoupdate: true,
        clone: true                     // hand out copies rather than live references
    });


    function update(db, collection, element, id) {
        try {
            // findOne expects a Mongo-style query object, not a bare value
            var data = collection.findOne({ id: id });
            data.snapshot = Date.now();
            data.delta = element;
            collection.update(data);
            db.saveDatabase();   // explicit save on every single event
        } catch (error) {
            console.error('dbOperations:update:failed,', error);
        }
    }

Could you please suggest what, if anything, I am missing here?

Ram Kowsu

1 Answer


I think your problem lies in the fact that you are saving the database on every update. You have already specified autosave and autosaveInterval, so LokiJS will periodically save your data anyway. If you also force a save from each update, you are clogging the process: JavaScript is single-threaded, so it has to handle most of the save operation itself (it can only keep running once the operation is handed off to the OS for the actual file write).
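
Concretely, that means dropping the explicit save from the update function and letting the autosave timer write products.json on its own (a sketch based on the snippet in the question; the db parameter is no longer needed):

    function update(collection, element, id) {
        try {
            var data = collection.findOne({ id: id });
            data.snapshot = Date.now();
            data.delta = element;
            collection.update(data);
            // no db.saveDatabase() here: the autosave configured on the Loki
            // constructor (autosaveInterval: 4000) persists the file every
            // 4 seconds instead of once per event
        } catch (error) {
            console.error('dbOperations:update:failed,', error);
        }
    }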

Joe Minichino
  • Thanks for the reply; I will try this change and report the outcome. Is there anything else I should verify? Sometimes updates arrive within milliseconds, and sometimes they take 2 to 5 minutes. For periodic updates with parallel reads, what factors should I consider? Would splitting into multiple collections by product type be better? For now I am keeping all the product info in a single collection of 500 to 1000 records. – Ram Kowsu Jul 31 '19 at 17:26
  • So, if I don't call update/insert, will an entry still be made in the DB? Which is the better way to handle this? In my case the updates are huge, and I have been searching everywhere on Google, so please help me with this. – Ram Kowsu Jul 31 '19 at 17:36
  • @Creator you need to call update; you just don't need to call saveDatabase, so that you are not clogging the process. At least if I understand the case correctly. – Joe Minichino Aug 01 '19 at 19:58
  • If I don't call saveDatabase and just call update(), it will save at the autosaveInterval we defined on the Loki database, right? Meaning it will persist to products.json? – Ram Kowsu Aug 02 '19 at 06:16