
I have a small development web server that I use to write missing translations into files.

const express = require('express')
// I'm using fs.promises
const fs = require('fs').promises

const app = express()
app.use(express.json()) // parses JSON bodies into req.body

app.post('/locales/add/:language/:namespace', async (req, res) => {
  const { language, namespace } = req.params
  const file = `./locales/${language}/${namespace}.json`
  const current = JSON.parse(await fs.readFile(file, 'utf8'))

  // merge the incoming keys in, keeping whatever is already in the file
  const newData = JSON.stringify({ ...req.body, ...current }, null, 2)
  await fs.writeFile(file, newData)
  res.end()
})

As you might expect, when my i18n library makes multiple writes to one file like this:

fetch('/locales/add/en/index', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: `{"hello":"hello"}` })
fetch('/locales/add/en/index', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: `{"bye":"bye"}` })

the file gets overwritten and only the result of the last request is saved: both requests read the same original contents, each merges its own key in, and whichever write lands last wins. I can't just append to the file, because it's JSON. How can I fix this?

Alex Chashin
  • You will have to use some sort of concurrency control to keep two concurrent requests that are both trying to write to the same resources from interfering with each other. – jfriend00 Aug 27 '19 at 04:17
  • @jfriend00, but how could I implement it? I tried creating an object that contains the data from all the files; on every request, instead of writing the file directly, I first added the data to that object and then wrote the object's contents out. But for some reason that didn't work either. And yes, I'm using fs.promises, as I wrote in a comment – Alex Chashin Aug 27 '19 at 04:21
  • If you have lots of different files that you may be writing to, and perhaps multiple servers writing to them, then you pretty much have to use some sort of file locking, either OS-supplied or manual with lock files, and have subsequent requests wait for the file lock to be cleared. If you have only one server writing to the file and a manageable number of files, then you can create a file queue that keeps track of the order of requests and whether the file is busy, and returns a promise when it's time for a particular request to do its writing. – jfriend00 Aug 27 '19 at 04:25
  • FYI, this is what databases are good at - managing concurrency. – jfriend00 Aug 27 '19 at 04:25
  • You also might consider a different file format that you can directly append to (such as CSV), though you would still need concurrency control. – jfriend00 Aug 27 '19 at 04:27
  • @jfriend00, I thought about using a database, but in my particular case I have many reasons not to; it would add more headache than it removes :) I also thought about implementing such a queue, but for some reason I didn't. I'll try now – Alex Chashin Aug 27 '19 at 04:28
  • I have no experience with this package, but it sounds like something that might be useful: https://www.npmjs.com/package/lockfile. I don't know if it will guarantee proper ordering of multiple requests, but it will guarantee one-at-a-time access. – jfriend00 Aug 27 '19 at 04:29
  • Also, this one: https://www.npmjs.com/package/proper-lockfile – jfriend00 Aug 27 '19 at 04:32

1 Answer


You will have to use some sort of concurrency control to keep two concurrent requests that are both trying to write to the same resources from interfering with each other.

If you have lots of different files that you may be writing to, and perhaps multiple servers writing to them, then you pretty much have to use some sort of file locking, either OS-supplied or manual with lock files, and have subsequent requests wait for the lock to be cleared. If you have only one server writing to the files and a manageable number of files, then you can create a per-file queue that keeps track of the order of requests and whether the file is busy, and returns a promise that resolves when it's a particular request's turn to do its writing.
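Here's a rough sketch of that single-server queue idea, chaining each write for a given file onto the previous one. The names are made up for illustration; this isn't from any package and isn't a tested implementation:

const fs = require('fs').promises

// One promise "tail" per file path; each new write waits for the previous one
const tails = new Map()

function queueWrite(path, update) {
  const prev = tails.get(path) || Promise.resolve()
  const next = prev.then(async () => {
    const current = JSON.parse(await fs.readFile(path, 'utf8'))
    await fs.writeFile(path, JSON.stringify(update(current), null, 2))
  })
  tails.set(path, next.catch(() => {})) // keep the chain usable after a failed write
  return next
}

In the question's handler you'd then await queueWrite(file, current => ({ ...req.body, ...current })) instead of calling readFile/writeFile directly, so each request's read-modify-write runs only after the previous one has finished.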

Concurrency control is also exactly what databases are particularly good at.

I have no experience with either of these packages, but both implement the general idea:

https://www.npmjs.com/package/lockfile

https://www.npmjs.com/package/proper-lockfile

These will guarantee one-at-a-time access. I don't know whether they also guarantee that multiple requests are granted access in the precise order they attempted to acquire the lock; if you need that, you might have to add it on top with some sort of queue.
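With proper-lockfile, the shape is roughly the following, based on its documented lock/release API (double-check the package docs; the helper name and the merge logic are mine, lifted from the question):

const lockfile = require('proper-lockfile')
const fs = require('fs').promises

// Hold an exclusive lock across the whole read-modify-write
async function writeWithLock(path, patch) {
  const release = await lockfile.lock(path, { retries: 5 }) // retry while someone else holds the lock
  try {
    const current = JSON.parse(await fs.readFile(path, 'utf8'))
    await fs.writeFile(path, JSON.stringify({ ...patch, ...current }, null, 2))
  } finally {
    await release() // always give the lock back, even if the write throws
  }
}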

Some discussion of this topic here: How can I lock a file while writing to it asynchronously

jfriend00
  • I have few files and a single development server, so I could make a simple queue to overcome this. Thanks for the help, anyway! – Alex Chashin Aug 27 '19 at 04:46
  • @AlexChashin - Yeah, I've created a queue class that I instantiate for each file I want exclusive write access to. The write methods acquire a local process lock (by just setting a flag) and then do their thing, releasing the lock when done. Any write that gets called while the lock is active puts its data in the queue, and the code that releases the lock writes the data in the queue (see the sketch below). I log whenever a concurrency conflict is detected (and the queue gets used), and it happens several times a day in my little home automation server, so I know the concept works and is needed. – jfriend00 Aug 27 '19 at 04:50
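A stripped-down sketch of the pattern described in that last comment (the names are illustrative; data that arrives during a write is merged and written once the current write finishes):

const fs = require('fs').promises

// One instance per file: `busy` is the in-process lock flag, and data that
// arrives while a write is in flight is merged into `pending` (the queue)
class FileQueue {
  constructor(path) {
    this.path = path
    this.busy = false
    this.pending = null
  }
  async write(data) {
    if (this.busy) {
      this.pending = { ...this.pending, ...data } // lock is held: queue the data
      return
    }
    this.busy = true
    try {
      await fs.writeFile(this.path, JSON.stringify(data, null, 2))
    } finally {
      this.busy = false
      if (this.pending) {
        const next = this.pending
        this.pending = null
        await this.write(next) // releasing the lock writes whatever queued up
      }
    }
  }
}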