
I am writing a Go application that schedules timeouts in memory (using time.Timer). If the application crashes or restarts, it can reload the timeouts from DB records, but only starting from the time of the restart, meaning that if one of the timeouts should have fired between the crash and the moment the application is back up, it will be missed.

Ideally, all the timeouts that should have fired during the downtime should still fire (with a delay, but that's better than being missed). My idea is to have the application write the current timestamp every second into a file (or a SQLite DB) while it's running. When the app restarts, it can look at the latest written timestamp, immediately fire all the timeouts that fell between that timestamp and now, and schedule the others for the future.
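Roughly what I have in mind, as a sketch (the file name and helper names are placeholders, not real code from my app):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// writeHeartbeat overwrites the file with the current Unix timestamp once per second
// until stop is closed. The file name and location are arbitrary.
func writeHeartbeat(path string, stop <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case now := <-ticker.C:
			_ = os.WriteFile(path, []byte(strconv.FormatInt(now.Unix(), 10)), 0o644)
		}
	}
}

// lastSeen reads the timestamp left behind by the previous run (zero time if none).
func lastSeen(path string) time.Time {
	b, err := os.ReadFile(path)
	if err != nil {
		return time.Time{}
	}
	secs, err := strconv.ParseInt(string(b), 10, 64)
	if err != nil {
		return time.Time{}
	}
	return time.Unix(secs, 0)
}

func main() {
	const path = "heartbeat" // placeholder location

	// On restart: anything whose deadline fell between the last heartbeat and now
	// is overdue and would be fired immediately; the rest get a time.Timer.
	fmt.Println("last heartbeat before restart:", lastSeen(path))

	stop := make(chan struct{})
	go writeHeartbeat(path, stop)
	time.Sleep(3 * time.Second) // stand-in for the application's real work
	close(stop)
}
```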

Does this approach make sense, and does it have pitfalls? Does this pattern have a name?

Absurdev
  • It is unclear what is happening. Apparently your (unnamed) Go app receives timestamped "start" events and "completed" events, and is responsible for sending "terminate" events at some future time if a completion event is not received. Does a restart mean the app, or the host OS, restarts? You are recording timestamps to some storage, but it's unclear whether the storage (e.g. RAM) survives these "restart" events. – J_H Jan 16 '23 at 01:59

1 Answer


You are describing a distributed-computing setup, but it is racy.

When some cooperating partner sends you a "start" event, they probably ought to await your "ack" before embarking on anything adventurous. Else they won't know whether you (A.) heard and (B.) recorded the event. That is to say, lack of acknowledgement invites races and lost events when hosts might randomly reboot.

Ideally that partner would persist such events to stable storage on their own, before beginning an expensive operation.

Given that there are apparently no ACKs, it sounds like your app needs to persist each event as soon as feasible, either across the LAN to a redundant host or to a filesystem. A simple approach (sketched in Go after the list) is to

  • receive the partner's message
  • write() a line, appending to a text file
  • fsync() to flush it from memory to disk / NVRAM / SSD
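In Go, that append-and-fsync step could look roughly like this (the log-line format is just an assumption for illustration):

```go
package main

import "os"

// appendEvent appends one log line and fsyncs so the record survives a crash.
// The line format ("start <id> <unix-deadline>" etc.) is only an example.
func appendEvent(path, line string) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.WriteString(line + "\n"); err != nil {
		return err
	}
	return f.Sync() // fsync(2): force the record out of the page cache onto stable storage
}

func main() {
	// Example: record a "start" event the moment the partner's message arrives.
	_ = appendEvent("events.log", "start task-42 1673839140")
}
```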

When you receive "completed" events, and when you execute "terminate" commands, log those as well. No need to fsync at once. Presumably other events arrive with some frequency, and they will flush all pending log records out to disk before long.

Upon restarting, just seek to near the end of the file and replay all logged events, setting up a bunch of timeout counters, and canceling them when the log reveals that they already finished. Some timeouts may fire immediately after we finish reading the log, because they are stale. Presumably it is harmless to issue a terminate(task_id) command for a task that already did a normal exit.
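A sketch of that replay step, assuming hypothetical line-oriented records such as `start <id> <unix-deadline>` and `completed <id>` (adapt it to whatever you actually persist):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// replay rebuilds the in-memory timers from the event log on startup.
// Assumed log lines: "start <id> <unix-deadline>", "completed <id>", "terminate <id>".
func replay(path string, terminate func(id string)) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	deadlines := map[string]time.Time{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		switch {
		case len(fields) >= 3 && fields[0] == "start":
			if secs, err := strconv.ParseInt(fields[2], 10, 64); err == nil {
				deadlines[fields[1]] = time.Unix(secs, 0)
			}
		case len(fields) >= 2 && (fields[0] == "completed" || fields[0] == "terminate"):
			delete(deadlines, fields[1]) // task already finished: cancel its timeout
		}
	}
	if err := sc.Err(); err != nil {
		return err
	}

	// A deadline that passed during the downtime yields a non-positive duration,
	// so time.AfterFunc fires it almost immediately (the "stale" case described above).
	for id, deadline := range deadlines {
		id := id
		time.AfterFunc(time.Until(deadline), func() { terminate(id) })
	}
	return nil
}

func main() {
	_ = replay("events.log", func(id string) { fmt.Println("terminate", id) })
	select {} // keep the process alive so pending timers can fire
}
```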


An alternate strategy, which does not depend so heavily on accurate logging, is to query the status of all currently running jobs when you come up. Set a conservative timeout in the somewhat distant future, and hope you stay up long enough to see such a time arrive.

Or use extra information, such as each task's size and start_time, to pick more sensible timeout values.
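For instance, a made-up heuristic along those lines (the per-megabyte budget and the slack are purely illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// timeoutFor budgets some time per megabyte of work, adds fixed slack, and
// measures from the task's original start_time so a restart does not reset
// the clock. All of the numbers here are illustrative only.
func timeoutFor(sizeBytes int64, startTime time.Time) time.Duration {
	budget := time.Duration(sizeBytes/1_000_000)*time.Second + 5*time.Minute
	if remaining := budget - time.Since(startTime); remaining > 0 {
		return remaining
	}
	return 0 // already overdue: fire the timeout right away
}

func main() {
	fmt.Println(timeoutFor(250_000_000, time.Now().Add(-2*time.Minute)))
}
```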


Consider using Kafka, Redis, or a similar distributed message broker to coordinate your cluster's actions, rather than relying on a filesystem or an RDBMS. There are low-latency solutions available which do a good job of balancing Consistency, Availability, and Partition tolerance.

J_H