I'm currently trying to develop an application that relies heavily on Heroku workers (running a Node.js script; switching to Ruby/Rails is not an option here) to handle long-running (1 to 168 hour) background jobs. My issue is that some jobs may finish in 1 hour while others may take 168, and I don't want to wait for all of my workers to finish before scaling down, since Heroku will charge me for that idle time on each worker.
I have no issue with the dynos restarting once a day, but I'd like to know whether it's possible (and if so, how) to scale down a specific Heroku worker through the Heroku API or by any other means (perhaps from within the worker process itself, though terminating the process from within only seems to cause the worker to restart, not to scale down).
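For context, what I have in mind is roughly the sketch below: calling the Heroku Platform API's formation endpoint from inside the worker once its job completes. The app name and API token environment variables are my own placeholders, and as far as I can tell this only reduces the dyno count for the process type, it doesn't let me pick *which* dyno gets stopped:

```js
// Sketch only: reduce the worker formation quantity via the Heroku Platform API.
// HEROKU_APP_NAME and HEROKU_API_TOKEN are placeholders I'd set as config vars.
const APP_NAME = process.env.HEROKU_APP_NAME;
const API_TOKEN = process.env.HEROKU_API_TOKEN;

async function scaleWorkersDown(targetQuantity) {
  // PATCH /apps/{app}/formation/{type} changes how many dynos of that type run,
  // but it doesn't appear to target a specific dyno.
  // (Uses the global fetch available in Node 18+.)
  const res = await fetch(`https://api.heroku.com/apps/${APP_NAME}/formation/worker`, {
    method: 'PATCH',
    headers: {
      'Accept': 'application/vnd.heroku+json; version=3',
      'Authorization': `Bearer ${API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ quantity: targetQuantity }),
  });
  if (!res.ok) {
    throw new Error(`Formation update failed: ${res.status} ${await res.text()}`);
  }
  return res.json();
}
```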
If that isn't possible, I'd like to know how to capture the "scaling down event" instead (i.e. is some signal, like a SIGTERM or SIGKILL, sent to whichever worker is about to be scaled down?).
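To clarify what I mean by capturing the event, this is roughly the handler I'd expect to write in the worker, assuming the dyno does receive a catchable signal (e.g. SIGTERM) and a short grace period before being force-killed (SIGKILL itself can't be caught). The `saveJobCheckpoint` helper is a placeholder for whatever persistence my jobs actually use:

```js
// Sketch of a shutdown handler, assuming the dyno gets SIGTERM before shutdown.
let shuttingDown = false; // the long-running job loop could poll this flag

process.on('SIGTERM', async () => {
  shuttingDown = true;
  console.log('Received SIGTERM, checkpointing current job before exit...');
  try {
    // Hypothetical helper: persist progress so another worker can resume the job.
    await saveJobCheckpoint();
  } finally {
    process.exit(0);
  }
});

async function saveJobCheckpoint() {
  // Placeholder: write progress to whatever store the job actually uses.
}
```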
Any advice at all is appreciated.