Very long-running requests are error-prone in general: a single connection reset can force you to restart the whole process. Even though this won't fix the underlying problem you described with DigitalOcean, I think it's worth considering a different approach. I recommend splitting your heavy, long-running task into many small tasks and using a queue system.
NestJS provides very good documentation on queues, based on the bull package.
I've added a basic example with two solutions:
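Queue registration
Both options assume the 'shopify' queue is registered in a module. A minimal sketch, assuming a local Redis instance (adjust the connection details and import paths to your project):
shopify.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';
import { ShopifyService } from './shopify.service';
import { ShopifyConsumer } from './shopify.consumer';

@Module({
  imports: [
    // Connect bull to Redis (assumed to run locally here)
    BullModule.forRoot({
      redis: { host: 'localhost', port: 6379 },
    }),
    // Register the 'shopify' queue used by the consumer and service
    BullModule.registerQueue({ name: 'shopify' }),
  ],
  providers: [ShopifyService, ShopifyConsumer],
})
export class ShopifyModule {}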
Queue consumer
shopify.consumer.ts
import { Processor, Process } from '@nestjs/bull';
import { Job } from 'bull';
import { ShopifyService } from './shopify.service';

@Processor('shopify')
export class ShopifyConsumer {
  constructor(
    private shopifyService: ShopifyService
  ) {}

  @Process('fetch')
  async handleFetch(job: Job<{ requestKey: string }>) {
    // The payload passed to queue.add() is available on job.data
    await this.shopifyService.fetch(job.data.requestKey);
  }
}
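By default bull processes one job at a time per worker. If you want several fetches to run in parallel, you can pass a concurrency option to the decorator (the value 3 is just an example):
@Process({ name: 'fetch', concurrency: 3 })
async handleFetch(job: Job<{ requestKey: string }>) { /* ... */ }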
Option a) Generate all requests at once and let the queue process them:
shopify.service.ts
import { Injectable } from '@nestjs/common';
import { Queue } from 'bull';
import { InjectQueue } from '@nestjs/bull';

@Injectable()
export class ShopifyService {
  constructor(
    @InjectQueue('shopify') private shopifyQueue: Queue
  ) {}

  // Enqueue one small job per request key instead of one giant request
  async generateJobs(requestKeys: string[]) {
    for (const requestKey of requestKeys) {
      await this.shopifyQueue.add('fetch', {
        requestKey
      });
    }
  }

  async fetch(requestKey: string) {
    // Fetch data
    const res = await fetch('...');
  }
}
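With option a) you trigger the whole batch once, e.g. from a controller. A minimal sketch; the route and payload shape are just illustrative:
shopify.controller.ts
import { Controller, Post, Body } from '@nestjs/common';
import { ShopifyService } from './shopify.service';

@Controller('shopify')
export class ShopifyController {
  constructor(private shopifyService: ShopifyService) {}

  // Enqueues one job per request key and returns immediately,
  // so the HTTP request stays short no matter how much work is queued
  @Post('sync')
  async sync(@Body('requestKeys') requestKeys: string[]) {
    await this.shopifyService.generateJobs(requestKeys);
    return { queued: requestKeys.length };
  }
}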
Option b) Generate a new queue job after every response:
shopify.service.ts
import { Injectable } from '@nestjs/common';
import { Queue } from 'bull';
import { InjectQueue } from '@nestjs/bull';

@Injectable()
export class ShopifyService {
  constructor(
    @InjectQueue('shopify') private shopifyQueue: Queue
  ) {}

  async fetch(requestKey: string) {
    // Fetch data (response parsing is simplified here)
    const res = await fetch('...');

    // Add the next job to the queue if more chunks are available
    if (res.nextRequestKey) {
      await this.shopifyQueue.add('fetch', {
        requestKey: res.nextRequestKey
      });
    }
  }
}
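Note that with option b) you still have to enqueue the first job once to start the chain, e.g. (initialRequestKey is a placeholder):
// Kick off the chain; every following chunk is enqueued by fetch() itself
await this.shopifyQueue.add('fetch', { requestKey: initialRequestKey });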