
I am getting a 503 after exactly 30 seconds while exporting all user data from a React app.

// superagent is imported here for completeness; the app helpers
// `noCache` and `i18n` are defined elsewhere in the codebase.
import superagent from 'superagent'

export const get = (
  url: string,
  queryParams: Object = {},
  extraHeaders: Object = {},
  responseType: string = 'text',
  callback?: number => void
): Promise<*> =>
  superagent
    .get(url)
    .timeout({
      response: 500000, // ms to wait for the first byte of the response
      deadline: 600000  // ms for the entire request to complete
    })
    .use(noCache)
    .set('Accept-Language', (i18n.language || 'en').split('_')[0])
    .set(extraHeaders)
    .responseType(responseType)
    .query(queryParams)
    .on('progress', e => {
      if (callback) {
        callback(e.percent)
      }
    })

Technology stack: Akka HTTP (backend), React (frontend), Nginx (Docker image). I have tried accessing the Akka API directly with a curl command; the request executed successfully in 2.1 minutes and the data was exported to a .csv file.

Curl command: curl --request GET --header "Content-Type: text/csv(UTF-8)" "http://${HOST}/engine/export/details/31a0686a-21c6-4776-a380-99f61628b074?dataset=${DATASET_ID}" > export_data.csv

NOTE: on my local env I am able to export all records from the React UI in 2.5 minutes, but this issue occurs on the TEST site, which is set up with Docker images for this application.
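Since the backend is Akka HTTP and its timeouts are mentioned later in the discussion, the relevant server-side settings live in application.conf. A hedged sketch; the directive names are Akka HTTP's documented settings, but the values here are illustrative, not taken from the questioner's actual config:

```hocon
# application.conf (illustrative values, not the questioner's actual config)
akka.http.server {
  # Time the server allows a route to produce a response (default 20 s)
  request-timeout = 5 minutes
  # Time a connection may sit idle before being closed (default 60 s);
  # must exceed request-timeout or it fires first
  idle-timeout = 10 minutes
}
```

Raising these only helps if Akka HTTP is the hop that is timing out; any proxy in front of it has its own, independent timeouts.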

Error At Browser Console:

GET http://{HOST}/engine/export/details/f4078a63-85bc-43ac-b9a9-c58f6c8193da?dataset=mexico 503 (Service Unavailable)
Uncaught (in promise) Error: Service Unavailable
    at Request.<anonymous> (vendor.js:1)
    at Request.Emitter.emit (vendor.js:1)
    at XMLHttpRequest.t.onreadystatechange (vendor.js:1)

This is happening on both the PRODUCTION and TEST sites. The only difference between local and the test site is the Docker images.

Could you please help me with this?

Thank you in advance.

  • Can you make the export operation run orders of magnitude faster; answering the HTTP response in seconds, not minutes? A 30- or 60-second HTTP timeout is pretty common, and it's very possible some intermediary (like the Nginx) will have a timeout like what you describe. – David Maze Jul 25 '21 at 11:22
  • Thanks, yes, I have increased all connection timeouts at the backend and it works fine; as I mentioned, using curl it works fine. But somehow it's creating an issue on the TEST env, with Docker images. Please suggest any other option for the same – Shaikh Abutalib Jul 25 '21 at 13:11
  • One way to build this: when you make the original HTTP POST request to initiate the request, it returns immediately with a pollable URL (maybe with an HTTP 202 status code). The client periodically attempts to HTTP GET that URL until it gets an HTTP 200 response back. All of these HTTP requests should return essentially instantly (HTTP requests that take minutes to respond will almost definitely fail in many environments). An asynchronous message queue, like RabbitMQ, could also be a good design choice. – David Maze Jul 25 '21 at 13:40
  • In my case: the first POST request hits /export/details with basic details like username and other details (like active filters) [201 Created as the HTTP code]; then, based on this, an export id is created and a GET /export/details/{export_id} is used to download the CSV file. In my other Java application I was able to download the CSV file even after a 2.5-minute-long request. Somehow I am not able to pinpoint this; nothing found in the logs – Shaikh Abutalib Jul 25 '21 at 16:24
  • And I am curious, as it returns 503 exactly after 30 seconds; I have tested this with a stopwatch :(. Hoping for some help from Stack Overflow – Shaikh Abutalib Jul 25 '21 at 16:39
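The 202-then-poll design suggested in the comments can be sketched in TypeScript. Everything here is illustrative: `pollUntilReady`, the `Fetcher` type, and the URL are invented for the example, not taken from the app.

```typescript
// Minimal response shape the poller cares about (illustrative).
type Fetcher = (url: string) => Promise<{ status: number; body?: string }>

// Repeatedly GETs `url` until the server answers 200 (export finished).
// A 202 means "still working"; anything else is treated as an error.
async function pollUntilReady(
  url: string,
  fetcher: Fetcher,
  { intervalMs = 2000, maxAttempts = 90 }: { intervalMs?: number; maxAttempts?: number } = {}
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetcher(url)
    if (res.status === 200) return res.body ?? ''
    if (res.status !== 202) throw new Error(`Unexpected status ${res.status}`)
    // Wait before the next poll; each individual request stays fast.
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
  throw new Error('Export did not finish within the polling budget')
}
```

Each individual HTTP request returns in milliseconds, so no intermediary timeout (Nginx, load balancer, browser) is ever in play, regardless of how long the export itself takes.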

1 Answer


On your local machine you have plenty of resources. On your remote host, responding with 503, you have exceeded capacity in one of four resource types:

  • CPU
  • RAM
  • DISK
  • Network

These are ordered from least expensive to most expensive. Both disk and network are typically off-bus, with network orders of magnitude slower than any other access type.

On your local machine I am guessing you have exclusive access, so locked resources that need cleanup are a non-issue. On the remote host you have arbitrated (non-exclusive) access to the environment, where your requests run concurrently with others'. It could be something as simple as running out of file handles/file descriptors to satisfy your query because the back-end hosts do not clean up orphaned connections fast enough.

If you have nailed down all of the differences between your two configurations and there are none (just local vs remote), then you are left with the resource contention caused by other users on the system.

Performance Engineering Standard Model

James Pulley
  • Thank you. This app is hosted on an AWS EC2 instance. I have monitored all these factors in CloudWatch; CPU reached only 57% max and memory utilization 37% max. The server is capable enough to serve this many requests, and my local machine has a lower configuration than TEST and PROD. Is this issue related to Docker or some other component? Kindly help me with this – Shaikh Abutalib Jul 25 '21 at 13:08
  • And as I mentioned, the only difference between my local and TEST env is that TEST runs with Docker images and Nginx, while my local runs with webpack-dev-server and Akka HTTP as the server – Shaikh Abutalib Jul 25 '21 at 16:42
  • If you suspect a difference between environments, then run the Docker image locally. My suspicion is simple oversubscription of resources on the network stack, related to a different number of users in each location – James Pulley Jul 26 '21 at 16:44
  • OK, but I have just created a dummy API, e.g. /test/export, containing one Thread.sleep(30000) line and one hello statement, and this API shows the same 503 error on the console. Somehow I am having trouble running the Docker images locally. Are there any other timeout settings in the Nginx proxy? – Shaikh Abutalib Jul 27 '21 at 07:30
  • You just locked resources for 30 seconds on your API. High-velocity request interfaces need to lock/block resources for the absolute minimal amount of time. Allocate as late as possible and deallocate/return to pool as quickly as possible. – James Pulley Jul 27 '21 at 13:57
  • Thanks James. If I run the API with a direct curl command (without routing through Nginx), then I am able to see the full export result, all user details, in a CSV file (1.5 min). Curl command: curl --request GET --header "Content-Type: text/csv(UTF-8)" "http://${HOST}/engine/user/export/details/00f8eb97-b091-4a14-8f52-9bafdbfdf505?dataset=${DATASET_ID}" > tes_data_latestSite_1.csv. Please suggest a way to fine-tune the timeout parameters in the frontend app, or another way to troubleshoot this issue. – Shaikh Abutalib Jul 27 '21 at 16:21
  • OK, so going direct gets you the result, and going through Nginx gets you a 503. My hypothesis is that the 503 originates in the Nginx component. Speak with your Nginx admin about setting the client timeout appropriately; it may be too aggressive for your long-running request. Realistically, you should probably pull this directly from the data-storage tier unless you think a customer will do this in production; you would avoid all of the overhead and get the results from a bulk-copy output of a query – James Pulley Jul 27 '21 at 17:22
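For completeness, the Nginx proxy timeouts this comment thread alludes to look roughly like the following. The directive names are real Nginx directives; the location path, upstream name, and values are illustrative assumptions, not taken from the questioner's setup:

```nginx
# Illustrative reverse-proxy block; path, upstream, and values are assumptions
location /engine/ {
    proxy_pass http://akka_backend;   # hypothetical upstream name
    proxy_connect_timeout 10s;        # TCP connect to the upstream
    proxy_send_timeout    300s;       # max gap between writes to the upstream
    proxy_read_timeout    300s;       # max gap between reads (default 60s)
}
```

One caveat: a pure Nginx read timeout usually surfaces as a 504 Gateway Time-out rather than a 503, so a 503 at exactly 30 seconds may also point to another intermediary in front of Nginx (e.g. an AWS load balancer with its own idle timeout); it is worth checking the timeout at every hop.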