2

I'm developing an iOS app that uses an API. The API has a text search, and we've implemented a debouncer to ensure that we don't perform a search every time the user types a character in the search field. We wait 0.2 seconds after each character to see whether another character is coming. If the user writes "Washingto" and takes 0.205 seconds to type the final "n", we first initiate a search for the incomplete word, and then cancel it 0.005 seconds later. We cancel requests using the standard client API (e.g. `URLSessionTask.cancel()` on iOS).
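For context, the pattern described above looks roughly like this. This is a minimal sketch, not our actual code — the class and closure names are illustrative, and the search callback is injected so the flow is visible:

```swift
import Foundation

// Sketch of the debounce-and-cancel flow described above.
// `SearchDebouncer` and `startSearch` are illustrative names, not from our app.
final class SearchDebouncer {
    private var pendingWork: DispatchWorkItem?
    private var inFlightTask: URLSessionTask?
    private let queue = DispatchQueue(label: "search.debounce")
    private let delay: TimeInterval
    private let startSearch: (String) -> URLSessionTask?

    init(delay: TimeInterval = 0.2,
         startSearch: @escaping (String) -> URLSessionTask?) {
        self.delay = delay
        self.startSearch = startSearch
    }

    func textDidChange(to query: String) {
        // Restart the 0.2 s window on every keystroke.
        pendingWork?.cancel()
        let work = DispatchWorkItem { [weak self] in
            guard let self = self else { return }
            // If a keystroke arrives after the window closed, the previous
            // request is already on the wire — this cancel() is what the
            // gateway logs as status code 499.
            self.inFlightTask?.cancel()
            self.inFlightTask = self.startSearch(query)
        }
        pendingWork = work
        queue.asyncAfter(deadline: .now() + delay, execute: work)
    }
}
```

With a 0.2 s delay, typing "Washingto" then "n" within the window means only one search fires; typing the "n" after 0.205 s means the first request is started and then cancelled mid-flight.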

This has been working perfectly for a year. But today we received a warning from our (external) API provider, saying that we need to stop cancelling our own queries like this. Their reason is that their gateway, Apigee, doesn't handle it very well, and they end up with endless logs full of status code 499.

As far as I can tell, they're implying that we have to keep the request open and receive the full response, meaning more data usage (mobile), more processing (parsing), and thus more battery usage.

Is this "problem" really only solvable this way? I thought we were following best practice on the client side, but they claim that Apigee says clients shouldn't close connections like this.

Also, I'm not sure which tags to use or where to ask this.

Sti
  • 8,275
  • 9
  • 62
  • 124
  • You are using a good practice. Who exactly told you that it is bad? – Lu_ Aug 06 '19 at 09:17
  • The operators (owners) of the API we're using. They stated literally (translated): *"The background for this is primarily that our gateway product - Apigee - doesn't handle this very well. They also claim that it isn't a good-practice client implementation. That may be discussed, and I see your reasoning for cancelling unusable requests, but Apigee is leading on this subject, so it can't be 'standard practice' to do so."* And I'm struggling to find any documentation from Apigee/Google that says anything about cancelling requests. – Sti Aug 06 '19 at 09:22
  • you should probably find some Apigee forum or ask on their site what are their recommendations. In the end you can have ID for requests and don't parse canceled ones – Lu_ Aug 06 '19 at 09:26
  • Don't bother with cancelling the existing request; just let it complete and replace the results with the second set of results results when they come through. – Paulw11 Aug 06 '19 at 12:13
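To illustrate the approach the last two comments suggest (let every request complete, but only act on the newest one), here is a hedged sketch. The type and method names are hypothetical; the idea is to tag each request with a monotonically increasing generation number and drop any response that is no longer the latest:

```swift
import Foundation

// Sketch of the "don't cancel, just ignore stale responses" approach
// suggested in the comments above. `SearchResultGate` is an illustrative name.
final class SearchResultGate {
    private var latestGeneration = 0

    // Call when firing a new search; capture the returned token
    // alongside the request.
    func nextGeneration() -> Int {
        latestGeneration += 1
        return latestGeneration
    }

    // Call from the completion handler; true only for the newest request.
    // Stale responses complete normally on the wire (no 499 at the
    // gateway) but are skipped before any parsing happens.
    func shouldDisplay(generation: Int) -> Bool {
        return generation == latestGeneration
    }
}
```

In a `URLSession` completion handler you would capture the token and return early when `shouldDisplay` is false, so the connection closes cleanly while the app still avoids most of the parsing (though not the download) cost.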

1 Answer

-2

Only some (partial) observations:

1) Executing queries locally uses LESS power, as you do not have to raise the antenna or power up Wi-Fi/4G or similar.

2) You are sending millions of HTTPS requests to the server within a few seconds (assuming you have 100K users typing 4 or more characters...), so the server is also under pressure.

3) The user experience is poor when waiting more than a few ms.

I understand your database is remote, but a far better experience would come from using a local (reduced) SQLite database (we downloaded ours zipped in the background).

ingconti
  • 10,876
  • 3
  • 61
  • 48
  • It's dynamic data, so it's not possible to have it locally. And I'm not sure you understood the question. We have 800K users, but the servers aren't under pressure at all. Their problem is that we are cancelling the requests, not that there are too many requests. They want us to stop cancelling the requests, sit tight and wait for the response, and parse it and discard it, instead of actively telling the server that we don't want the response. – Sti Aug 06 '19 at 09:00
  • I do understand. A real problem. – ingconti Aug 06 '19 at 10:19