
I could be asking a design pattern question here.

On Android, I am using a thread pool to open 8 threads to download some files.

    // Assumes someList, context, and DownloadJsonTask (a Runnable) are defined elsewhere.
    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (int i = 0; i < someList.size(); i++) {
        pool.submit(new DownloadJsonTask(someList.get(i), context));
    }
    pool.shutdown();
    try {
        pool.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // don't silently swallow the interrupt
    }

I noticed that if I use one thread to download the files one by one, downloads hardly ever fail, but if I use 8 threads, I sometimes get failures. I am not a server/network person, so I don't know the details, but I am guessing the server limits the number of simultaneous connections from one device (or one IP address).

If this is the reason, how do I design the code to overcome this issue? I have already implemented retrying up to 3 times before failing, and it does seem to have fixed it "for now". However, I know my code is not robust and it can still fail at some point.
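For context, a minimal sketch of that "retry up to 3 times" approach might look like the following. The `downloadFile` method here is a hypothetical stand-in for the real download code (it simulates two transient failures followed by a success), and the attempt count is hard-coded only for illustration:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {
    static final int MAX_ATTEMPTS = 3;
    static final AtomicInteger calls = new AtomicInteger();

    // Stand-in for the real download; fails twice, then succeeds,
    // to show the retry loop recovering from transient errors.
    static void downloadFile(String url) throws IOException {
        if (calls.incrementAndGet() < 3) {
            throw new IOException("simulated transient failure");
        }
    }

    static boolean downloadWithRetry(String url) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                downloadFile(url);
                return true; // succeeded on this attempt
            } catch (IOException e) {
                // fall through and retry until attempts are exhausted
            }
        }
        return false; // all attempts failed
    }

    public static void main(String[] args) {
        boolean ok = downloadWithRetry("http://example.com/data.json");
        System.out.println("success=" + ok + " attempts=" + calls.get());
    }
}
```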

I figured I wouldn't be the first one facing this issue, so I would like to know a robust solution for it.

Solutions I could think of:
- Try downloading at least 3 times before failing.
- Once a download fails, sleep for a random amount of time, so that failed threads don't all wake up at the same time and fail again.
- If the server sends back a distinctive message such as "Server busy", retry an unlimited(?) (large) number of times.

I have not yet implemented the solutions above. I want to know the common/best approach first and then spend time implementing it.

Any ideas?

jclova

1 Answer


This question is somewhat opinion-based, so I'm sharing my views on it.

Ideally, if you can check the server logs and find that something on the server end should be fixed, you should definitely do that first.

In addition, there can always be network failures even when the client and server handle multithreading and concurrency correctly. Therefore, you should have a retry mechanism on the client side.

Some design points regarding the retry policy:

  1. Keep the number of retry attempts configurable instead of hard-coding it to 3. (You can arrive at the right number through profiling and testing.)
  2. Instead of sleeping for a random amount of time, use exponential backoff.
  3. Instead of retrying an unlimited number of times, show the user a message asking them to try again later once your own exponential-backoff retries are exhausted (something like "Site under load. Please try again after some time.").
  4. Check out libraries that do the retries for you. Something like this.
  5. Network bandwidth might also be a reason for failed downloads. You can monitor the network speed and type (Wi-Fi, LTE, 3G, etc.) and decide whether to download now or schedule the download for later.
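Points 1–3 above can be sketched roughly as follows. This is a simplified illustration, not a production implementation: `download` is a hypothetical stand-in that simulates a busy server for the first two calls, and the attempt count and base delay are made-up configurable values:

```java
import java.io.IOException;
import java.util.Random;

public class BackoffDemo {
    // Point 1: keep the policy configurable rather than hard-coding 3.
    static final int MAX_ATTEMPTS = 5;
    static final long BASE_DELAY_MS = 50;
    static final Random random = new Random();

    static int calls = 0;

    // Simulated download: the "server" is busy for the first two calls.
    static void download(String url) throws IOException {
        if (++calls < 3) throw new IOException("503 server busy");
    }

    // Point 2: exponential backoff with jitter. The delay doubles on each
    // attempt, plus a random component so failed threads don't all retry
    // at the same moment.
    static boolean downloadWithBackoff(String url) throws InterruptedException {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            try {
                download(url);
                return true;
            } catch (IOException e) {
                long delay = (BASE_DELAY_MS << attempt) + random.nextInt(50);
                Thread.sleep(delay);
            }
        }
        // Point 3: out of attempts — surface a "try again later" message
        // to the user instead of retrying forever.
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean ok = downloadWithBackoff("http://example.com/data.json");
        System.out.println("success=" + ok + " attempts=" + calls);
    }
}
```

The jitter term is what prevents the "thundering herd" the question describes: without it, all 8 threads that failed together would retry together and likely fail together again.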

Another article here.

Hope this helps.

Pravin Sonawane