   // 1 fixed thread

   implicit val waitingCtx = scala.concurrent.ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

    // "map" will use waitingCtx

    val ss = (1 to 1000).map { n => // if I change it to 10 000, the program stops at some point, as if locked forever
      service1.doServiceStuff(s"service ${n}").map{s =>
        service1.doServiceStuff(s"service2 ${n}")
      }
    }

Each doServiceStuff(name: String) call takes 5 seconds. doServiceStuff does not take an implicit ec: ExecutionContext parameter; it uses its own execution context internally and does Future { blocking { .. } } on it.
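For context, the service presumably looks roughly like this (a hypothetical sketch based on the description above; the real code is in the linked repo, and the object/field names here are my own):

```scala
import java.util.concurrent.Executors

import scala.concurrent.{ExecutionContext, Future, blocking}

object Service1 {
  // hypothetical: the service's own internal context, as described above
  private val serviceCtx: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

  def doServiceStuff(name: String): Future[String] = Future {
    blocking {
      Thread.sleep(5000) // each call takes 5 seconds
      println(name)
      name
    }
  }(serviceCtx) // runs on the service's context, not the caller's
}
```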

In the end program prints:

took: 5.775849753 seconds for 1000 x 2 stuffs

If I change 1000 to 10000, adding even more tasks: val ss = (1 to 10000), then the program stops:

~17 027 lines will be printed (out of 20 000). No "ERROR" message is printed, and no "took" message is printed,

and it will not process any further.

But if I change the context to ExecutionContext.fromExecutor(null: Executor) (the global one), then it ends in about 10 seconds (though not normally):

~17249 lines printed
ERROR: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
took: 10.646309398 seconds

That's the question: why does it stop without any message with the fixed execution-context pool, but terminate with an error and a message with the global context?

And sometimes it is not reproducible.

UPDATE: I do see "ERROR" and "took" if I increase the pool size from 1 to N. It does not matter how high N is - it will still end with the ERROR.

The code is here: https://github.com/Sergey80/scala-samples/tree/master/src/main/scala/concurrency/apptmpl

and here, in doManagerStuff2().

ses

1 Answer


I think I have an idea of what's going on. If you squint enough, you'll see that map's job is extremely lightweight: it just fires off a new future (because doServiceStuff returns a Future). I bet the behavior will change if you switch to flatMap, which will actually flatten the nested future and thus wait for the second doServiceStuff call to complete.

Since you're not flattening these futures, all your awaits downstream are awaiting on the wrong thing, and you are not catching it because you're discarding whatever the service returns.
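A minimal sketch of the suggested fix (assuming doServiceStuff: String => Future[String], passed in here as a parameter for illustration): with flatMap, the second call becomes part of the chain, so the resulting future only completes when both calls have finished:

```scala
import scala.concurrent.{ExecutionContext, Future}

// `doServiceStuff` is a stand-in parameter for the real service call
def doBoth(n: Int, doServiceStuff: String => Future[String])
          (implicit ec: ExecutionContext): Future[String] =
  doServiceStuff(s"service ${n}").flatMap { _ =>
    doServiceStuff(s"service2 ${n}") // now awaited as part of the chain
  }
```

With map, the outer future completes as soon as the inner one is merely created; any downstream Await.result then measures the wrong thing.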


Update

Ok, I misinterpreted your question, although I still think that the nested Future is a bug.

When I try your code with both executors and 10000 tasks, I do get an OutOfMemoryError when creating threads in the ForkJoin execution context (i.e. for the service tasks), which I'd expect. Did you use any specific memory settings?

With 1000 tasks they both complete successfully.
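For what it's worth, the two contexts treat blocking { .. } very differently, which may be part of the difference in behavior: the global fork-join pool spawns replacement threads for blocking sections (up to a limit), while a fixed pool cannot grow. A small demo of the global pool's behavior (my own sketch, unrelated to the linked code):

```scala
import scala.concurrent.duration._
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

// Time n one-second sleeps wrapped in `blocking` on the global context.
// The fork-join pool spawns replacement threads for blocking sections,
// so the sleeps overlap even beyond the number of CPU cores.
def timeBlockingSleeps(n: Int): Double = {
  val t0 = System.nanoTime()
  val fs = (1 to n).map(_ => Future(blocking(Thread.sleep(1000))))
  Await.result(Future.sequence(fs), 1.minute)
  (System.nanoTime() - t0) / 1e9
}
```

On a typical machine timeBlockingSleeps(16) finishes in roughly one second rather than 16 / cores seconds; with enough such replacement threads being created at once, running out of memory is plausible.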

Tim
  • and why does it stop execution with one context but work ok with another? (I simplified my question) – ses Sep 14 '15 at 00:56
  • here I put an example where one may wait "on the wrong thing": https://github.com/Sergey80/scala-samples/blob/master/src/main/scala/concurrency/antipatterns/OneBasketExContext.scala but in my question's example I wait on a separate thing - on ex1 for waiting, another for doing. – ses Sep 14 '15 at 01:00