
I'm kind of confused as to what responsibility Akka takes on when creating an actor system. I want to have a simple application with a parent and two child actors, where each child resides in a different process (and therefore on a different node). Now I know I can use a router with remote config, or just start a remote actor, but (and correct me if I'm wrong) when creating this remote actor, Akka expects that the process already exists and that a node is already running in that process; it only deploys the child actor to that node. Isn't there any way of making Akka do the spawning for us?

This is the code that isn't working because I haven't created the process myself:

application.conf:

akka {
  remote.netty.tcp.port = 2552
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
}

child {
  akka {
    remote.netty.tcp.port = 2550
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
  }
}

Parent.scala:

import akka.actor.{Actor, ActorLogging, ActorSystem, Address, Deploy, Props}
import akka.remote.RemoteScope

object Parent extends App {
  val system = ActorSystem("mySys")
  system.actorOf(Props[Parent], "parent")
}

class Parent extends Actor with ActorLogging {

  override def preStart(): Unit = {
    super.preStart()
    val address = Address("akka.tcp", "mySys", "127.0.0.1", 2550)
    context.actorOf(Props[Child].withDeploy(Deploy(scope = RemoteScope(address))), "child")
  }

  override def receive: Receive = {
    case x => log.info(s"Got msg $x")
  }
}

and Child.scala:

import akka.actor.{Actor, ActorLogging}

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // Ignore
  }
}

But if I run this main in Child.scala right after running the main in Parent.scala:

import akka.actor.{Actor, ActorLogging, ActorSystem}
import com.typesafe.config.ConfigFactory

object Child extends App {
  ActorSystem("mySys", ConfigFactory.load().getConfig("child"))
}

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // Ignore
  }
}

Then the node will connect.

If there isn't any way of doing that, then how can Akka restart that process/node when the process crashes?

user_s

2 Answers


You are responsible for creating, monitoring, and restarting actor systems (and the processes that host them). Akka is only responsible for the actors within those actor systems.
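To make that concrete: the parent process would have to start the child JVM itself before it can remote-deploy anything onto it. Here is a rough sketch of that idea using only the standard library's `scala.sys.process`; the main class name `Child`, the classpath, and the system property are assumptions for illustration, not something Akka provides:

```scala
import scala.sys.process._

object ChildLauncher {
  // Build the command line for a child JVM that will host the remote
  // actor system; classpath, main class, and port are assumptions here.
  def childCommand(classpath: String, port: Int): Seq[String] =
    Seq(
      "java",
      "-cp", classpath,
      s"-Dakka.remote.netty.tcp.port=$port",
      "Child" // hypothetical main class that starts ActorSystem("mySys")
    )

  // Spawn the child JVM. The caller is responsible for monitoring the
  // returned Process and restarting it if it dies -- Akka will not.
  def launch(classpath: String, port: Int): Process =
    Process(childCommand(classpath, port)).run()
}
```

Only once a process started this way is up and its actor system is bound to the expected port can the parent's `RemoteScope` deployment succeed.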

Ryan
  • Do you have a reference in the documentation for that? I haven't seen it mentioned anywhere – user_s Jan 13 '17 at 09:03
  • It's implicit in the docs. Akka doesn't handle spawning or restarting JVMs. It's out of scope. – Ryan Jan 13 '17 at 16:37
  • So let's say I want to create child actors on different JVMs. If I have to spawn the process myself (and that process runs `ActorSystem("mySys")` with `akka.remote.netty.tcp.port = 0`), then how will the parent know which port that process runs on, so that it can deploy the child actor to it? It seems odd that Akka, which has a basic manager–workers pattern, doesn't handle the case of remote workers (where the number of workers is dynamic, so the processes cannot be started in advance) – user_s Jan 14 '17 at 19:32
  • Akka doesn't provide service discovery, that's up to you. Zookeeper, etcd, Consul. Lots of choices. Akka is agnostic as to how it's deployed (which makes sense). – Ryan Jan 14 '17 at 20:32
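To illustrate the `port = 0` case discussed in these comments (this config sketch is my addition, in the same style as the question's application.conf): binding to port 0 lets the OS pick any free port, which is precisely why the parent cannot know the worker's address in advance and some discovery or registration step is needed.

```
worker {
  akka {
    # 0 means "let the OS pick any free port". The parent cannot know
    # the chosen port in advance, which is why external discovery
    # (ZooKeeper, etcd, Consul, ...) or an explicit registration
    # message from the worker to the parent is needed.
    remote.netty.tcp.port = 0
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
  }
}
```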

This is not only impossible with Akka; in general, no process can just spawn a new process on a different machine. Think about the security implications if that were possible! You always need some existing process on the target machine that spawns the new process for you, such as sshd or some resource/cluster manager.

So passwordless SSH plus a shell script is what is typically done to start worker processes, e.g., by Hadoop, Spark, and Flink (the latter two using Akka under the hood, by the way).
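As a hedged illustration of that pattern (the user, host, classpath, and main class here are all made up), the launching process can shell out over passwordless SSH to start a worker JVM on the target machine, again using only `scala.sys.process`:

```scala
import scala.sys.process._

object SshLauncher {
  // Build an ssh command that starts a worker JVM on a remote host.
  // Everything here (user, host, classpath, main class) is illustrative.
  def sshCommand(user: String, host: String, classpath: String): Seq[String] =
    Seq(
      "ssh", s"$user@$host",
      // nohup + & so the worker survives the ssh session ending
      s"nohup java -cp $classpath Child > worker.log 2>&1 &"
    )

  def startWorker(user: String, host: String, classpath: String): Process =
    Process(sshCommand(user, host, classpath)).run()
}
```

This requires the target machine to have SSH keys set up and the application jar already deployed; restarting a crashed worker is then a matter of the launcher (not Akka) re-running the same command.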

Sebastian Kruse