Is it possible to deploy a Lagom Application as a standalone running jar or Docker Container? And if yes, how?
1 Answer
Yes, it is possible to deploy a Lagom application as a standalone JAR or Docker container. To do this, you can follow these steps:
- Configure Cassandra Contact Points: If you are planning to use dynamic service location for your service but need to statically locate Cassandra, which is usually the case in production, then modify the application.conf of your service. Also, disable Lagom's ConfigSessionProvider and fall back to the one provided in akka-persistence-cassandra, which uses the list of endpoints given in contact-points. Your Cassandra configuration should look something like this (see the note after this block for a quick way to run a local Cassandra to test against):
cassandra.default {
  ## list the contact points here
  contact-points = ["127.0.0.1"]
  ## override Lagom's ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
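For a local test of this configuration, one quick way to get a Cassandra node listening on 127.0.0.1 is the official Docker image (just a sketch; the image tag is only an example):

# Throwaway local Cassandra matching the 127.0.0.1 contact point above;
# 9042 is the CQL native transport port that akka-persistence-cassandra connects to by default.
docker run -d --name cassandra -p 9042:9042 cassandra:3.11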
- Provide Kafka Broker Settings (if you are using the Kafka message broker): The next step is to provide Kafka broker settings if you plan to use Lagom's streaming (message broker) API. For this, modify the application.conf of your service if Kafka is to be statically located, which is the case when your service acts only as a consumer; otherwise, you do not need to provide the following configuration.
lagom.broker.kafka {
  service-name = ""
  brokers = "127.0.0.1:9092"
  client {
    default {
      failure-exponential-backoff {
        min = 3s
        max = 30s
        random-factor = 0.2
      }
    }
    producer = ${lagom.broker.kafka.client.default}
    producer.role = ""
    consumer {
      failure-exponential-backoff = ${lagom.broker.kafka.client.default.failure-exponential-backoff}
      offset-buffer = 100
      batching-size = 20
      batching-interval = 5 seconds
    }
  }
}
- Create an Akka Cluster: Finally, we need to form an Akka cluster on our own. Since we are not using ConductR, we have to implement the joining ourselves. This can be done by adding the following lines to application.conf:
akka.cluster.seed-nodes = [
  "akka.tcp://MyService@host1:2552",
  "akka.tcp://MyService@host2:2552"
]
Now that we know what configuration we need to provide to our service, let's take a look at the deployment steps. Since we are using just the java -cp command, we need to package our service and run it. To simplify the process, we have created a shell script for it; a rough sketch of the idea is shown below.
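The sketch below is only an illustration, assuming the sbt-native-packager staging that a standard Lagom/Play build provides (sbt stage placing all JARs under target/universal/stage/lib); the module name my-service and all hosts, ports, and the secret are placeholders:

#!/usr/bin/env bash
# Sketch only: package the service and run it with java -cp.
set -e

# Stage the service: copies all dependency JARs to target/universal/stage/lib
sbt "project my-service" stage

# Run the staged service via Play's production server entry point,
# overriding the statically configured addresses per environment as needed
java -cp "my-service/target/universal/stage/lib/*" \
  -Dhttp.port=9000 \
  -Dplay.http.secret.key=changeme \
  -Dakka.cluster.seed-nodes.0=akka.tcp://MyService@host1:2552 \
  -Dakka.cluster.seed-nodes.1=akka.tcp://MyService@host2:2552 \
  play.core.server.ProdServerStart

# For a Docker image, one option is sbt-native-packager's Docker plugin,
# if it is enabled for the project:
#   sbt "project my-service" docker:publishLocal
#   docker run -p 9000:9000 my-service:0.1.0-SNAPSHOT

The Kafka broker address above can be overridden the same way (for example -Dlagom.broker.kafka.brokers=...), so a single build can be pointed at different environments.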
For a complete example, you can refer to our GitHub repo - Lagom Scala SBT Standalone project.
I hope it helps!

himanshuIIITian
- Was this answer helpful? – himanshuIIITian Jan 30 '19 at 04:33
- Thanks for that. I will try it. :) – André Schmidt Feb 12 '19 at 11:53
- Great, perfect! – himanshuIIITian Feb 13 '19 at 04:42
- Hi again! Now I had the time to set up my project. I have a few services, all in Docker: a security-service and a patient-service, both in the same Docker network. I exposed the hosts 15000:15000, 14999:9000, 16000:8080, 15500:15500 (security), with akka.management.http.port = 15500 and lagom.services.patient-service = "http://"${REMOTE_IP}":16002", and I also tried patient-service = "http://"${REMOTE_IP}":15002". My problem now is that when I connect to 14999 I always get the security service API, never the others. How can I achieve that? Normally I want all services available under a single domain. – André Apr 03 '21 at 09:11