I have a 3-node test cluster and several jobs (simple config, no constraints, Java services). My problem: every time I start a job, it is placed on the first node. If I increase count to 2 and add a distinct_hosts constraint, there are also allocations on the other nodes. But if I start 50 jobs with count = 1, all 50 allocations end up on node1 and none on node2 or node3.
job "test" {
datacenters = ["dc1"]
type = "service"
group "test" {
count = 1
task "test" {
driver = "java"
config {
jar_path = "/usr/share/java/test.jar"
jvm_options = [
"-Xmx256m",
"-Xms256m"]
}
resources {
cpu = 300
memory = 256
}
}
}
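For reference, a sketch of the distinct_hosts variant mentioned above (just the group stanza with the constraint added; the task is unchanged from the spec):

  group "test" {
    count = 2

    # Require each of the 2 instances to be placed on a different client node
    constraint {
      operator = "distinct_hosts"
      value    = "true"
    }

    # (task "test" unchanged from above)
  }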
Now I want to understand/see how Nomad selects the node for an allocation. All 3 nodes have identical resources, so I would expect the jobs to be distributed evenly across them. Why aren't they?
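From what I have read, Nomad's default scheduling algorithm is bin packing, which deliberately fills up one node before using the next, so the behaviour above may be intended rather than a bug. On newer Nomad versions the algorithm can reportedly be switched to spread; I have not verified this on my version, so treat the commands below as a sketch:

  # Show the current scheduler configuration (newer Nomad versions)
  nomad operator scheduler get-config

  # Switch from the default bin-packing algorithm to spread,
  # which prefers nodes with the most free resources
  nomad operator scheduler set-config -scheduler-algorithm=spread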
EDIT: Suddenly the jobs are being distributed across the nodes. So my new question is: is there a verbose output or log where I can see how and why Nomad chose a specific node when starting a job?
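For example, on a recent Nomad CLI the allocation status command has a -verbose flag that prints a "Placement Metrics" section with the per-node scores the scheduler computed (<alloc-id> is a placeholder for a real allocation ID):

  # Find an allocation ID for the job
  nomad job status test

  # Print detailed allocation info; -verbose includes the
  # "Placement Metrics" section with scores such as binpack and
  # job-anti-affinity that explain why this node was chosen
  nomad alloc status -verbose <alloc-id>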