I performed the following steps.
First, I created the replication controller with the following config file:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "fsharp-service",
    "labels": {
      "app": "fsharp-service"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "fsharp-service"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "fsharp-service"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "fsharp-service",
            "image": "fsharp/fsharp:latest",
            "ports": [
              {
                "name": "http-server",
                "containerPort": 3000
              }
            ]
          }
        ]
      }
    }
  }
}
Then I ran the command:
kubectl create -f fsharp-controller.json
Here is the output:
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
cassandra cassandra gcr.io/google-samples/cassandra:v8 app=cassandra 3
fsharp-service fsharp-service fsharp/fsharp:latest app=fsharp-service 1
$ kubectl get pods
NAME READY REASON RESTARTS AGE
cassandra 1/1 Running 0 28m
cassandra-ch1br 1/1 Running 0 28m
cassandra-xog49 1/1 Running 0 27m
fsharp-service-7lrq8 0/1 Error 2 31s
$ kubectl logs fsharp-service-7lrq8
F# Interactive for F# 4.0 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License
For help type #help;;
$ kubectl get pods
NAME READY REASON RESTARTS AGE
cassandra 1/1 Running 0 28m
cassandra-ch1br 1/1 Running 0 28m
cassandra-xog49 1/1 Running 0 28m
fsharp-service-7lrq8 0/1 CrashLoopBackOff 3 1m
$ kubectl describe po fsharp-service-7lrq8
W0417 15:52:36.288492 11461 request.go:302] field selector: v1 - events - involvedObject.name - fsharp-service-7lrq8: need to check if this is versioned correctly.
W0417 15:52:36.289196 11461 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0417 15:52:36.289204 11461 request.go:302] field selector: v1 - events - involvedObject.uid - d4dab099-04ee-11e6-b7f9-0a11c670939b: need to check if this is versioned correctly.
Name: fsharp-service-7lrq8
Image(s): fsharp/fsharp:latest
Node: ip-172-20-0-228.us-west-2.compute.internal/172.20.0.228
Labels: app=fsharp-service
Status: Running
Replication Controllers: fsharp-service (1/1 replicas created)
Containers:
fsharp-service:
Image: fsharp/fsharp:latest
State: Waiting
Reason: CrashLoopBackOff
Ready: False
Restart Count: 3
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Sun, 17 Apr 2016 15:50:50 -0700 Sun, 17 Apr 2016 15:50:50 -0700 1 {default-scheduler } Scheduled Successfully assigned fsharp-service-7lrq8 to ip-172-20-0-228.us-west-2.compute.internal
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:50:51 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id d44c288ea67b
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:50:51 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id d44c288ea67b
Sun, 17 Apr 2016 15:50:55 -0700 Sun, 17 Apr 2016 15:50:55 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id 688a3ed122d2
Sun, 17 Apr 2016 15:50:55 -0700 Sun, 17 Apr 2016 15:50:55 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id 688a3ed122d2
Sun, 17 Apr 2016 15:50:58 -0700 Sun, 17 Apr 2016 15:50:58 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 10s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
Sun, 17 Apr 2016 15:51:15 -0700 Sun, 17 Apr 2016 15:51:15 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id c2e348e1722d
Sun, 17 Apr 2016 15:51:15 -0700 Sun, 17 Apr 2016 15:51:15 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id c2e348e1722d
Sun, 17 Apr 2016 15:51:17 -0700 Sun, 17 Apr 2016 15:51:31 -0700 2 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 20s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
Sun, 17 Apr 2016 15:50:50 -0700 Sun, 17 Apr 2016 15:51:44 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Pulling pulling image "fsharp/fsharp:latest"
Sun, 17 Apr 2016 15:51:45 -0700 Sun, 17 Apr 2016 15:51:45 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id edaea97fb379
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:51:45 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Pulled Successfully pulled image "fsharp/fsharp:latest"
Sun, 17 Apr 2016 15:51:46 -0700 Sun, 17 Apr 2016 15:51:46 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id edaea97fb379
Sun, 17 Apr 2016 15:50:58 -0700 Sun, 17 Apr 2016 15:52:27 -0700 7 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} BackOff Back-off restarting failed docker container
Sun, 17 Apr 2016 15:51:48 -0700 Sun, 17 Apr 2016 15:52:27 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 40s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
What is wrong?
How can I find out why the container won't start correctly? (The controller itself seems to have been created fine.)
UPDATE.
I have tried replacing the plain "fsharp/fsharp:latest" image with another image that runs a service listening on a port, which is how I actually want to use the container.
The image is called "username/someservice:mytag" and runs a service listening on port 3000.
The service is started inside the container as:
mono Service.exe
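(For completeness: instead of baking the start command into the image, it can also be set from the controller config via the container's "command" field, which overrides the image entrypoint. A sketch of the container entry I mean, where the "command" values reflect how my image is laid out:)

```json
{
  "name": "fsharp-service",
  "image": "username/someservice:mytag",
  "command": ["mono", "Service.exe"],
  "ports": [
    { "name": "http-server", "containerPort": 3000 }
  ]
}
```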
When I look at the logs I see this:
$ kubectl logs -p fsharp-service-wjmpv
Running on http://127.0.0.1:3000
Press enter to exit
So the container ends up in the same state, even though the process shouldn't exit:
$ kubectl get pods
NAME READY REASON RESTARTS AGE
fsharp-service-wjmpv 0/1 CrashLoopBackOff 9 25m
I also tried running the container from my image with the -i flag, to keep the container from exiting, but kubectl doesn't seem to recognize the -i flag :\
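I did notice that the v1 container spec has "stdin" and "tty" fields, which look like the closest equivalent of docker run -i -t. This is a sketch of what that would look like in my config (I don't know whether this is the right fix):

```json
"containers": [
  {
    "name": "fsharp-service",
    "image": "username/someservice:mytag",
    "stdin": true,
    "tty": true,
    "ports": [
      { "name": "http-server", "containerPort": 3000 }
    ]
  }
]
```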
Any thoughts?