I have a container that will be deployed several times on request from my application. The only difference between deployments is the environment variables: each one points to a different database (PostgreSQL) and a different table (the table name is also used as the path of the health check). When I deploy the first service on my DC/OS 1.8 cluster it works like a charm; however, the rest of the deployments don't work.
My app.json looks as follows:
{
  "volumes": null,
  "id": "/data-microservices/ms1",
  "cmd": null,
  "args": null,
  "user": null,
  "env": {
    "DATABASE_URL": "postgres://<username>:<password>@<host>:5432/<dbname>",
    "TABLE": "<thetable>"
  },
  "instances": 1,
  "cpus": 0.1,
  "mem": 65,
  "disk": 0,
  "gpus": 0,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600,
  "container": {
    "docker": {
      "image": "imtachu/data-microservice",
      "forcePullImage": true,
      "privileged": false,
      "portMappings": [
        {
          "containerPort": 8080,
          "protocol": "tcp"
        }
      ],
      "network": "BRIDGE"
    }
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/api/<thetable>",
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "ignoreHttp1xx": false
    }
  ],
  "readinessChecks": null,
  "dependencies": null,
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "dcos2-PublicSlaveL-KWSCFODW1ME5-878889582.us-east-1.elb.amazonaws.com"
  },
  "acceptedResourceRoles": null,
  "residency": null,
  "secrets": null,
  "taskKillGracePeriodSeconds": null,
  "portDefinitions": [
    {
      "protocol": "tcp",
      "labels": {}
    }
  ],
  "requirePorts": false
}
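For reference, a second deployment would use the same definition, changing only the id, the env block, and the health check path. The snippet below is just an illustration of that difference; the id /data-microservices/ms2 and the placeholder values are hypothetical, not the actual names in my setup:

  "id": "/data-microservices/ms2",
  "env": {
    "DATABASE_URL": "postgres://<username>:<password>@<otherhost>:5432/<otherdbname>",
    "TABLE": "<othertable>"
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/api/<othertable>",
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "ignoreHttp1xx": false
    }
  ]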
So far, I have tried modifying the hostPort property, changing HAPROXY_0_VHOST to HAPROXY_1_VHOST, and setting requirePorts, hoping to get each container running on a different port (see the sketch below).
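To be concrete, the kind of change I experimented with in portMappings looked roughly like this; the hostPort and servicePort values are only examples, not what is actually deployed (as I understand it, "hostPort": 0 lets Marathon pick a random host port, and servicePort is the port marathon-lb exposes):

  "portMappings": [
    {
      "containerPort": 8080,
      "hostPort": 0,
      "servicePort": 10001,
      "protocol": "tcp"
    }
  ]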
I have also tried deploying a service pointing to table A first and then another pointing to table B, and vice versa; the behavior is always the same: the first deployed service works, and the rest don't.