I have a Jenkins server, an Ansible server and a Web server, all running as EC2 instances. The Jenkins server is configured with "GitHub hook trigger for GITScm polling", and on each build it copies the files (Ansible playbook, Dockerfile, Deployment and Service definition files) to the home directory (/home/ubuntu) of the Ansible and Web server Ubuntu instances.
I have a Deployment file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfirstdevopsappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
      - name: myapp
        image: kubemubin/devops-project-one
        ports:
        - containerPort: 8080
I have a Service file with the following content:
kind: Service
apiVersion: v1
metadata:
  name: myfirstdevopsservice
spec:
  selector:
    name: myapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8081
      # Port to forward to inside the pod
      targetPort: 8080
      # Port accessible outside cluster
      nodePort: 30000
  type: NodePort
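To spell out how the three ports in the Service are meant to fit together (assuming a minikube cluster on the Web server; the curl target below is an assumption about where the NodePort should answer, not something I have verified):

```shell
# How the Service ports are meant to map (minikube assumed):
#   port: 8081       -> the Service's own port, reachable inside the cluster
#   targetPort: 8080 -> the containerPort traffic is forwarded to in each pod
#   nodePort: 30000  -> exposed on the node's IP, i.e. minikube's node IP here
#
# So from the Web server EC2 instance itself, this should answer:
NODE_IP=$(minikube ip)          # minikube's node IP, not the EC2 public IP
curl "http://${NODE_IP}:30000"
```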
After each build, triggered either by a push to GitHub or manually from the Jenkins panel, the updated files are copied successfully to the Ansible and Web server EC2 instances.
On my Web server instance, when I run
kubectl get all
I get the following output:
NAME                                              READY   STATUS    RESTARTS   AGE
pod/myfirstdevopsappdeployment-65d7bf8557-8fn2x   1/1     Running   0          11s
pod/myfirstdevopsappdeployment-65d7bf8557-8hvv2   1/1     Running   0          11s
pod/myfirstdevopsappdeployment-65d7bf8557-f6nxc   1/1     Running   0          11s
pod/myfirstdevopsappdeployment-65d7bf8557-pnr7v   1/1     Running   0          11s
pod/myfirstdevopsappdeployment-65d7bf8557-sb8vz   1/1     Running   0          11s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP          18h
service/myfirstdevopsservice   NodePort    10.99.236.141   <none>        8081:30000/TCP   11s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myfirstdevopsappdeployment   5/5     5            5           11s

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/myfirstdevopsappdeployment-65d7bf8557   5         5         5       11s
My minikube status command gives the following output:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
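One detail I suspect matters (an assumption about the minikube driver, not something the output above confirms): with the default docker driver, `minikube ip` returns a private address on the EC2 host, so the NodePort is bound inside that private network rather than on the EC2 instance's public interface:

```shell
# With the docker driver, the node IP is private to the EC2 host
# (192.168.49.2 is a common default; an assumption, not verified here):
minikube ip                           # e.g. 192.168.49.2
curl "http://$(minikube ip):30000"    # reachable from the EC2 host itself
# http://<ec2-public-ip>:30000 will NOT work unless traffic is forwarded
# from the EC2 host's public interface to the minikube node.
```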
However, I am not able to access the website running in the pods from my browser over the internet:
http://<ec2-public-ip-of-web-server>:<service-port>
I have put all these instances in the same Security Group. What more should I do to be able to access the website running in the pods from a browser on the internet?
I also logged in to one of the pods:
kubectl exec --stdin --tty <pod-name> -- /bin/sh
I could see the relevant files in the /var/www/html directory of the pod.
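Since /var/www/html usually indicates Apache serving on port 80, one thing worth checking is which port the process inside the container actually listens on. A diagnostic sketch (`<pod-name>` is a placeholder, and `ss`/`curl` may not exist in every image):

```shell
# List listening TCP sockets inside the pod (assumes ss is in the image):
kubectl exec <pod-name> -- ss -tlnp

# Or probe the two candidate ports from inside the pod (assumes curl):
kubectl exec <pod-name> -- curl -sS -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080
kubectl exec <pod-name> -- curl -sS -o /dev/null -w "%{http_code}\n" http://127.0.0.1:80
```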
I also executed:
while true;do kubectl port-forward --address 0.0.0.0 svc/myfirstdevopsservice 8080:8081;done
I got this error:
Handling connection for 8080
E0505 05:42:52.702332  119206 portforward.go:409] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod b79c09ba2260934dd48905c44f6e546a1dfa93a1154d094e1e81a89f22652540, uid : exit status 1: 2023/05/05 05:42:52 socat[110790] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
error: lost connection to pod
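The socat line `connect(5, AF=2 127.0.0.1:8080, 16): Connection refused` means the pod itself refused the connection on the targetPort, i.e. nothing inside the container is listening on 8080. If the app actually listens on port 80 instead (a guess based on the files in /var/www/html, not something the logs confirm), the Service's targetPort would need to point there, e.g.:

```shell
# Sketch: repoint the Service's targetPort to 80 (only if the container
# really listens on 80 -- verify first, this is an assumption):
kubectl patch service myfirstdevopsservice --type merge \
  -p '{"spec":{"ports":[{"protocol":"TCP","port":8081,"targetPort":80,"nodePort":30000}]}}'

# Then the port-forward (local 8080 -> service port 8081) should connect:
kubectl port-forward --address 0.0.0.0 svc/myfirstdevopsservice 8080:8081
```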