
What I want is to have two applications running in a pod, each in its own container. Application A is a simple Spring Boot application that makes HTTP requests to another application deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept that HTTP request and add an Authorization token to its header. Application B is mitmdump with a Python script. The issue I am having is that after deploying it on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and found no problems, so I guess the issue lies somewhere within the networking inside a pod). Can someone have a look and guide me on how to solve it?


Here's the deployment and service file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name: application-b-proxy
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1030
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort

And here's how I build the docker image of mitmproxy/mitmdump:

FROM mitmproxy/mitmproxy:latest

WORKDIR /mit_docker
COPY get_token.py .
EXPOSE 1030
ENTRYPOINT ["mitmdump", "--listen-port", "1030", "-s", "get_token.py"]

EDIT

I created two dummy docker images in order to recreate this scenario locally.

APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified (but irrelevant) address; the address should be accessible. The expected response is 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.

APPLICATION B - a proxy application which is supposed to proxy the traffic of the container running Application A. Every request is logged.

  1. Make sure your Docker proxy config is set to listen on http://localhost:8080 - you can check how to do so here
  2. Open a terminal and run this command:
     docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
  3. Open another terminal and run this command:
     docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
  4. Go into the shell of the Application A container in a third terminal:
     docker exec -ti <name of docker container> sh

and try to curl whatever address you want.

The issue I am struggling with: when I run curl from inside the Application A container, the request is intercepted by my proxy and shows up in its logs. But whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.

uiguyf ufdiutd
  • Containers inside a Pod share a network space, it's as straightforward as that; you don't even need to specify `containerPort` in order to make them communicate. I managed to reproduce it between an ubuntu and an nginx container: when logged inside the ubuntu container, if I do `curl localhost:80` it returns the nginx page. I'm trying to reproduce your scenario with your docker image; care to post the get_token.py and your app-a Dockerfile? If they are confidential, take a moment to explain how the communication should work, from the user's HTTP request to the reply they should get with the token. – Will R.O.F. Mar 30 '20 at 15:08
  • I will try to explain it in detail. I have a web application; the front-end and back-end are both kept as separate docker images. Application A is just the back-end part of that web application. I also have an application which is a data keeper, so when the user, let's say, tries to enter a page /users to list all of the users, he makes an HTTP request to the storage application through the back-end. The thing is that his request is not authorized, so the proxy should intercept that HTTP request and add a token (also taken from another app). But my proxy just can't intercept any HTTP requests – uiguyf ufdiutd Mar 30 '20 at 17:39
  • Two containers inside a pod have free communication between them, but you can't transparently proxy all traffic from one of them. Instead you have to code your app to actively connect to the other container in order to get the token. Here are two guides which can help you set up your app: [Nginx Auth Request](https://developer.okta.com/blog/2018/08/28/nginx-auth-request) and [Validating OAuth 2.0 Access Tokens with NGINX](https://www.nginx.com/blog/validating-oauth-2-0-access-tokens-nginx/). – Will R.O.F. Apr 01 '20 at 10:00
  • @willrof, do you mean I cannot simply proxy HTTP traffic from a docker container? And should this Nginx replace my mitmproxy, or do I need to deploy it along with my proxy? Because I don't get the idea – uiguyf ufdiutd Apr 05 '20 at 19:15
  • Hi, I'm sorry, I'll explain better, I mean that the traffic is not forced to go through the proxy (In Kubernetes we call it a [sidecar container](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#understanding).) So, as long as your app is directing the traffic, it should work. I'll try to reproduce your scenario as similar as possible. While I do that can you log into `container a` with `kubectl exec POD_NAME -c application-a -- /bin/bash` and try to curl `localhost:1030` to confirm it's responsive? Also when you say it is not working, what kind of error are you getting? – Will R.O.F. Apr 05 '20 at 23:26
  • Hey, thanks for your help. I tried to curl localhost:1030 and I'm getting a response (though it is invalid, but I think that's because I should not curl the proxy itself; I reproduced that locally and got the same response, so I think it should be that way and it's ok). The main issue is that when an HTTP request is sent from Application A it should be intercepted by my proxy, but as I said my proxy doesn't intercept anything at all; it is as if the HTTP request never left the Application A container, and I do not know how to make the proxy "listen" – uiguyf ufdiutd Apr 06 '20 at 08:43
  • I have done some research and I am sure the issue exists because even though I set the HTTP_PROXY env on the Application A container, it doesn't do anything – uiguyf ufdiutd Apr 06 '20 at 13:07
  • Since my first reproduction with nginx and ubuntu I saw your approach on kubernetes was correct, it probably was something inside the proxy or the app. I can post you an answer with a simpler example to show you how this mechanism works inside kubernetes. – Will R.O.F. Apr 06 '20 at 13:17
  • @willrof, if you posted a complete solution for that issue I'd be very grateful – uiguyf ufdiutd Apr 07 '20 at 07:10
  • I added the example, explained as best as I could and provided you a few links to explore further. I hope it is valuable to you! – Will R.O.F. Apr 07 '20 at 19:32
  • Proxies do not intercept traffic, traffic is directed to the proxy by the application or by routing. – Ron Maupin Apr 07 '20 at 20:34
  • @uiguyfufdiutd Did you have the chance to look at the explanation I provided? – Will R.O.F. Apr 10 '20 at 11:27
  • @willrof actually I have investigated everything you provided in this topic. I'd like to thank you very much, because I imagine it must have been time-consuming. Unfortunately, it didn't solve my issue. I will edit my original post to give more details. – uiguyf ufdiutd Apr 10 '20 at 14:26
  • @uiguyfufdiutd you are welcome, at least I could clarify the kubernetes part to you. I'll thank you if you could upvote my answer showing it was helpful and well-researched. And after you edit your question with more information, I can try to help you further. – Will R.O.F. Apr 10 '20 at 14:30
  • @willrof, I have edited my question. I wrote the steps to reproduce my setup, it can be deployed locally instead of Kubernetes. – uiguyf ufdiutd Apr 13 '20 at 17:36
  • It still points to something in your app-a code not being right. Both the kubernetes and docker examples proved the proxy is working, but the way your app is doing the request is not. Have you tried different approaches to your goal, as I suggested in my answer? – Will R.O.F. Apr 16 '20 at 13:42

1 Answer


Let's first wrap up the facts we discovered during our troubleshooting discussion in the comments:

  • Your need is that APP-A receives an HTTP request and a token needs to be added in flight by the PROXY before sending the request on to your data storage.
  • Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost, source here.
  • You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the statement above.
  • The problem is that your proxy is not intercepting the request as expected.
  • You mentioned that you were able to make it work on localhost, but on localhost the proxy has more power than inside a container.
  • Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
  • In order to make it work, I'll use NGINX proxy_pass: it simply proxies the request to container-b.
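
One more fact worth adding, since it matches what you observed about HTTP_PROXY doing nothing: a JVM process does not read the HTTP_PROXY environment variable by itself; the JVM uses the http.proxyHost/http.proxyPort system properties instead. A sketch of an alternative for your original deployment (assuming application-a starts a plain JVM, which I cannot verify from the question) is to pass those properties through JAVA_TOOL_OPTIONS in the container spec:

```yaml
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "-Dhttp.proxyHost=localhost -Dhttp.proxyPort=1030 -Dhttps.proxyHost=localhost -Dhttps.proxyPort=1030"
```

If this is the cause, requests made through HttpURLConnection-based clients such as RestTemplate would start reaching the proxy without any code change.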

Reproduction:

  • I'll use an nginx server as container-a.

  • I'll build it with this Dockerfile:

FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
  • I'll add this configuration file frontend.conf:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

This tells NGINX that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.

  • I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .

$ docker images 
REPOSITORY        TAG       IMAGE ID        CREATED          SIZE
nginxproxy    latest    7c203a72c650    4 minutes ago    126MB
  • Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort    

NOTE: I set imagePullPolicy as Never because I'm using my local docker image cache.

I'll list the changes I made to help you link it to your current environment:

  • container-a is doing the work of your application-a; I'm serving nginx on port 80 where you are using port 8090.
  • container-b is receiving the request, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; normally it listens on port 80, so I made a custom image that listens on port 8080 instead and named it echo8080.
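
In case you want to reproduce the echo8080 image yourself, here is a sketch of how it could be built. This assumes the mendhak/http-https-echo image reads an HTTP_PORT environment variable at startup, which newer tags do; adjust to your tag if needed:

```dockerfile
FROM mendhak/http-https-echo
# Assumption: the base image honors HTTP_PORT; older tags may need a different mechanism
ENV HTTP_PORT=8080
EXPOSE 8080
```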

  • First I created an nginx pod and exposed it alone to show you it's running (since there is no content behind it, it will return 502 Bad Gateway, but you can see the output is from nginx):

$ kubectl apply -f nginx.yaml 
pod/nginx created
service/nginx-svc created

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx                              1/1     Running   0          64s
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc    NodePort    10.103.178.109   <none>        80:31491/TCP   66s

$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
  • I deleted the nginx pod, then created an echo app pod and exposed it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml 
pod/echo created
service/echo-svc created

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
echo                               1/1     Running   0          118s
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
echo-svc     NodePort    10.102.168.235   <none>        8080:32116/TCP   2m

$ curl http://192.168.39.51:32116
{
  "path": "/",
  "headers": {
    "host": "192.168.39.51:32116",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "192.168.39.51",
  "ip": "::ffff:172.17.0.1",
  "protocol": "http",
  "os": {
    "hostname": "echo"
  },
  • Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml 
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
proxy-deployment-9fc4ff64b-qbljn   2/2     Running   0          1s

$ k get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
proxy-svc    NodePort    10.103.238.103   <none>        80:31000/TCP   31s
  • Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
  "path": "/",
  "headers": {
    "host": "127.0.0.1:8080",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "127.0.0.1",
  "ip": "::ffff:127.0.0.1",
  "protocol": "http",
  "os": {
    "hostname": "proxy-deployment-9fc4ff64b-qbljn"
  },
  • As you can see, the response carries the parameters of the pod: the request arrived from 127.0.0.1 instead of a public IP, showing that NGINX is proxying the request to container-b.
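
As a complement to the NGINX approach: if modifying Application A itself is an option, the same idea of actively directing the traffic can be applied inside the app. Below is a minimal sketch using Java 11's built-in HttpClient; the port 1030 comes from the question, the target URL in the comment is hypothetical, and whether this fits depends on how the Spring Boot app actually issues its requests (the JVM does not read the HTTP_PROXY environment variable on its own):

```java
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.http.HttpClient;

public class ProxyDemo {

    // Build a client whose requests are all routed through the sidecar proxy
    // on localhost:1030. The JVM ignores the HTTP_PROXY environment variable,
    // so the proxy must be configured explicitly like this (or via the
    // -Dhttp.proxyHost/-Dhttp.proxyPort system properties).
    public static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress("localhost", 1030)))
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient();
        // A request would then be sent as usual, e.g. (hypothetical target URL):
        // HttpRequest req = HttpRequest.newBuilder(URI.create("http://datastorage:8080/users")).build();
        // client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(client.proxy().isPresent()); // prints "true": a proxy selector is set
    }
}
```

Every request sent through such a client reaches the sidecar first, which is exactly the behavior curl shows when it is pointed at the proxy.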

I hope this example helps you.

Will R.O.F.