
Hi, I'm very new to Istio/K8s, and I'm trying to make a service I have, test-service, use a new VirtualService that I've created.

Here are the steps I followed.

 kubectl config set-context --current --namespace my-namespace

I create my VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-service
  namespace: my-namespace
spec:
  hosts:
  - test-service
  http:
  - fault:
      delay:
        fixedDelay: 60s
        percentage:
          value: 100
    route:
    - destination:
        host: test-service
        port:
          number: 9100

Then I apply it to K8s:

kubectl apply -f test-service.yaml
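
To double-check that the VirtualService was accepted and has no obvious configuration problems, something like the following can be run (a sketch; the names and namespace are the ones used above):

# Show the VirtualService as stored in the cluster
kubectl get virtualservice test-service -n my-namespace -o yaml

# Ask Istio to flag common configuration issues in the namespace
istioctl analyze -n my-namespace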

But now, when I invoke test-service using gRPC, I can reach the service, but the fault with the delay is not happening.

I don't know in which log I can check whether this test-service is using the VirtualService that I created or not.

Here is my gRPC Service config:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "test-service",
        "namespace": "my-namespace",
        "selfLink": "/api/v1/namespaces/my-namespace/services/test-service",
        "uid": "8a9bc730-4125-4b52-b373-7958796b5df7",
        "resourceVersion": "317889736",
        "creationTimestamp": "2021-07-07T10:39:54Z",
        "labels": {
            "app": "test-service",
            "app.kubernetes.io/managed-by": "Helm",
            "version": "v1"
        },
        "annotations": {
            "meta.helm.sh/release-name": "test-service",
            "meta.helm.sh/release-namespace": "my-namespace"
        },
        "managedFields": [
            {
                "manager": "Go-http-client",
                "operation": "Update",
                "apiVersion": "v1",
                "time": "2021-07-07T10:39:54Z",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:meta.helm.sh/release-name": {},
                            "f:meta.helm.sh/release-namespace": {}
                        },
                        "f:labels": {
                            ".": {},
                            "f:app": {},
                            "f:app.kubernetes.io/managed-by": {},
                            "f:version": {}
                        }
                    },
                    "f:spec": {
                        "f:ports": {
                            ".": {},
                            "k:{\"port\":9100,\"protocol\":\"TCP\"}": {
                                ".": {},
                                "f:port": {},
                                "f:protocol": {},
                                "f:targetPort": {}
                            }
                        },
                        "f:selector": {
                            ".": {},
                            "f:app": {}
                        },
                        "f:sessionAffinity": {},
                        "f:type": {}
                    }
                }
            },
            {
                "manager": "dashboard",
                "operation": "Update",
                "apiVersion": "v1",
                "time": "2022-01-14T15:51:28Z",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:spec": {
                        "f:ports": {
                            "k:{\"port\":9100,\"protocol\":\"TCP\"}": {
                                "f:name": {}
                            }
                        }
                    }
                }
            }
        ]
    },
    "spec": {
        "ports": [
            {
                "name": "test-service",
                "protocol": "TCP",
                "port": 9100,
                "targetPort": 9100
            }
        ],
        "selector": {
            "app": "test-service"
        },
        "clusterIP": "****************",
        "type": "ClusterIP",
        "sessionAffinity": "None"
    },
    "status": {
        "loadBalancer": {}
    }
}
paul
  • Do you have your ports properly labelled in the Service (not VirtualService, but a K8s resource)? Could you include your service yaml? –  Jan 17 '22 at 07:00
  • I included the service config. Thanks – paul Jan 17 '22 at 09:21
  • Change the port name to "grpc" and try again. Istio has a convention that ports must be correctly named. –  Jan 17 '22 at 09:28
  • Optionally, if you are using Kubernetes 1.18+, you can add `appProtocol: grpc` to the port definition and leave the name as is (see the sketch after these comments). –  Jan 17 '22 at 09:34
  • where in service config? – paul Jan 17 '22 at 09:35
  • in `spec.ports` of your Service yaml (kubernetes Service, not Istio's VirtualService) –  Jan 17 '22 at 09:37
  • And I change the port name to grpc? So "grpc": 9100? – paul Jan 17 '22 at 09:38
  • No, change `name: test-service` to `name: grpc` –  Jan 17 '22 at 09:41
  • I tried that last week because I read it in some places, but then the service stopped working, even after removing the fault configuration from the VirtualService. I will try again and check the Istio sidecar logs – paul Jan 17 '22 at 09:46
  • After changing the name, one gRPC service cannot reach the other one using the name test-service, and I cannot see any new requests in the service sidecar log – paul Jan 17 '22 at 10:08
  • Which name did you change? The Service or the port? –  Jan 17 '22 at 10:10
  • I changed "name": "test-service" to "name": "grpc". I did not touch anything else – paul Jan 17 '22 at 10:11
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/241133/discussion-between-p10l-and-paul). –  Jan 17 '22 at 10:26
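
Following the port-naming suggestions in the comments above, here is a sketch of what the spec.ports section of the Kubernetes Service (not the VirtualService) could look like; either variant should let Istio detect the traffic as gRPC:

# Option 1: name the port so it starts with "grpc" (Istio's protocol-selection convention)
spec:
  ports:
  - name: grpc
    protocol: TCP
    port: 9100
    targetPort: 9100

# Option 2 (Kubernetes 1.18+): keep the existing name and declare the protocol explicitly
spec:
  ports:
  - name: test-service
    appProtocol: grpc
    protocol: TCP
    port: 9100
    targetPort: 9100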

1 Answer


According to the Istio documentation, fault configuration only applies to HTTP traffic, not to gRPC:

https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection

KubePony
  • @paul about the logs - you can check the logs of the istio-proxy sidecar containers deployed with your service to see what's going on. As far as I know, all the configuration lives in the Envoy sidecars. You can also try getting the Envoy config like so: kubectl exec -it <pod-name> -c istio-proxy -- cat /etc/istio/proxy/envoy-rev0.json and check whether your fault is added to the config or not (see the sketch below) – KubePony Jan 14 '22 at 16:14
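
As a sketch of that same check without exec'ing into the pod (the pod name below is a placeholder), istioctl can dump the routes pushed to the sidecar and show whether the delay fault made it into the Envoy configuration:

# Dump the HTTP routes Envoy received for the sidecar and look for the fault/delay entry
istioctl proxy-config routes <test-service-pod> -n my-namespace -o json | grep -B2 -A5 fault

# Confirm the sidecar is in sync with the control plane
istioctl proxy-status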