I am able to execute SparkPi on Kubernetes, and I have deployed it on GKE as well.

But when I try to broadcast the Pi value to my microservice at `toys-broadcast-svc.toys.svc.cluster.local`, the hostname does not resolve and I get an `UnknownHostException`. Can anyone help? Am I missing something here?
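One quick way to check whether the name is resolvable at all is to run a lookup from a throwaway pod in the driver's namespace. A minimal sketch (the `dnsutils` pod name and image are assumptions, not part of my setup):

```shell
# Build the FQDN of the target service (service "toys-broadcast-svc" in namespace "toys")
SVC=toys-broadcast-svc
NS=toys
FQDN="${SVC}.${NS}.svc.cluster.local"
echo "${FQDN}"    # toys-broadcast-svc.toys.svc.cluster.local

# Requires cluster access -- look the name up from the namespace the driver runs in:
# kubectl run dnsutils -n toys-spark --rm -it --restart=Never \
#   --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- nslookup "${FQDN}"
```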

For your information:

  • I installed the operator with Helm: `helm install sparkoperator incubator/sparkoperator --namespace toys-spark-operator --set sparkJobNamespace=toys-spark,enableWebhook=true`

  • I am using the spark-operator (the microservices are in a namespace called `toys`, and Spark is in a namespace called `toys-spark`)

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: toys-spark #apps namespace
spec:
  type: Java
  mode: cluster
  image: toysindia/spark:3.0.1
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar
  sparkVersion: 3.0.1
  restartPolicy:
    type: Never
  volumes:
    - name: "toys-spark-test-volume-driver"
      hostPath:
        path: "/host_mnt/usr/local/storage/k8s/dock-storage/spark/driver"
        type: Directory
    - name: "toys-spark-test-volume-executor"
      hostPath:
        path: "/host_mnt/usr/local/storage/k8s/dock-storage/spark/executor"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 3.0.1
    serviceAccount: spark
    volumeMounts:
      - name: "toys-spark-test-volume-driver"
        mountPath: "/host_mnt/usr/local/storage/k8s/dock-storage/spark/driver"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.0.1
    volumeMounts:
      - name: "toys-spark-test-volume-executor"
        mountPath: "/host_mnt/usr/local/storage/k8s/dock-storage/spark/executor"
  sparkConf:
    spark.eventLog.dir: 
    spark.eventLog.enabled: "true"
---
apiVersion: v1
kind: Namespace
metadata:
  name: toys-spark-operator
---
apiVersion: v1
kind: Namespace
metadata:
  name: toys-spark #apps namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: toys-spark #apps namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-operator-role
  namespace: toys-spark #apps namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: spark
    namespace: toys-spark #apps namespace
```
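If the lookup fails, it is also worth confirming that the Service behind the name actually exists and that cluster DNS is healthy. A sketch of the checks (service and namespace names assumed from the hostname above; the kubectl lines need cluster access):

```shell
# Service and namespace of the microservice, per the hostname in the question
SVC=toys-broadcast-svc
SVC_NS=toys
echo "checking ${SVC} in ${SVC_NS}"

# Requires cluster access:
# kubectl -n "${SVC_NS}" get svc "${SVC}"              # does the Service exist?
# kubectl -n "${SVC_NS}" get endpoints "${SVC}"        # does it have ready endpoints?
# kubectl -n kube-system get pods -l k8s-app=kube-dns  # are the cluster DNS pods running?
```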
Jacksquad
  • The `UnknownHostException` indicates that the IP address of a host could not be determined (it is a Java exception, since Spark runs on the JVM). As this appears to be a DNS-resolution error, I suggest following the [DNS Debugging documentation](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) to find the source of the problem. Could you share some of the output from the steps mentioned in that documentation? – Nahuel Apr 13 '21 at 13:01
  • Yes, it was due to DNS not being resolved. After I made the service available and the DNS name resolved, the problem was gone. Thanks for the help @Nahuel – Jacksquad Apr 26 '21 at 15:50

0 Answers