
I have one pod with 2 containers

1. Default scheduler

 containers:
    - name: my-scheduler
      image: >-
        build-releases-external.common.cdn.repositories.cloud.sap/google_containers/kube-scheduler-amd64:v1.19.0-rc.1
      command:
        - /usr/local/bin/kube-scheduler
        - '--scheduler-name=my-scheduler'
        - '--config=/home/config/config.yaml'

config.yaml
leaderElection:
  leaderElect: true
  resourceName: my-scheduler
  resourceNamespace: ns
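
For context, the `leaderElection` stanza above is only a fragment; a complete `config.yaml` for kube-scheduler would look roughly like the sketch below. The `apiVersion` and the `profiles` section are my assumptions based on the v1beta1 component-config API that shipped around v1.19; note that when `--config` is supplied, the scheduler name is normally set through `profiles` rather than a command-line flag.

```yaml
# Hypothetical complete config.yaml (v1beta1 component config, assumed for v1.19)
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  resourceName: my-scheduler
  resourceNamespace: ns
profiles:
  - schedulerName: my-scheduler
```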

2. Extender controller based on kubebuilder

 containers:
    - name: myExtender
      image: >-
        myimage.....
      command:
        - /usr/local/bin/devspace-scheduler-extender
        - '--commonConfig=my-config.yaml'
        - '--enable-leader-election=true'

I want the leader election to pick the same pod for both containers. I saw that when the deployment had 2 replicas I got 2 pods, but the elected leaders (scheduler and extender) ended up in different pods. How can I sync them so that both leaders are in the same pod?

user1365697
    If the two containers are in the same pod, they'll have the same lifecycle and they can never get disconnected from each other; I don't think a leader-election system makes sense here. Usually you'd run only one container in a pod, and if you have a leader-election setup, it would run across (an odd number of) pods. Can you show a complete working example, and more specific evidence of the problem you're seeing? – David Maze Aug 16 '21 at 11:23
  • I will split it for two pods – user1365697 Aug 16 '21 at 11:37
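
The point in the first comment, that leader election runs across separate replicas rather than between containers sharing a pod's lifecycle, can be illustrated with a toy in-memory sketch of Lease-based election. All names here are illustrative, not the real client-go API; the actual mechanism uses a `coordination.k8s.io` Lease object.

```python
# Toy in-memory sketch of Kubernetes-style Lease leader election.
# Hypothetical names for illustration only; not the client-go API.

class Lease:
    def __init__(self):
        self.holder = None        # holderIdentity of the current leader
        self.renew_time = 0.0     # timestamp of the last renewal
        self.duration = 1.0       # leaseDurationSeconds

def try_acquire_or_renew(lease, candidate, now):
    """Acquire the lease if it is free or expired; renew it if we hold it."""
    expired = now - lease.renew_time > lease.duration
    if lease.holder is None or expired or lease.holder == candidate:
        lease.holder = candidate
        lease.renew_time = now
        return True               # candidate is (still) the leader
    return False                  # someone else holds a live lease

lease = Lease()
print(try_acquire_or_renew(lease, "pod-a", now=0.0))   # pod-a becomes leader
print(try_acquire_or_renew(lease, "pod-b", now=0.5))   # lease still live, pod-b loses
print(try_acquire_or_renew(lease, "pod-b", now=2.0))   # lease expired, pod-b takes over
```

The sketch shows why two independent elections (one per container image) can land on different pods: each election only guarantees a single live holder per lease, not any co-location between leases.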

0 Answers