I'm trying to test CI/CD with Gitea and Drone, but my builds are stuck in pending.
I was able to verify that my Gitea is connected to my drone-server.
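(For reference, a quick connectivity check can be done against the NodePort endpoints from the manifests below; the Gitea API call may require authentication on some versions:)

# Drone server health check endpoint
curl -i http://192.168.1.150:30001/healthz
# Gitea version endpoint
curl -i http://192.168.1.150:30000/api/v1/version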
Here is my .drone.yaml:
kind: pipeline
type: docker
name: arm64

platform:
  os: linux
  arch: arm64

steps:
- name: test
  image: 'golang:1.10-alpine'
  commands:
  - go test

- name: build
  image: 'golang:1.10-alpine'
  commands:
  - go build -o ./myapp

- name: publish
  image: plugins/docker
  settings:
    username: mjayson
    password:
      from_secret: docker_pwd
    repo: mjayson/sample
    tags: latest

- name: deliver
  image: sinlead/drone-kubectl
  settings:
    kubernetes_server:
      from_secret: k8s_server
    kubernetes_cert:
      from_secret: k8s_cert
    kubernetes_token:
      from_secret: k8s_token
  commands:
  - kubectl apply -f deployment.yml
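(Side note: the pipeline file itself can be sanity-checked locally with the drone CLI, assuming the CLI and a local Docker daemon are installed; drone exec only runs docker-type pipelines like this one:)

# run the pipeline locally to confirm the steps are well-formed
drone exec .drone.yaml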
I have set up Gitea and Drone in my k8s cluster. The configuration is below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-config
  namespace: dev-ops
data:
  DRONE_GITEA_SERVER: 'http://192.168.1.150:30000'
  DRONE_GITEA_CLIENT_ID: '746a6cd1-cd31-4611-971b-e005bb80e662'
  DRONE_GITEA_CLIENT_SECRET: 'O-NpPnTiFyIGZwqN7aeNDqIWR1sGIEJj8Cehcl0CtVI='
  DRONE_RPC_SECRET: '1be6d1769148d95b5d04a84694cc0447'
  DRONE_SERVER_HOST: '192.168.1.150:30001'
  DRONE_SERVER_PROTO: 'http'
  DRONE_LOGS_TRACE: 'true'
  DRONE_LOGS_PRETTY: 'true'
  DRONE_LOGS_COLOR: 'true'
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: drone-server-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/infra/drone"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drone-server-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: Service
apiVersion: v1
metadata:
  name: drone-server-service
spec:
  type: NodePort
  selector:
    app: drone-server
  ports:
    - name: drone-server-http
      port: 80
      targetPort: 80
      nodePort: 30001
    - name: drone-server-ssh
      port: 443
      targetPort: 443
      nodePort: 30003
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: drone-server-deployment
  labels:
    app: drone-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-server
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
        - name: drone-server
          image: drone/drone:1.9
          ports:
            - containerPort: 80
              name: gitea-http
            - containerPort: 443
              name: gitea-ssh
          envFrom:
            - configMapRef:
                name: drone-config
          volumeMounts:
            - name: pv-data
              mountPath: /data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: drone-server-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-deployment
  labels:
    app: drone-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner
  template:
    metadata:
      labels:
        app: drone-runner
    spec:
      containers:
        - name: drone-runner
          image: 'drone/drone-runner-kube:latest'
          ports:
            - containerPort: 3000
              name: runner-http
          env:
            - name: DRONE_RPC_HOST
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_SERVER_HOST
            - name: DRONE_RPC_PROTO
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_SERVER_PROTO
            - name: DRONE_RPC_SECRET
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_RPC_SECRET
            - name: DRONE_RUNNER_CAPACITY
              value: '2'
            - name: DRONE_LOGS_TRACE
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_TRACE
            - name: DRONE_LOGS_PRETTY
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_PRETTY
            - name: DRONE_LOGS_COLOR
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_COLOR
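(For reference, the logs below were pulled roughly like this, assuming everything was applied to the dev-ops namespace from the ConfigMap:)

kubectl -n dev-ops logs statefulset/drone-server-deployment
kubectl -n dev-ops logs deployment/drone-runner-deployment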
Here are the drone server logs:
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: request queue item",
  "os": "",
  "time": "2020-08-08T19:16:27Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: context canceled",
  "os": "",
  "time": "2020-08-08T19:16:57Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: request queue item",
  "os": "",
  "time": "2020-08-08T19:17:07Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: context canceled",
  "os": "",
  "time": "2020-08-08T19:17:37Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: request queue item",
  "os": "",
  "time": "2020-08-08T19:17:47Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: context canceled",
  "os": "",
  "time": "2020-08-08T19:18:17Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: request queue item",
  "os": "",
  "time": "2020-08-08T19:18:27Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: context canceled",
  "os": "",
  "time": "2020-08-08T19:18:57Z",
  "type": "kubernetes",
  "variant": ""
}
{
  "arch": "",
  "kernel": "",
  "kind": "pipeline",
  "level": "debug",
  "msg": "manager: request queue item",
  "os": "",
  "time": "2020-08-08T19:19:07Z",
  "type": "kubernetes",
  "variant": ""
}
And here is my drone runner log:
time="2020-08-08T19:13:07Z" level=info msg="starting the server" addr=":3000"
time="2020-08-08T19:13:07Z" level=info msg="successfully pinged the remote server"
time="2020-08-08T19:13:07Z" level=info msg="polling the remote server" capacity=2 endpoint="http://192.168.1.150:30001" kind=pipeline type=kubernetes
I'm not sure how to deal with this, as it's my first time facing such an issue. I also tried updating the drone server image from 1 to 1.9, but still nothing happens.
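(For completeness, the image bump was done roughly like this; the exact command is from memory:)

kubectl -n dev-ops set image statefulset/drone-server-deployment drone-server=drone/drone:1.9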