139

While deploying mojaloop, Kubernetes responds with the following errors:

Error: validation failed: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]

My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes doesn't support apps/v1beta2, apps/v1beta1.
How can I make Kubernetes use a not deprecated version or some other supported version?

I am new to Kubernetes, so I would be happy for any support.

d-cubed
Dan

8 Answers

232

In Kubernetes 1.16, some APIs have been removed.

You can check which API groups serve a given Kubernetes object using:

$ kubectl api-resources | grep deployment
deployments                       deploy       apps                           true         Deployment

This means that for Deployments, only an apiVersion from the apps group is valid (the extensions group no longer serves Deployment). The same applies to StatefulSet.

You need to change Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
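As a minimal sketch (the app name and image below are placeholders, not from the question), a Deployment migrated to apps/v1 looks like this. Note that under apps/v1, spec.selector is required and must match the pod template labels:

```yaml
# Before: apiVersion: extensions/v1beta1   (rejected in 1.16)
# After:  apps/v1, with the now-mandatory selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:              # required in apps/v1
      app: my-app
  template:
    metadata:
      labels:
        app: my-app           # must match selector.matchLabels
    spec:
      containers:
        - name: my-app
          image: nginx:1.17   # placeholder image
```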

If this does not help, please add your YAML to the question.

EDIT: Since the issue is caused by Helm templates that use old apiVersions for Deployments, which are not supported in 1.16, there are 2 possible solutions:

1. git clone the whole repo and replace the apiVersion with apps/v1 in all templates/deployment.yaml files using a script.
2. Use an older version of Kubernetes (1.15), whose validator accepts extensions as an apiVersion for Deployment and StatefulSet.
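Option 1 can be sketched as a short shell script. The demo directory tree below is a stand-in for the cloned chart repo (GNU sed assumed):

```shell
set -eu

# Create a throwaway chart tree to demonstrate on (stand-in for the real repo).
mkdir -p demo/chart-a/templates demo/chart-b/templates
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > demo/chart-a/templates/deployment.yaml
printf 'apiVersion: apps/v1beta2\nkind: Deployment\n'       > demo/chart-b/templates/deployment.yaml

# Rewrite every deprecated apiVersion to apps/v1 in place.
find demo -name 'deployment.yaml' \
  | xargs -n 1 sed -i -e 's|extensions/v1beta1|apps/v1|; s|apps/v1beta2|apps/v1|'

# Both files now declare apps/v1.
grep -h '^apiVersion' demo/*/templates/deployment.yaml
```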

maritio_o
PjoterS
  • Can I downgrade Kubernetes, since all the deployment YAML files for Mojaloop are compatible with Kubernetes version 1.15? How can I downgrade, or can I get a solution by downgrading? – Dan Oct 21 '19 at 10:24
  • 3
    I've checked this mojaloop/mojaloop helm chart. Unfortunately, all templates with deployments have apiVersions: `extensions/v1beta1`. One possible workaround is to `git clone` the whole repo and replace the apiVersion with `apps/v1` in all templates/deployment.yaml using the script `find . -name 'deployment.yaml' | xargs -n 1 perl -pi -e 's/(apps\/v1beta2)|(extensions\/v1beta1)/apps\/v1/g'`. A second workaround might be to just use an older version of Kubernetes (1.15), whose validator accepts extensions as an apiVersion for Deployment and StatefulSet. – PjoterS Oct 21 '19 at 10:34
  • @dan are you using `Minikube` or `Kubeadm`? – PjoterS Oct 21 '19 at 10:35
  • Kubeadm, I didn't use Minikube – Dan Oct 21 '19 at 10:55
  • Can you share some steps for the installation of kubeadm specific to version 1.15? I cannot find a specific resource covering the installation of kubeadm 1.15 – Dan Oct 22 '19 at 06:07
  • I've seen that you already created question about installation of kubeadm 1.15 and received a good answer. https://stackoverflow.com/a/58500250/11148139 – PjoterS Oct 25 '19 at 09:21
  • Very useful, thanks. – Alex Jun 20 '23 at 13:33
23

To convert an older Deployment to apps/v1, you can run:

kubectl convert -f ./my-deployment.yaml --output-version apps/v1
Ersoy
aren
  • 2
    Solved it for me - thanks! Just a heads up for the future versions courtesy of the terminal: `kubectl convert is DEPRECATED and will be removed in a future version. "In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version"` – Morné Kruger Aug 18 '20 at 15:38
  • 2
    I'm getting `Error: unknown command "convert" for "kubectl"`. I have **kubectl** version `Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4",...` – Slaus Mar 13 '21 at 12:58
  • @Slaus `kubectl convert` no longer exists as of kubectl 1.22. You can install the separate `kubectl-convert` utility. Info about this can be found under the OS-specific install instructions: https://kubernetes.io/docs/tasks/tools/. For example, here is the MacOS guide: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#install-kubectl-convert-plugin – brainbag Jul 27 '21 at 16:24
12

As an alternative, you can make the change manually. Fetch the helm chart:

helm fetch --untar stable/metabase

Access the chart folder:

cd ./metabase

Change API version:

sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml

Add spec.selector.matchLabels:

spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]

Finally install your altered chart:

helm install ./ \
  -n metabase \
  --namespace metabase \
  --set ingress.enabled=true \
  --set ingress.hosts={metabase.$(minikube ip).nip.io}

Enjoy!

Bruno Wego
10

I prefer kubectl explain.

# kubectl explain deploy
KIND:     Deployment
VERSION:  apps/v1

DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata.

   spec <Object>
     Specification of the desired behavior of the Deployment.

   status       <Object>
     Most recently observed status of the Deployment.

With kubectl explain you can also see specific parameters of an object:

# kubectl explain Service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1

FIELD:    externalTrafficPolicy <string>

DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.
suren
8

To put it simply, you don't force the current installation to use an outdated version of the API; you fix the apiVersion in your config files. If you want to check which versions your current cluster supports, run:

root@ubn64:~# kubectl api-versions | grep -i apps

apps/v1
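Before fixing the config files, you can grep your manifests for the apiVersions that 1.16 removed. The files below are a fabricated example, not from the question:

```shell
set -eu

# Build a tiny example tree: one manifest with a removed apiVersion, one already migrated.
mkdir -p manifests
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > manifests/web.yaml
printf 'apiVersion: apps/v1\nkind: StatefulSet\n'           > manifests/db.yaml

# List every manifest still using an apiVersion removed in Kubernetes 1.16.
grep -rlE 'apiVersion: *(extensions/v1beta1|apps/v1beta[12])' manifests
```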
jkinkead
Shareef
7

I was getting the error below:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

The solution that worked for me:

I changed the line from apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.

Reason: we had upgraded the Kubernetes cluster, hence this error occurred.

Sanoj
1

I was facing the same issue on a cluster that was upgraded to a version that does not support certain API versions (v1.17, which no longer serves apps/v1beta2).

$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
...

Looking at the Helm docs, it seems that the release manifest is stored in the cluster for Helm to reference, and it may include invalid API versions, leading to errors.

The two proposed methods are to either manually edit the manifest (a rather tedious multi-stage process) or use a Helm plugin called mapkubeapis that does it automatically:

$ helm plugin install https://github.com/helm/helm-mapkubeapis

It can be run with the --dry-run flag to simulate the effects:

$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.

and then run without the flag to apply the changes.

buzzedword
MasterAM
0

This was annoying me because I am testing lots of Helm packages, so I wrote a quick script, which could perhaps be adapted to suit your workflow; see below.

New workflow: first fetch the chart as a .tgz into your working directory:

helm fetch repo/chart

Then, in your working directory, run the bash script below, which I named helmk:

helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]

Contents of helmk (you need to edit the kubeconfig cluster name for it to work):

#!/bin/bash
echo "usage: $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]"
echo "This will use your namespace and then shift back to default, so be careful!!"
kubectl create namespace "$2"   # prints a harmless error if the namespace already exists; ignore it
kubectl config set-context MYCLUSTERNAME --namespace "$2"
helm template -n "$1" --namespace "$2" "$3" | kubectl convert -f /dev/stdin | kubectl create --save-config=true "${@:4}" -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we have to switch context manually
kubectl config set-context MYCLUSTERNAME --namespace default

It's a slightly dangerous hack, since I manually switch to your new desired namespace context and then back again, so it should really only be used by single-user devs; otherwise comment that part out.

You will get a warning about using the kubectl convert facility.

If you need to edit the YAML to customise, just replace one of the /dev/stdin occurrences with intermediate files. But it's probably better to get it up using "create" with --save-config as I have, and then simply "apply" your changes, which means they will be recorded in Kubernetes too. Good luck!

john beck