I'm experiencing a weird issue where a k8s resource I previously created, then deleted via kubectl, mysteriously keeps coming back. It's a vanilla k8s cluster (no operators), and I should be the only user of the cluster.

$ kubectl get secret app-secret
NAME         TYPE     DATA   AGE
app-secret   Opaque   2      1d

$ kubectl delete secret app-secret
secret "app-secret" deleted

The next day:

$ kubectl get secret app-secret
NAME         TYPE     DATA   AGE
app-secret   Opaque   2      3h

I'd like to track down who/which entity brought it back.

I reviewed the CronJobs, because the resource reappears at the same wall-clock time every 24 hours. No luck there.
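
For reference, I listed the CronJobs across all namespaces, in case the recreating job lives outside `default`:

$ kubectl get cronjobs --all-namespaces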

I also tried checking the YAML definition of the secret in hopes of finding some kind of 'creator' or 'author' field.

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: app-secret
  creationTimestamp: "2023-04-04T04:18:33Z"
  namespace: default
  resourceVersion: "39713521316"
  uid: 36eb0332-ea37-43d3-8034-4e47a8ebcd43
data:
  foo: YmFy

No luck there either.
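
One thing I learned while digging: newer kubectl releases hide `metadata.managedFields` by default. Next time the secret reappears I plan to re-fetch it with managed fields shown, since each entry there records a `manager` field naming the client or controller that wrote the object:

$ kubectl get secret app-secret -o yaml --show-managed-fields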

I also tried `kubectl get events`, but the events don't appear to include anything about Secrets.
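
This is roughly the query I used, filtering the events down to Secrets:

$ kubectl get events -A --field-selector involvedObject.kind=Secret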

Here are the CRDs installed on the cluster:

$ kubectl get crd
NAME                                                  CREATED AT
alertmanagers.monitoring.coreos.com                   2020-11-27T16:26:59Z
apiservices.management.cattle.io                      2021-10-06T19:38:28Z
apps.catalog.cattle.io                                2020-11-27T16:27:04Z
authconfigs.management.cattle.io                      2021-10-06T19:38:30Z
bgpconfigurations.crd.projectcalico.org               2020-11-27T16:05:00Z
bgppeers.crd.projectcalico.org                        2020-11-27T16:05:00Z
blockaffinities.crd.projectcalico.org                 2020-11-27T16:05:00Z
caliconodestatuses.crd.projectcalico.org              2022-10-04T02:32:09Z
clusterflows.logging.banzaicloud.io                   2022-01-14T14:43:04Z
clusterinformations.crd.projectcalico.org             2020-11-27T16:05:01Z
clusteroutputs.logging.banzaicloud.io                 2022-01-14T14:43:05Z
clusterregistrationtokens.management.cattle.io        2021-10-06T19:38:28Z
clusterrepos.catalog.cattle.io                        2020-11-27T16:27:04Z
clusters.management.cattle.io                         2020-11-27T16:27:03Z
features.management.cattle.io                         2020-11-27T16:27:04Z
felixconfigurations.crd.projectcalico.org             2020-11-27T16:04:49Z
flows.logging.banzaicloud.io                          2022-01-14T14:43:04Z
globalnetworkpolicies.crd.projectcalico.org           2020-11-27T16:05:01Z
globalnetworksets.crd.projectcalico.org               2020-11-27T16:05:01Z
groupmembers.management.cattle.io                     2021-10-06T19:38:31Z
groups.management.cattle.io                           2021-10-06T19:38:31Z
hostendpoints.crd.projectcalico.org                   2020-11-27T16:05:01Z
ipamblocks.crd.projectcalico.org                      2020-11-27T16:04:59Z
ipamconfigs.crd.projectcalico.org                     2020-11-27T16:05:00Z
ipamhandles.crd.projectcalico.org                     2020-11-27T16:05:00Z
ippools.crd.projectcalico.org                         2020-11-27T16:05:00Z
ipreservations.crd.projectcalico.org                  2022-10-04T02:32:16Z
kubecontrollersconfigurations.crd.projectcalico.org   2022-10-04T02:32:17Z
loggings.logging.banzaicloud.io                       2022-01-14T14:43:04Z
navlinks.ui.cattle.io                                 2021-10-06T19:38:28Z
networkpolicies.crd.projectcalico.org                 2020-11-27T16:05:02Z
networksets.crd.projectcalico.org                     2020-11-27T16:05:02Z
nodepools.kube.cloud.ovh.com                          2020-11-27T16:02:54Z
operations.catalog.cattle.io                          2020-11-27T16:27:04Z
outputs.logging.banzaicloud.io                        2022-01-14T14:43:05Z
preferences.management.cattle.io                      2020-11-27T16:27:04Z
prometheuses.monitoring.coreos.com                    2020-11-27T16:26:58Z
prometheusrules.monitoring.coreos.com                 2020-11-27T16:26:59Z
servicemonitors.monitoring.coreos.com                 2020-11-27T16:27:00Z
settings.management.cattle.io                         2020-11-27T16:27:04Z
tokens.management.cattle.io                           2021-10-06T19:38:31Z
userattributes.management.cattle.io                   2021-10-06T19:38:31Z
users.management.cattle.io                            2021-10-06T19:38:31Z
volumesnapshotclasses.snapshot.storage.k8s.io         2020-11-27T16:05:48Z
volumesnapshotcontents.snapshot.storage.k8s.io        2020-11-27T16:05:48Z
volumesnapshots.snapshot.storage.k8s.io               2020-11-27T16:05:48Z

I just deleted it again, but in 24 hours it will likely be back.

What are ways I might track this down?

– paws

1 Answer

Check whether you are using a CD tool like ArgoCD; these tools recreate resources to keep applications in their desired state. Also check the installed CRDs: the controllers behind some of them may take responsibility for keeping certain resources in a desired state.
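
A quick way to look for such controllers is to grep the cluster's workloads and CRDs for well-known GitOps/CD names (a rough heuristic; the patterns below only cover a few common tools and are not exhaustive):

$ kubectl get deployments -A | grep -Ei 'argocd|flux|fleet'
$ kubectl get crd | grep -Ei 'argoproj.io|fluxcd.io|fleet.cattle.io'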

– Aref Riant

  • Additionally, it could be a `FederatedSecret`, which attempts to recreate the `Secret` if you use `kubefed` or another multicluster management tool. Check the federated resources and delete them if there are any. – tuna Apr 04 '23 at 08:16
  • While it doesn't have ArgoCD, it's a Rancher-provisioned node, and I'm learning that Rancher apparently includes Fleet these days. I'll try that direction next. – paws Apr 04 '23 at 18:19