Kubernetes components are linked to each other by labels and selectors. A Deployment simply has no built-in "list of ReplicaSets" or "list of Pods" attribute, so you can't read those lists directly from kubectl describe or kubectl get.
As @Rico suggested above, you have to use label filters. But you can't simply use the labels you specify in the Deployment manifest, because the Deployment generates a random hash and attaches it as an additional label (pod-template-hash) to the ReplicaSet and Pods it manages.
For example, here I have a Deployment plus a standalone Pod, all sharing the label app=http-svc. The first two pods below are managed by the Deployment; the third one is not and shouldn't appear in the result.
ma.chi@~/k8s/deployments % kubectl get pods --show-labels
NAME                   READY   STATUS    RESTARTS   AGE   LABELS
http-9c89b5578-6cqbp   1/1     Running   0          7s    app=http-svc,pod-template-hash=574561134
http-9c89b5578-vwqbx   1/1     Running   0          7s    app=http-svc,pod-template-hash=574561134
nginx-standalone       1/1     Running   0          7s    app=http-svc
ma.chi@~/k8s/deployments %
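To see why the shared label alone is not enough, here is a minimal sketch that needs no cluster: it filters the captured --show-labels output above with plain grep. Selecting on app=http-svc matches all three pods, while the generated pod-template-hash narrows the match to the Deployment's own pods:

```shell
# Captured from the `kubectl get pods --show-labels` session above
PODS='http-9c89b5578-6cqbp app=http-svc,pod-template-hash=574561134
http-9c89b5578-vwqbx app=http-svc,pod-template-hash=574561134
nginx-standalone app=http-svc'

# Filtering on app=http-svc alone matches all three pods...
echo "$PODS" | grep -c 'app=http-svc'                  # prints 3

# ...but the generated pod-template-hash excludes the standalone pod
echo "$PODS" | grep -c 'pod-template-hash=574561134'   # prints 2
```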
The source manifest is:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: http-svc
  name: http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-svc
  strategy: {}
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - image: nginx:1.9.1
        name: nginx1
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: http-svc
  name: nginx-standalone
spec:
  containers:
  - image: nginx:1.9.1
    name: nginx1-standalone
To pinpoint exactly the pods created and managed by your Deployment, you can use the script below (it's ugly, but it's the best I can do):
DEPLOY_NAME=http
# The Deployment's current ReplicaSet, taken from the NewReplicaSet line
RS_NAME=$(kubectl describe deployment "$DEPLOY_NAME" | grep "^NewReplicaSet" | awk '{print $2}'); echo "$RS_NAME"
# The generated pod-template-hash label on that ReplicaSet
POD_HASH_LABEL=$(kubectl get rs "$RS_NAME" -o jsonpath="{.metadata.labels.pod-template-hash}"); echo "$POD_HASH_LABEL"
# Only the pods carrying that hash (tail -n +2 skips the header row)
POD_NAMES=$(kubectl get pods -l "pod-template-hash=$POD_HASH_LABEL" --show-labels | tail -n +2 | awk '{print $1}'); echo "$POD_NAMES"
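The text-processing half of that script can be checked offline. Here is a sketch that runs the same grep/awk extraction against a sample NewReplicaSet line (the sample is my assumption of what kubectl describe deployment prints for the example above, based on the pod names in the session):

```shell
# Assumed one-line excerpt from `kubectl describe deployment http`
DESCRIBE_OUT='NewReplicaSet:   http-9c89b5578 (2/2 replicas created)'

# Same extraction as in the script: the second whitespace-separated field
RS_NAME=$(echo "$DESCRIBE_OUT" | grep "^NewReplicaSet" | awk '{print $2}')
echo "$RS_NAME"   # prints http-9c89b5578
```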