After speaking to the support team at DataDog, I found out the following about what the no_pod pods actually were.
Our Kubernetes check gets the list of containers from the Kubernetes API, which exposes aggregated data. In the metric explorer configuration here, you can see a couple of containers named /docker and / that are getting picked up along with the other containers. Metrics with pod_name:no_pod that come from container_name:/ and container_name:/docker are just metrics aggregated across multiple containers. (So it makes sense that these are the highest values in your graphs.) If you don't want your graphs to show these aggregated container metrics, you can clone the dashboard and then exclude these pods from the query. To do so, on the cloned dashboard, edit the query in the JSON tab and, in the tag scope, add !pod_name:no_pod.
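To make that a little more concrete, here is a rough sketch of what the edited graph definition might look like in the JSON tab. The metric (kubernetes.cpu.usage.total) and the rest of the query are just placeholders for whatever your graph already uses; the only part that matters is the !pod_name:no_pod exclusion inside the tag scope braces.

```json
{
  "viz": "timeseries",
  "requests": [
    {
      "q": "sum:kubernetes.cpu.usage.total{!pod_name:no_pod} by {pod_name}",
      "type": "line"
    }
  ]
}
```

Before the change the scope was simply {*}; swapping in {!pod_name:no_pod} drops the aggregated / and /docker containers from the graph while leaving every real pod untouched.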
So it appears that these "pods" are actually the Docker and root-level containers running outside of the cluster, and they will always show up unless you specifically filter them out, which I now do.
Many thanks to the support team at DataDog for looking into the issue for me, giving me a great explanation of what the pods were, and essentially confirming that I can safely filter them out and not worry about them.