Your goal sounds good. A little bit like bootstrapping OpenShift 4 nowadays.
Now, where to run MinIO is a good question.
Not knowing how you use S3 beyond booting those VMs, nor which hypervisor/Terraform providers are involved, I'll keep it generic:
1/ First question would be: do you need MinIO?
Bootstrapping your Kubernetes cluster from MinIO, you're hitting a chicken-and-egg problem. To pull your initial images & ignition files, the bare minimum would be some HTTP server.
1.1/ MinIO used by other non-Kubernetes services
Assuming MinIO is somehow required for other bare-metal-related stuff: given it's needed to bootstrap Kubernetes, I would consider hosting MinIO as a bare-metal application (or using good old virtualization), outside of Kubernetes.
Going there, I might even consider MinIO alternatives, like Ceph: offering both object and block storage, it could also be useful for setting up dynamically-provisioned PVCs. (Warning: a minimal Ceph cluster is usually larger than a bare-minimum MinIO setup -- but if you're about to drop one of your k8s clusters, maybe that makes sense...)
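If you go that route, a standalone MinIO can be as small as a single container on one host. A minimal sketch, assuming Docker Compose is available on that host -- image tag, ports, path and credentials below are placeholders, not a hardened setup:

```yaml
# docker-compose.yml -- single-node MinIO, outside Kubernetes
services:
  minio:
    image: quay.io/minio/minio
    # "server /data" runs standalone mode; the console listens separately
    command: ["server", "/data", "--console-address", ":9001"]
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    environment:
      MINIO_ROOT_USER: admin          # placeholders -- change both of these
      MINIO_ROOT_PASSWORD: change-me
    volumes:
      - /srv/minio/data:/data         # local disk backing the object store
    restart: unless-stopped
```

One host, one disk: no HA, but nothing your Kubernetes bootstrap depends on lives inside the cluster it's bootstrapping.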
1.2/ Try not to rely on MinIO for cluster bootstrap
Unless there's a reason to have MinIO involved in bootstrapping your VMs, I would just serve my ISO & ignition files with some nginx, lighttpd or apache.
Easier to maintain, re-create, ... And you might be able to configure that HTTP server on whichever host generates your custom ISOs.
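Something like this would do -- a sketch, again assuming Docker Compose, where /srv/bootstrap is a hypothetical directory your ISO-generation step writes into:

```yaml
# docker-compose.yml -- throwaway HTTP server for ISOs & ignition files
services:
  bootstrap-http:
    image: nginx:stable
    ports:
      - "8080:80"
    volumes:
      # nginx serves its default docroot; mount your artifacts there, read-only
      - /srv/bootstrap:/usr/share/nginx/html:ro
    restart: unless-stopped
```

Point your install/ignition URLs at that host on port 8080 and there's nothing else to operate.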
1.3/ MinIO required by Kubernetes-hosted applications
Assuming you mostly use MinIO with your applications: keep it in Kubernetes.
You don't even need a dedicated Kubernetes cluster: you may use nodeSelectors, taints, tolerations, ... such that your MinIO Pods run on somewhat-dedicated Kubernetes workers, while your applications run on "regular" nodes.
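As a sketch of that scheduling setup -- label and taint names, sizes, and the distributed-server args are all assumptions, and the matching headless Service is omitted:

```yaml
# Assumes storage nodes were prepared beforehand, e.g.:
#   kubectl label node storage-0 role=storage
#   kubectl taint node storage-0 dedicated=storage:NoSchedule
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  namespace: storage
spec:
  serviceName: minio        # assumes a headless Service "minio" exists
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      nodeSelector:
        role: storage                   # only nodes carrying this label
      tolerations:
        - key: dedicated                # tolerate the matching taint,
          operator: Equal               # which keeps everything else off
          value: storage                # those workers
          effect: NoSchedule
      containers:
        - name: minio
          image: quay.io/minio/minio
          # MinIO's {0...3} expansion enumerates the four Pod hostnames
          args:
            - server
            - http://minio-{0...3}.minio.storage.svc.cluster.local/data
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```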
You can use separate namespaces, set up cluster RBAC, maybe even NetworkPolicies (assuming your SDN supports them) ... in a way that isolates your storage from your workloads.
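In the same spirit, a NetworkPolicy sketch -- assuming your CNI enforces policies, and with made-up namespace/label names:

```yaml
# Only Pods in namespaces labeled team=apps may reach MinIO's S3 port;
# once a policy selects the minio Pods, all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio-ingress
  namespace: storage
spec:
  podSelector:
    matchLabels:
      app: minio
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: apps
      ports:
        - protocol: TCP
          port: 9000      # S3 API only; the console stays internal
```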
2/ Next question: do you HAVE TO re-create nodes (clusters?) when redeploying your application?
Beyond testing your code, there isn't much value in re-creating everything: with Kubernetes, re-deploying an app from scratch should not require more than resetting its PVCs -- and maybe re-creating all objects, or deleting/re-creating the namespace hosting your application.
Deploying and re-deploying workloads in Kubernetes "should" not require destroying and re-creating your k8s cluster itself, nor any of its nodes.
Still: you shouldn't be afraid to re-create nodes. Workers are disposable: if you want to destroy some and create new ones that re-join your cluster, that makes perfect sense. Running in a cloud (aws/azure/openstack/gce/...), we would usually set up some autoscaler: it destroys instances and pops up new ones according to overall cluster resource usage. Scaling your clusters in and out is perfectly normal.