You can use the documentation here.
These instructions (and the files mentioned below) apply when you have installed the cluster using the bootstrap node.
First you need to set MESOS_ATTRIBUTES, as follows.
Add the following line on the nodes you want to target, in /var/lib/dcos/mesos-slave-common
(adjust the file name to your node type: slave|master|public), then restart the agent service: systemctl restart dcos-mesos-slave.service
TIP: you can check which environment files are loaded in the unit file /etc/systemd/system/dcos-mesos-<mesos-node-type>.service
MESOS_ATTRIBUTES=<attribute>:<value>,<attribute>:<value> ...
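As a concrete sketch of the steps above, here is what tagging a private agent could look like. The attribute:value pair cluster:spark is made up purely for illustration, and the file name assumes a private agent node:

```shell
# Append the attribute definition to the agent's environment file.
# "cluster:spark" is an illustrative attribute:value pair, not a required name.
echo 'MESOS_ATTRIBUTES=cluster:spark' | sudo tee -a /var/lib/dcos/mesos-slave-common

# Restart the agent service so Mesos re-registers with the new attribute
sudo systemctl restart dcos-mesos-slave.service
```

Multiple attributes go on the same line, comma-separated, as in the syntax shown above.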
Then, following the documentation, you can submit your Spark job:
docker run mesosphere/spark:2.3.1-2.2.1-2-hadoop-2.6 /opt/spark/dist/bin/spark-submit --deploy-mode cluster ... --conf spark.mesos.constraints="<attribute>:<value>" --conf spark.mesos.driver.constraints="<attribute>:<value>" ...
Keep in mind that:
- spark.mesos.constraints applies to the executors
- spark.mesos.driver.constraints applies to the driver
Choose them depending on whether you want the driver or the executors to access the data; the Docker containers will be created on the nodes carrying the attributes you specified.
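Putting it together, a fully worked submission pinning both driver and executors to agents tagged with the made-up cluster:spark attribute might look like this (the image tag is taken from the command above; the master URL and application jar are placeholders you must fill in):

```shell
# Pin both the driver and the executors to agents tagged cluster:spark.
# <dispatcher-host> and <application-jar> are placeholders for illustration.
docker run mesosphere/spark:2.3.1-2.2.1-2-hadoop-2.6 \
  /opt/spark/dist/bin/spark-submit \
  --deploy-mode cluster \
  --master mesos://<dispatcher-host>:7077 \
  --conf spark.mesos.constraints="cluster:spark" \
  --conf spark.mesos.driver.constraints="cluster:spark" \
  <application-jar>
```

If you only constrain the executors, drop the spark.mesos.driver.constraints line and the driver can be scheduled on any node.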