I'm learning Spark and wanted to run the simplest possible cluster, consisting of two physical machines. I've done all the basic setup and it seems fine. The output of the start-all.sh script looks as follows:
[username@localhost sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.master.Master-1-localhost.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.out
username@192.168.???.??: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
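For context, start-all.sh launches one worker per host listed in conf/slaves (it reaches the remote machine over SSH, which seems to work given the last line above), so I believe the relevant part of my setup boils down to a conf/slaves file roughly like this, with the remote IP masked the same way as above:

localhost
192.168.???.??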
So there are no errors here, and it seems that the Master is running along with two Workers. However, when I open the web UI at 192.168.???.??:8080, it lists only one worker, the local one.

My issue is similar to the one described here: Spark Clusters: worker info doesn't show on web UI. But unlike in that question, there is nothing unusual in my /etc/hosts file. All it contains is:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
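In case it helps with diagnosis: as far as I understand, each worker has to register with the master at spark://192.168.???.??:7077 (7077 being the default standalone master port), and the remote worker's log, the Worker-1-localhost.localdomain.out file shown above, should then contain a "registered with master" line. This is roughly how I would check on the remote machine:

grep -i "registered with master" /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out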
What am I missing? Both machines are running Fedora Workstation x86_64.