I'm looking to execute Scala code stored in a text file on GCP using the Spark shell.
Using GCP (Google Cloud Platform), I've done the following:
1. Created a Dataproc cluster and named it `gcp-cluster-091122`.
2. Created a Cloud Storage bucket and named it `gcp-bucket-091122p`.
3. Created a simple text file called `1.txt` and uploaded it into the newly created bucket, `gcp-bucket-091122`.
4. Logged onto the VM instance with SSH-in-browser and entered the command `spark-shell` to reach the `scala>` prompt.
From here, how does one read or execute a particular file uploaded to a GCS bucket? I've researched this topic but have been unsuccessful.
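To make the question concrete, here is a minimal sketch of the kind of call I'd expect to work from the `scala>` prompt. My understanding is that Dataproc clusters come with the Cloud Storage connector preinstalled, so a `gs://` path should be readable directly; the bucket and file names are the ones from the steps above:

```scala
// Read the uploaded file straight from the bucket via the GCS connector
// (Dataproc clusters are assumed to have the gs:// scheme wired up).
val lines = sc.textFile("gs://gcp-bucket-091122/1.txt")

// Collect and print the contents to verify the read worked.
lines.collect().foreach(println)
```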
I've also used "GCS Fuse" plug-in code to successfully mount the GCP bucket, gcp-bucket-091122
onto a created local file directory in SSH called lfs-directory-091122
.
So an additional question would be how to execute a file located in the local file directory using Spark Shell?
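To clarify what I mean, here is a sketch of the two approaches I imagine working against the mounted directory. The absolute mount path below is an assumption (I'm guessing the mount lives under my home directory); substitute the real one:

```scala
// Read the file through the GCS Fuse mount using the local filesystem
// scheme (the /home/myuser prefix is a placeholder, not my real path).
val local = sc.textFile("file:///home/myuser/lfs-directory-091122/1.txt")
local.collect().foreach(println)

// If 1.txt actually contains Scala code, the REPL's :load command should
// execute it from the local path, e.g. at the prompt:
//   scala> :load /home/myuser/lfs-directory-091122/1.txt
```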