
I am trying to read a web URL from spark-shell using the textFile method, but I am getting an error. Probably this is not the right way, so can someone please tell me how to access a web URL from the Spark context?

I am using Spark 1.3.0, Scala 2.10.4, and Java 1.7.0_21.

hduser@ubuntu:~$ spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_21)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
SQL context available as sqlContext.

scala> val pagecount = sc.textFile( "https://www.google.co.in/?gws_rd=ssl" )
pagecount: org.apache.spark.rdd.RDD[String] = https://www.google.co.in/?gws_rd=ssl MapPartitionsRDD[1] at textFile at <console>:21

scala> pagecount.count()
java.io.IOException: No FileSystem for scheme: https
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1383)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
 at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:176)
 at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
 at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
 at scala.Option.getOrElse(Option.scala:120)
 at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
 at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
 at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
 at scala.Option.getOrElse(Option.scala:120)
 at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:1511)
 at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
 at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:24)
 at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
 at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
 at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
 at $iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
 at $iwC$$iwC$$iwC.<init>(<console>:37)
 at $iwC$$iwC.<init>(<console>:39)
 at $iwC.<init>(<console>:41)
 at <init>(<console>:43)
 at .<init>(<console>:47)
 at .<clinit>(<console>)
 at .<init>(<console>:7)
 at .<clinit>(<console>)
 at $print(<console>)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
 at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
 at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
 at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
 at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
 at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
 at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
 at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
 at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
 at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
 at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
 at org.apache.spark.repl.Main$.main(Main.scala:31)
 at org.apache.spark.repl.Main.main(Main.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Koushik Chandra

1 Answer


You cannot fetch URL content using textFile directly. According to the documentation, textFile is meant to:

Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI

As you can see, HTTP/HTTPS URLs are not included.
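
For comparison, here is a minimal sketch of the kind of URIs textFile does accept (the paths below are hypothetical):

// Schemes backed by a Hadoop FileSystem implementation work with textFile.
val local = sc.textFile("file:///tmp/data.txt")                      // local file system
val hdfs  = sc.textFile("hdfs://namenode:8020/user/hduser/data.txt") // HDFS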

You can fetch the content first, and then turn it into an RDD.

// Fetch the page content in the driver and materialize it as a String.
val html = scala.io.Source.fromURL("https://spark.apache.org/").mkString
// Split into lines, drop empty ones, and distribute them as an RDD.
val list = html.split("\n").filter(_ != "")
val rdds = sc.parallelize(list)
// Count the lines that mention "Spark".
val count = rdds.filter(_.contains("Spark")).count()
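
Note that fromURL runs in the driver and mkString materializes the whole page in driver memory before parallelize distributes the lines, so this approach only suits pages small enough to fit on the driver.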
chenzhongpu
  • But when I am trying to count a specific word, I get a Java exception ("ArrayIndexOutOfBoundsException"; a corrected sketch follows these comments):
    scala> val html = scala.io.Source.fromURL("https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext")
    scala> val rdds = sc.parallelize(List(html))
    scala> rdds.filter(_.contains("Spark")).count()
    15/04/20 00:30:28 ERROR TaskSetManager: Failed to serialize task 7, not attempting to retry it. java.lang.reflect.InvocationTargetException
    – Koushik Chandra Apr 20 '15 at 07:32
  • Are you going to fetch the URL string itself or the web content? – chenzhongpu Apr 20 '15 at 08:29
  • The URL I want to access is https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext, and I want to find the count of the keyword "Spark". – Koushik Chandra Apr 20 '15 at 08:51
  • The last command is still expecting something more: val count = rdds.filter(_.contains("Spark").count() | – Koushik Chandra Apr 20 '15 at 09:15
  • A closing parenthesis was missing in the last command. It should be val count = rdds.filter(_.contains("Spark")).count() – Koushik Chandra Apr 20 '15 at 09:30
  • This attempts to read the whole file at once; I have a similar use case, but the file is too large to be read at once. I did [this](http://stackoverflow.com/questions/42601126/spark-java-how-to-move-data-from-http-source-to-couchbase-sink). – Abhijit Sarkar Mar 06 '17 at 01:27
  • @chenzhongpu Do the first two lines of code get executed in the driver? If so, and we have 20 GB of files to read from the URL, does it break the driver? – loneStar Jan 30 '18 at 20:51
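
The serialization failure quoted in the first comment is most likely caused by parallelizing the BufferedSource returned by scala.io.Source.fromURL rather than a String: BufferedSource is not serializable, so Spark fails when it tries to ship the task to executors. A minimal corrected sketch, assuming the same page URL from the comments:

// Materialize the page as a String in the driver first (.mkString);
// plain strings serialize fine when Spark distributes the tasks.
val html = scala.io.Source.fromURL(
  "https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext"
).mkString
val rdds = sc.parallelize(html.split("\n").toSeq)
val count = rdds.filter(_.contains("Spark")).count()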