I installed .NET for Apache Spark using the following guide:
The Hello World example worked.
Now I am trying to connect to and read from a Kafka cluster.
The following sample code should connect to a Confluent Cloud Kafka cluster:
var df = spark
    .ReadStream()
    .Format("kafka")
    .Option("kafka.bootstrap.servers", "my-bootstrap-server:9092")
    .Option("subscribe", "wallet_txn_log")
    .Option("startingOffsets", "earliest")
    .Option("kafka.security.protocol", "SASL_SSL")
    .Option("kafka.sasl.mechanism", "PLAIN")
    .Option("kafka.sasl.jaas.config", "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username=\"xxx\" password=\"xxx\";")
    .Load();
When running the code, I get the following error:
Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".
The guide says that I need to add the spark-sql-kafka package in the matching version:
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.13:3.2.1
When I run that, I get this error:
C:\Code\MySparkApp\bin\Debug\net6.0>spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.13:3.2.1
Error: Missing application resource.
I have installed spark-3.2.1-bin-hadoop2.7.
I assume that spark-submit is not able to pull the correct package from Maven.
How should I proceed from here?
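For reference, spark-submit's general form is `spark-submit [options] <application resource> [application arguments]`, so the "Missing application resource" error just means no application JAR followed the options. A sketch of what the full command for this setup might look like (paths and versions copied from above; note the Scala suffix `_2.12`, which matches both the prebuilt spark-3.2.1-bin-hadoop2.7 distribution and the microsoft-spark `_2.12` JAR, rather than the guide's `_2.13` — this is my assumption about the intended combination, not a verified fix):

```shell
# Sketch only, not verified against this setup: Spark options such as
# --packages go before the application resource (the microsoft-spark JAR);
# everything after the JAR is passed to the application itself.
spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1 \
  --class org.apache.spark.deploy.dotnet.DotnetRunner \
  --master local \
  microsoft-spark-3-2_2.12-2.1.1.jar \
  dotnet MySparkApp.dll
```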
Edit 1:
I figured that --packages should be part of the full "run" command.
Here is the latest command:
C:\Code\MySparkApp\bin\Debug\net6.0>spark-submit --class org.apache.spark.deploy.dotnet.DotnetRunner --master local C:\Code\MySparkApp\bin\Debug\net6.0\microsoft-spark-3-2_2.12-2.1.1.jar dotnet MySparkApp.dll C:\Code\MySparkApp\input.txt --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1
It is again giving the error:
Failed to find data source: kafka
Maybe this is the wrong way to reference the Kafka library in a Spark .NET application?
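For what it's worth, spark-submit stops parsing its own options at the first non-option argument (the application resource) and forwards everything after it to the application. A minimal shell sketch (my own illustration, not Spark code — it approximates the split by looking for a `.jar` argument) of how the command above would be divided:

```shell
# Plain-shell illustration (no Spark required) of spark-submit's argument
# split: options before the application JAR belong to spark-submit; the JAR
# is the application resource; everything after it is an application argument.
split_args() {
  spark_opts=""
  app_args=""
  seen_jar=0
  for a in "$@"; do
    if [ "$seen_jar" -eq 1 ]; then
      app_args="$app_args $a"          # after the JAR: application argument
    elif [ "${a%.jar}" != "$a" ]; then
      seen_jar=1                       # the application resource itself
    else
      spark_opts="$spark_opts $a"      # before the JAR: spark-submit option
    fi
  done
  echo "spark-submit options:$spark_opts"
  echo "application arguments:$app_args"
}

split_args --master local microsoft-spark-3-2_2.12-2.1.1.jar \
  dotnet MySparkApp.dll --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1
# prints:
# spark-submit options: --master local
# application arguments: dotnet MySparkApp.dll --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1
```

Under that reading, a `--packages` placed after the JAR would never reach spark-submit at all.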
Log output:
C:\Code\MySparkApp\bin\Debug\net6.0>spark-submit --class
> org.apache.spark.deploy.dotnet.DotnetRunner --master local
> C:\Code\MySparkApp\bin\Debug\net6.0\microsoft-spark-3-2_2.12-2.1.1.jar
> dotnet MySparkApp.dll C:\Code\MySparkApp\input.txt --packages
> org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1 Using Spark's default
> log4j profile: org/apache/spark/log4j-defaults.properties 22/10/06
> 18:57:07 INFO DotnetRunner: Starting DotnetBackend with dotnet.
> 22/10/06 18:57:07 INFO DotnetBackend: The number of DotnetBackend
> threads is set to 10. 22/10/06 18:57:08 INFO DotnetRunner: Port number
> used by DotnetBackend is 55998 22/10/06 18:57:08 INFO DotnetRunner:
> Adding key=spark.jars and
> value=file:/C:/Code/MySparkApp/bin/Debug/net6.0/microsoft-spark-3-2_2.12-2.1.1.jar
> to environment 22/10/06 18:57:08 INFO DotnetRunner: Adding
> key=spark.app.name and
> value=org.apache.spark.deploy.dotnet.DotnetRunner to environment
> 22/10/06 18:57:08 INFO DotnetRunner: Adding key=spark.submit.pyFiles
> and value= to environment 22/10/06 18:57:08 INFO DotnetRunner: Adding
> key=spark.submit.deployMode and value=client to environment 22/10/06
> 18:57:08 INFO DotnetRunner: Adding key=spark.master and value=local to
> environment [2022-10-06T16:57:08.2893549Z] [DESKTOP-PR6Q966] [Info]
> [ConfigurationService] Using port 55998 for connection.
> [2022-10-06T16:57:08.2932382Z] [DESKTOP-PR6Q966] [Info] [JvmBridge]
> JvMBridge port is 55998 [2022-10-06T16:57:08.2943994Z]
> [DESKTOP-PR6Q966] [Info] [JvmBridge] The number of JVM backend thread
> is set to 10. The max number of concurrent sockets in JvmBridge is set
> to 7. 22/10/06 18:57:08 INFO SparkContext: Running Spark version 3.2.1
> 22/10/06 18:57:08 WARN NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where
> applicable 22/10/06 18:57:08 INFO ResourceUtils:
> ============================================================== 22/10/06 18:57:08 INFO ResourceUtils: No custom resources configured
> for spark.driver. 22/10/06 18:57:08 INFO ResourceUtils:
> ============================================================== 22/10/06 18:57:08 INFO SparkContext: Submitted application:
> word_count_sample 22/10/06 18:57:08 INFO ResourceProfile: Default
> ResourceProfile created, executor resources: Map(cores -> name: cores,
> amount: 1, script: , vendor: , memory -> name: memory, amount: 1024,
> script: , vendor: , offHeap -> name: offHeap, amount: 0, script: ,
> vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
> 22/10/06 18:57:08 INFO ResourceProfile: Limiting resource is cpu
> 22/10/06 18:57:08 INFO ResourceProfileManager: Added ResourceProfile
> id: 0 22/10/06 18:57:08 INFO SecurityManager: Changing view acls to:
> Kenan 22/10/06 18:57:08 INFO SecurityManager: Changing modify acls to:
> Kenan 22/10/06 18:57:08 INFO SecurityManager: Changing view acls
> groups to: 22/10/06 18:57:08 INFO SecurityManager: Changing modify
> acls groups to: 22/10/06 18:57:08 INFO SecurityManager:
> SecurityManager: authentication disabled; ui acls disabled; users
> with view permissions: Set(Kenan); groups with view permissions:
> Set(); users with modify permissions: Set(Kenan); groups with modify
> permissions: Set() 22/10/06 18:57:08 INFO Utils: Successfully started
> service 'sparkDriver' on port 56006. 22/10/06 18:57:08 INFO SparkEnv:
> Registering MapOutputTracker 22/10/06 18:57:08 INFO SparkEnv:
> Registering BlockManagerMaster 22/10/06 18:57:08 INFO
> BlockManagerMasterEndpoint: Using
> org.apache.spark.storage.DefaultTopologyMapper for getting topology
> information 22/10/06 18:57:08 INFO BlockManagerMasterEndpoint:
> BlockManagerMasterEndpoint up 22/10/06 18:57:08 INFO SparkEnv:
> Registering BlockManagerMasterHeartbeat 22/10/06 18:57:08 INFO
> DiskBlockManager: Created local directory at
> C:\Users\Kenan\AppData\Local\Temp\blockmgr-ca3af1bf-634a-45b2-879d-ca2c6db97299
> 22/10/06 18:57:08 INFO MemoryStore: MemoryStore started with capacity
> 366.3 MiB 22/10/06 18:57:08 INFO SparkEnv: Registering OutputCommitCoordinator 22/10/06 18:57:09 INFO Utils: Successfully
> started service 'SparkUI' on port 4040. 22/10/06 18:57:09 INFO
> SparkUI: Bound SparkUI to 0.0.0.0, and started at
> http://DESKTOP-PR6Q966.mshome.net:4040 22/10/06 18:57:09 INFO
> SparkContext: Added JAR
> file:/C:/Code/MySparkApp/bin/Debug/net6.0/microsoft-spark-3-2_2.12-2.1.1.jar
> at
> spark://DESKTOP-PR6Q966.mshome.net:56006/jars/microsoft-spark-3-2_2.12-2.1.1.jar
> with timestamp 1665075428422 22/10/06 18:57:09 INFO Executor: Starting
> executor ID driver on host DESKTOP-PR6Q966.mshome.net 22/10/06
> 18:57:09 INFO Executor: Fetching
> spark://DESKTOP-PR6Q966.mshome.net:56006/jars/microsoft-spark-3-2_2.12-2.1.1.jar
> with timestamp 1665075428422 22/10/06 18:57:09 INFO
> TransportClientFactory: Successfully created connection to
> DESKTOP-PR6Q966.mshome.net/172.24.208.1:56006 after 11 ms (0 ms spent
> in bootstraps) 22/10/06 18:57:09 INFO Utils: Fetching
> spark://DESKTOP-PR6Q966.mshome.net:56006/jars/microsoft-spark-3-2_2.12-2.1.1.jar
> to
> C:\Users\Kenan\AppData\Local\Temp\spark-91d1752d-a8f0-42c7-a340-e4e7c3ea84b0\userFiles-6a2073f2-d8d9-4a42-8aac-b5c0c7142763\fetchFileTemp6627445237981542962.tmp
> 22/10/06 18:57:09 INFO Executor: Adding
> file:/C:/Users/Kenan/AppData/Local/Temp/spark-91d1752d-a8f0-42c7-a340-e4e7c3ea84b0/userFiles-6a2073f2-d8d9-4a42-8aac-b5c0c7142763/microsoft-spark-3-2_2.12-2.1.1.jar
> to class loader 22/10/06 18:57:09 INFO Utils: Successfully started
> service 'org.apache.spark.network.netty.NettyBlockTransferService' on
> port 56030. 22/10/06 18:57:09 INFO NettyBlockTransferService: Server
> created on DESKTOP-PR6Q966.mshome.net:56030 22/10/06 18:57:09 INFO
> BlockManager: Using
> org.apache.spark.storage.RandomBlockReplicationPolicy for block
> replication policy 22/10/06 18:57:09 INFO BlockManagerMaster:
> Registering BlockManager BlockManagerId(driver,
> DESKTOP-PR6Q966.mshome.net, 56030, None) 22/10/06 18:57:09 INFO
> BlockManagerMasterEndpoint: Registering block manager
> DESKTOP-PR6Q966.mshome.net:56030 with 366.3 MiB RAM,
> BlockManagerId(driver, DESKTOP-PR6Q966.mshome.net, 56030, None)
> 22/10/06 18:57:09 INFO BlockManagerMaster: Registered BlockManager
> BlockManagerId(driver, DESKTOP-PR6Q966.mshome.net, 56030, None)
> 22/10/06 18:57:09 INFO BlockManager: Initialized BlockManager:
> BlockManagerId(driver, DESKTOP-PR6Q966.mshome.net, 56030, None)
> 22/10/06 18:57:09 INFO SharedState: Setting
> hive.metastore.warehouse.dir ('null') to the value of
> spark.sql.warehouse.dir. 22/10/06 18:57:09 INFO SharedState: Warehouse
> path is 'file:/C:/Code/MySparkApp/bin/Debug/net6.0/spark-warehouse'.
> 22/10/06 18:57:10 INFO InMemoryFileIndex: It took 21 ms to list leaf
> files for 1 paths. 22/10/06 18:57:12 INFO FileSourceStrategy: Pushed
> Filters: 22/10/06 18:57:12 INFO FileSourceStrategy: Post-Scan Filters:
> (size(split(value#0, , -1), true) > 0),isnotnull(split(value#0, ,
> -1)) 22/10/06 18:57:12 INFO FileSourceStrategy: Output Data Schema: struct<value: string> 22/10/06 18:57:12 INFO CodeGenerator: Code
> generated in 181.3829 ms 22/10/06 18:57:12 INFO MemoryStore: Block
> broadcast_0 stored as values in memory (estimated size 286.3 KiB, free
> 366.0 MiB) 22/10/06 18:57:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.1 KiB,
> free 366.0 MiB) 22/10/06 18:57:12 INFO BlockManagerInfo: Added
> broadcast_0_piece0 in memory on DESKTOP-PR6Q966.mshome.net:56030
> (size: 24.1 KiB, free: 366.3 MiB) 22/10/06 18:57:12 INFO SparkContext:
> Created broadcast 0 from showString at <unknown>:0 22/10/06 18:57:12
> INFO FileSourceScanExec: Planning scan with bin packing, max size:
> 4194406 bytes, open cost is considered as scanning 4194304 bytes.
> 22/10/06 18:57:12 INFO DAGScheduler: Registering RDD 3 (showString at
> <unknown>:0) as input to shuffle 0 22/10/06 18:57:12 INFO
> DAGScheduler: Got map stage job 0 (showString at <unknown>:0) with 1
> output partitions 22/10/06 18:57:12 INFO DAGScheduler: Final stage:
> ShuffleMapStage 0 (showString at <unknown>:0) 22/10/06 18:57:12 INFO
> DAGScheduler: Parents of final stage: List() 22/10/06 18:57:12 INFO
> DAGScheduler: Missing parents: List() 22/10/06 18:57:12 INFO
> DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at
> showString at <unknown>:0), which has no missing parents 22/10/06
> 18:57:12 INFO MemoryStore: Block broadcast_1 stored as values in
> memory (estimated size 38.6 KiB, free 366.0 MiB) 22/10/06 18:57:12
> INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory
> (estimated size 17.6 KiB, free 365.9 MiB) 22/10/06 18:57:12 INFO
> BlockManagerInfo: Added broadcast_1_piece0 in memory on
> DESKTOP-PR6Q966.mshome.net:56030 (size: 17.6 KiB, free: 366.3 MiB)
> 22/10/06 18:57:12 INFO SparkContext: Created broadcast 1 from
> broadcast at DAGScheduler.scala:1478 22/10/06 18:57:13 INFO
> DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0
> (MapPartitionsRDD[3] at showString at <unknown>:0) (first 15 tasks are
> for partitions Vector(0)) 22/10/06 18:57:13 INFO TaskSchedulerImpl:
> Adding task set 0.0 with 1 tasks resource profile 0 22/10/06 18:57:13
> INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0)
> (DESKTOP-PR6Q966.mshome.net, executor driver, partition 0,
> PROCESS_LOCAL, 4850 bytes) taskResourceAssignments Map() 22/10/06
> 18:57:13 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 22/10/06
> 18:57:13 INFO CodeGenerator: Code generated in 10.268 ms 22/10/06
> 18:57:13 INFO CodeGenerator: Code generated in 4.9722 ms 22/10/06
> 18:57:13 INFO CodeGenerator: Code generated in 6.0205 ms 22/10/06
> 18:57:13 INFO CodeGenerator: Code generated in 5.18 ms 22/10/06
> 18:57:13 INFO FileScanRDD: Reading File path:
> file:///C:/Code/MySparkApp/input.txt, range: 0-102, partition values:
> [empty row] 22/10/06 18:57:13 INFO LineRecordReader: Found UTF-8 BOM
> and skipped it 22/10/06 18:57:13 INFO Executor: Finished task 0.0 in
> stage 0.0 (TID 0). 2845 bytes result sent to driver 22/10/06 18:57:13
> INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 319 ms
> on DESKTOP-PR6Q966.mshome.net (executor driver) (1/1) 22/10/06
> 18:57:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have
> all completed, from pool 22/10/06 18:57:13 INFO DAGScheduler:
> ShuffleMapStage 0 (showString at <unknown>:0) finished in 0.379 s
> 22/10/06 18:57:13 INFO DAGScheduler: looking for newly runnable stages
> 22/10/06 18:57:13 INFO DAGScheduler: running: Set() 22/10/06 18:57:13
> INFO DAGScheduler: waiting: Set() 22/10/06 18:57:13 INFO DAGScheduler:
> failed: Set() 22/10/06 18:57:13 INFO ShufflePartitionsUtil: For
> shuffle(0), advisory target size: 67108864, actual target size
> 1048576, minimum partition size: 1048576 22/10/06 18:57:13 INFO
> CodeGenerator: Code generated in 11.5441 ms 22/10/06 18:57:13 INFO
> HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is
> set to true, but current version of codegened fast hashmap does not
> support this aggregate. 22/10/06 18:57:13 INFO CodeGenerator: Code
> generated in 10.7919 ms 22/10/06 18:57:13 INFO SparkContext: Starting
> job: showString at <unknown>:0 22/10/06 18:57:13 INFO DAGScheduler:
> Got job 1 (showString at <unknown>:0) with 1 output partitions
> 22/10/06 18:57:13 INFO DAGScheduler: Final stage: ResultStage 2
> (showString at <unknown>:0) 22/10/06 18:57:13 INFO DAGScheduler:
> Parents of final stage: List(ShuffleMapStage 1) 22/10/06 18:57:13 INFO
> DAGScheduler: Missing parents: List() 22/10/06 18:57:13 INFO
> DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[7] at
> showString at <unknown>:0), which has no missing parents 22/10/06
> 18:57:13 INFO MemoryStore: Block broadcast_2 stored as values in
> memory (estimated size 37.4 KiB, free 365.9 MiB) 22/10/06 18:57:13
> INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory
> (estimated size 17.7 KiB, free 365.9 MiB) 22/10/06 18:57:13 INFO
> BlockManagerInfo: Added broadcast_2_piece0 in memory on
> DESKTOP-PR6Q966.mshome.net:56030 (size: 17.7 KiB, free: 366.2 MiB)
> 22/10/06 18:57:13 INFO SparkContext: Created broadcast 2 from
> broadcast at DAGScheduler.scala:1478 22/10/06 18:57:13 INFO
> DAGScheduler: Submitting 1 missing tasks from ResultStage 2
> (MapPartitionsRDD[7] at showString at <unknown>:0) (first 15 tasks are
> for partitions Vector(0)) 22/10/06 18:57:13 INFO TaskSchedulerImpl:
> Adding task set 2.0 with 1 tasks resource profile 0 22/10/06 18:57:13
> INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 1)
> (DESKTOP-PR6Q966.mshome.net, executor driver, partition 0, NODE_LOCAL,
> 4453 bytes) taskResourceAssignments Map() 22/10/06 18:57:13 INFO
> Executor: Running task 0.0 in stage 2.0 (TID 1) 22/10/06 18:57:13 INFO
> BlockManagerInfo: Removed broadcast_1_piece0 on
> DESKTOP-PR6Q966.mshome.net:56030 in memory (size: 17.6 KiB, free:
> 366.3 MiB) 22/10/06 18:57:13 INFO ShuffleBlockFetcherIterator: Getting 1 (864.0 B) non-empty blocks including 1 (864.0 B) local and 0 (0.0 B)
> host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
> 22/10/06 18:57:13 INFO ShuffleBlockFetcherIterator: Started 0 remote
> fetches in 8 ms 22/10/06 18:57:13 INFO Executor: Finished task 0.0 in
> stage 2.0 (TID 1). 6732 bytes result sent to driver 22/10/06 18:57:13
> INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 1) in 124 ms
> on DESKTOP-PR6Q966.mshome.net (executor driver) (1/1) 22/10/06
> 18:57:13 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have
> all completed, from pool 22/10/06 18:57:13 INFO DAGScheduler:
> ResultStage 2 (showString at <unknown>:0) finished in 0.136 s 22/10/06
> 18:57:13 INFO DAGScheduler: Job 1 is finished. Cancelling potential
> speculative or zombie tasks for this job 22/10/06 18:57:13 INFO
> TaskSchedulerImpl: Killing all running tasks in stage 2: Stage
> finished 22/10/06 18:57:13 INFO DAGScheduler: Job 1 finished:
> showString at <unknown>:0, took 0.149812 s 22/10/06 18:57:13 INFO
> CodeGenerator: Code generated in 7.0234 ms 22/10/06 18:57:13 INFO
> CodeGenerator: Code generated in 7.0701 ms
> +------+-----+
> |  word|count|
> +------+-----+
> |  .NET|    3|
> |Apache|    2|
> |  This|    2|
> | Spark|    2|
> |   app|    2|
> | World|    1|
> |   for|    1|
> |counts|    1|
> | words|    1|
> |  with|    1|
> |  uses|    1|
> | Hello|    1|
> +------+-----+
>
> Moo 22/10/06 18:57:13 ERROR DotnetBackendHandler: Failed to execute
> 'load' on 'org.apache.spark.sql.streaming.DataStreamReader' with
> args=() [2022-10-06T16:57:13.6895055Z] [DESKTOP-PR6Q966] [Error]
> [JvmBridge] JVM method execution failed: Nonstatic method 'load'
> failed for class '22' when called with no arguments
> [2022-10-06T16:57:13.6895347Z] [DESKTOP-PR6Q966] [Error] [JvmBridge]
> org.apache.spark.sql.AnalysisException: Failed to find data source:
> kafka. Please deploy the application as per the deployment section of
> "Structured Streaming + Kafka Integration Guide".
> at org.apache.spark.sql.errors.QueryCompilationErrors$.failedToFindKafkaDataSourceError(QueryCompilationErrors.scala:1037)
> at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:668)
> at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:156)
> at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:143)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at org.apache.spark.api.dotnet.DotnetBackendHandler.handleMethodCall(DotnetBackendHandler.scala:165)
> at org.apache.spark.api.dotnet.DotnetBackendHandler.$anonfun$handleBackendRequest$2(DotnetBackendHandler.scala:105)
> at org.apache.spark.api.dotnet.ThreadPool$$anon$1.run(ThreadPool.scala:34)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
>
> [2022-10-06T16:57:13.6986588Z] [DESKTOP-PR6Q966] [Exception]
> [JvmBridge] JVM method execution failed: Nonstatic method 'load'
> failed for class '22' when called with no arguments at
> Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic,
> Object classNameOrJvmObjectReference, String methodName, Object[]
> args) Unhandled exception. System.Exception: JVM method execution
> failed: Nonstatic method 'load' failed for class '22' when called with
> no arguments ---> Microsoft.Spark.JvmException:
> org.apache.spark.sql.AnalysisException: Failed to find data source:
> kafka. Please deploy the application as per the deployment section of
> "Structured Streaming + Kafka Integration Guide".
> at org.apache.spark.sql.errors.QueryCompilationErrors$.failedToFindKafkaDataSourceError(QueryCompilationErrors.scala:1037)
> at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:668)
> at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:156)
> at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:143)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at org.apache.spark.api.dotnet.DotnetBackendHandler.handleMethodCall(DotnetBackendHandler.scala:165)
> at org.apache.spark.api.dotnet.DotnetBackendHandler.$anonfun$handleBackendRequest$2(DotnetBackendHandler.scala:105)
> at org.apache.spark.api.dotnet.ThreadPool$$anon$1.run(ThreadPool.scala:34)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at
..