
I am trying to run Druid on a local Vagrant machine. I use Puppet to fetch the archives, extract them, etc. However, I run into a problem when trying to start the historical and overlord nodes.

I use the following code to start the servers:

file_line { "configure_historical_server":
  path    => '/usr/share/druid-services-0.6.160/config/historical/runtime.properties',
  line    => 'druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.147","io.druid.extensions:druid-hdfs-storage:0.6.147"]',
  match   => '^druid\.extensions\.coordinates',
  require => [ Exec["run_coordinator"] ],
}

exec { "run_historical":
  cwd     => "/usr/share/druid-services-0.6.160/",
  command => "nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/*:/usr/lib/hadoop/client/*:config/historical io.druid.cli.Main server historical&",
  path    => ["/bin", "/usr/bin"],
  require => [ File_Line["configure_historical_server"] ],

}

file_line { "configure_overlord_server":
  path    => '/usr/share/druid-services-0.6.160/config/overlord/runtime.properties',
  line    => 'druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.147","io.druid.extensions:druid-hdfs-storage:0.6.147"]',
  match   => '^druid\.extensions\.coordinates',
  require => [ Exec["run_historical"] ],
}

exec { "run_overlord":
  cwd     => "/usr/share/druid-services-0.6.160/",
  command => "nohup java -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/*:/usr/lib/hadoop/client/*:config/overlord io.druid.cli.Main server overlord&",
  path    => ["/bin", "/usr/bin"],
  require => [ File_Line["configure_overlord_server"] ],
}

but both the overlord and historical servers fail with the following error:

Caused by: java.io.FileNotFoundException: /home/vagrant/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/aether-e687f19b-733b-4348-a06f-e67797a26748-hadoop-hdfs-2.3.0.jar-in-progress (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:151)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:139)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.move(DefaultFileProcessor.java:214)
at io.tesla.aether.connector.AetherRepositoryConnector$GetTask.rename(AetherRepositoryConnector.java:624)
at io.tesla.aether.connector.AetherRepositoryConnector$GetTask.run(AetherRepositoryConnector.java:404)
at io.tesla.aether.connector.AetherRepositoryConnector.get(AetherRepositoryConnector.java:232)
... 8 more

Any idea how to fix this? When I start these servers from the command line one after another (I wait until the historical node has started before starting the overlord), everything works fine.
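Since starting the nodes one after another works, one workaround is to make Puppet enforce the same ordering. The sketch below is an assumption, not tested against this setup: it polls until the historical node accepts connections before the overlord exec is allowed to run, and it assumes the historical node listens on port 8083 (check `druid.port` in `config/historical/runtime.properties`).

```puppet
# Sketch (assumed port 8083): block the overlord start until the
# historical node is actually listening, mirroring the manual
# "wait, then start the second one" workflow.
exec { "wait_for_historical":
  command   => "nc -z localhost 8083",
  path      => ["/bin", "/usr/bin"],
  tries     => 30,        # retry the check up to 30 times...
  try_sleep => 5,         # ...sleeping 5 seconds between attempts
  require   => Exec["run_historical"],
  before    => Exec["run_overlord"],
}
```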

homar
  • Well, is the file that is not found created by the former service? – Felix Frank Nov 05 '14 at 23:41
  • Yes, but that should not matter. When the Vagrant machine is up and I start both historical and overlord one after another from two different terminal tabs, they crash with the same error. However, if I wait for one of them to start and then start the second one, they work fine. – homar Nov 07 '14 at 08:40
  • They *both* crash with a file not found error? – Felix Frank Nov 07 '14 at 10:44
  • Yes, exactly. Both of them fail with a similar error. – homar Nov 07 '14 at 15:15
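The comments above suggest the real culprit: the `-in-progress` file in the error lives under `~/.m2/repository`, which is where both JVMs resolve their `druid.extensions.coordinates` on startup, so two concurrent starts race on the same download. A sketch of an alternative fix, assuming the `pull-deps` tool shipped with this Druid version behaves as in its documentation, is to prefetch the extensions once before either server starts:

```puppet
# Sketch (assumes io.druid.cli.Main tools pull-deps exists in this
# Druid release and reads druid.extensions.coordinates from the
# config on the classpath): resolve all extensions into ~/.m2 once,
# so the two server JVMs never download concurrently.
exec { "prefetch_druid_extensions":
  cwd     => "/usr/share/druid-services-0.6.160/",
  command => "java -classpath lib/*:config/historical io.druid.cli.Main tools pull-deps",
  path    => ["/bin", "/usr/bin"],
  before  => [ Exec["run_historical"], Exec["run_overlord"] ],
}
```

With the dependencies already cached, the startup order of the two servers should no longer matter.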
