
I have a problem with GeoMesa failing when adding indexes; does anyone know where the problem is?

 geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values
DEBUG Looking up Accumulo Instance Id in Zookeeper for 5000 milliseconds.
DEBUG You can specify the Instance Id via the command line or
change the Zookeeper timeout by setting the system property 'instance.zookeeper.timeout'.
INFO  Running map reduce index job for attributes: [asset_id] with coverage: full...
ERROR Error encountered running attribute index command. Check hadoop's job history logs for more information.
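
(Aside: if the ZooKeeper lookup itself is ever the bottleneck, the DEBUG hint above can be followed directly. A minimal sketch, assuming the stock conf/geomesa-env.sh is in use so that CUSTOM_JAVA_OPTS is passed through to the JVM; adjust if your install wires JVM options differently:

 export CUSTOM_JAVA_OPTS="-Dinstance.zookeeper.timeout=30000"
 geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values

In this case, though, the lookup succeeded and the failure came later, so this is only relevant if the timeout messages recur.)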

I found that no jobs were created in Hadoop, so there are no job history logs, but in the tserver logs I found:

2021-01-25 12:32:05,129 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
        at sun.nio.ch.IOUtil.write(IOUtil.java:65)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
        at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
        at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
        at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
        at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
        at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
2021-01-25 12:32:05,202 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
        (identical stack trace as above)
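
(These broken-pipe warnings on the tserver typically just mean a client closed its connection mid-write, here most likely the failing CLI process, so they are probably a symptom rather than the root cause.)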

hadoop 3.1
accumulo 1.9.3
geomesa-accumulo 2.4.0
Any advice?

Here are the GeoMesa logs; the error looks the same as the ZooKeeper one:

2021-01-25 13:29:38,762 DEBUG [org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator] IOException thrown
java.io.IOException: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
        at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:760)
        at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
        at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
        at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
        at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:158)
        at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.flush(ThriftTransportPool.java:346)
        at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
        at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.send_startMultiScan(TabletClientService.java:326)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:308)
        at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:684)
        ... 6 more
Caused by: java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:475)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
        ... 13 more
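
(ClosedByInterruptException indicates the scan thread was interrupted while blocked on the socket, which usually happens when the client tears down its query threads after a failure elsewhere; this trace too is likely downstream of the real problem, the NullPointerException below.)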

Here are more logs from GeoMesa; there seems to be a problem with job creation:

2021-01-25 13:54:36,873 WARN  [org.apache.hadoop.mapred.LocalJobRunner] job_local1471203421_0001
java.lang.Exception: java.lang.NullPointerException
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.NullPointerException
        at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:103)
        at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:102)
        at org.locationtech.geomesa.utils.io.WithStore.apply(WithStore.scala:37)
        at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper.setup(AttributeIndexJob.scala:102)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
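
The job id job_local1471203421_0001 shows the job ran in Hadoop's LocalJobRunner rather than on YARN, which usually means mapreduce.framework.name does not resolve to yarn in the configuration the GeoMesa tools pick up. A quick check, assuming a standard $HADOOP_CONF_DIR layout:

 grep -A1 'mapreduce.framework.name' "$HADOOP_CONF_DIR/mapred-site.xml"

If the property is missing or set to local, the job runs in-process, which matches the job_local id above.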

Error from the MapReduce job:

Exception in thread "main" java.lang.NumberFormatException: For input string: "local1471203421"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at org.apache.hadoop.mapreduce.TypeConverter.toClusterTimeStamp(TypeConverter.java:111)
        at org.apache.hadoop.mapreduce.TypeConverter.toYarn(TypeConverter.java:82)
        at org.apache.hadoop.mapred.ClientServiceDelegate.<init>(ClientServiceDelegate.java:121)
        at org.apache.hadoop.mapred.ClientCache.getClient(ClientCache.java:68)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
        at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:215)
        at org.apache.hadoop.mapreduce.tools.CLI.getJob(CLI.java:660)
        at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:470)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
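
The NumberFormatException fits the same picture: YARN job ids embed a numeric cluster timestamp (job_<timestamp>_<sequence>, e.g. job_1611576000000_0001), and the YARN client cannot parse the LocalJobRunner id in that position. Purely as an illustration, querying the local id through a YARN-configured client reproduces the error:

 mapred job -status job_local1471203421_0001
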
  • Are there any more detailed errors in `$GEOMESA_ACCUMULO_HOME/logs/geomesa.log`? – Emilio Lahr-Vivaz Jan 25 '21 at 13:08
  • Main post updated with the log; the error is the same as the ZooKeeper one. – Ivershin Viktor Jan 25 '21 at 13:35
  • that `NumberFormatException` seems to indicate that something is wrong with your Hadoop cluster, but I'm not sure how to diagnose it any further or fix it. I would also say that AddAttributeIndexJob is not well supported at this point, so it might need to be updated for Hadoop 3. – Emilio Lahr-Vivaz Jan 25 '21 at 21:15
  • Please use tags a bit more sparingly when posting questions, @IvershinViktor; you added tags for 'accumulo' and 'hadoop', but these components are only marginally related to your question, if at all. Just because a library is part of a software stack does not mean that it is appropriate to tag a StackOverflow question as pertaining to that library. Users following the "accumulo" tag, for example, will not necessarily be interested in an error pertaining to geomesa. Please consider the audience when tagging questions in future. Thank you! – Christopher Jan 26 '21 at 03:04
  • @Christopher the problem seems related to the Hadoop cluster, so the geomesa, accumulo, and hadoop tags are all relevant. – Ivershin Viktor Jan 26 '21 at 09:54
  • @IvershinViktor please see this stack exchange meta answer containing tips on effective tag use: https://meta.stackexchange.com/a/18879 – Christopher Jan 27 '21 at 13:48

1 Answer


Hadoop 3.1 does not support this feature; you need to upgrade to Hadoop 3.2.
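
If you do upgrade, a quick sanity check before retrying, assuming the hadoop client is on the PATH:

 hadoop version
 geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values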