
I am trying to submit a PySpark job through YarnClient and am getting the error below from the ResourceManager, with no further logs.

```
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:231)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:773)
    at org.apache.hadoop.fs.DelegateToFileSystem.setPermission(DelegateToFileSystem.java:218)
    at org.apache.hadoop.fs.FilterFs.setPermission(FilterFs.java:266)
    at org.apache.hadoop.fs.FileContext$11.next(FileContext.java:1008)
    at org.apache.hadoop.fs.FileContext$11.next(FileContext.java:1004)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.setPermission(FileContext.java:1011)
    at org.apache.hadoop.yarn.util.FSDownload$3.run(FSDownload.java:483)
    at org.apache.hadoop.yarn.util.FSDownload$3.run(FSDownload.java:481)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:481)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:419)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: https://.com:8090/cluster/app/application_1638972290118_64750 Then click on links to logs of each attempt. . Failing the application.
```

The cluster is fine, and other PySpark jobs run without issue. Please help.

Thanks in advance

Ramakrishna
    As output says, `For more detailed output, check the application tracking page: https://.... Then click on links to logs of each attempt`. Please give the more detailed logs – OneCricketeer Jan 17 '22 at 14:05
  • When I go to the application tracking page, it says `org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby` – Ramakrishna Jan 17 '22 at 16:46

1 Answer


What do you mean by "cluster is fine and other pyspark jobs running fine"? Did you run those jobs on YARN, or in standalone mode?

In any case, I think it's better to first check that your YARN cluster works on its own (without Spark).
You can do that with the Hadoop MapReduce examples jar:

```
yarn jar $HadoopDir/share/hadoop/mapreduce/hadoop-mapreduce-examples-$version.jar wordcount inputFilePath OutputDir
```
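Given the `StandbyException` in your trace, it is also worth confirming which HA daemons are currently active: if the client configuration points at a standby NameNode or ResourceManager, submissions can fail exactly like this. A sketch, assuming HA service IDs `nn1`/`nn2` and `rm1`/`rm2` (substitute the IDs defined in your own `hdfs-site.xml` and `yarn-site.xml`):

```shell
# Ask each NameNode for its HA state -- exactly one should report "active"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Same check for the ResourceManagers
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```

If both report `standby`, a failover never completed and the cluster itself needs attention before any Spark job will submit cleanly.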

Check link 1 and link 2 too. They may help.