I am trying to submit a PySpark job through the YARN client, and it fails with the error below from the ResourceManager, with no further logs.
```
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:231)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:773)
    at org.apache.hadoop.fs.DelegateToFileSystem.setPermission(DelegateToFileSystem.java:218)
    at org.apache.hadoop.fs.FilterFs.setPermission(FilterFs.java:266)
    at org.apache.hadoop.fs.FileContext$11.next(FileContext.java:1008)
    at org.apache.hadoop.fs.FileContext$11.next(FileContext.java:1004)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.setPermission(FileContext.java:1011)
    at org.apache.hadoop.yarn.util.FSDownload$3.run(FSDownload.java:483)
    at org.apache.hadoop.yarn.util.FSDownload$3.run(FSDownload.java:481)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:481)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:419)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

For more detailed output, check the application tracking page: https://.com:8090/cluster/app/application_1638972290118_64750 Then click on links to logs of each attempt. . Failing the application.
```
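For context, the submission looks roughly like this; the script name, queue, and resource settings are placeholders, not the real values:

```sh
# Placeholder submission command -- my_job.py, the queue, and the
# resource sizes are illustrative, not the actual job's values.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --queue default \
  --num-executors 4 \
  --executor-memory 4g \
  my_job.py
```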
The cluster is otherwise healthy, and other PySpark jobs run fine.
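Since the message mentions StandbyException, I can also check the HA state of the NameNodes and ResourceManagers from the edge node. The service IDs `nn1`/`nn2` and `rm1`/`rm2` below are examples; the real ones come from `dfs.ha.namenodes.<nameservice>` in hdfs-site.xml and `yarn.resourcemanager.ha.rm-ids` in yarn-site.xml:

```sh
# Report which NameNode is active vs. standby
# (nn1/nn2 are example service IDs from hdfs-site.xml)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Report ResourceManager HA state
# (rm1/rm2 are example IDs from yarn-site.xml)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```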
Please help. Thanks in advance.